Compare commits

3 Commits

30e563bd23  Elizabeth Fischer  2016-08-30 23:48:34 -04:00
    Edits...

5450f9b917  Elizabeth Fischer  2016-08-29 23:06:38 -04:00
    1. Rewrite of "Getting Started": everything you need to set up Spack, even
       on old/ornery systems. This is not a reference manual section; items
       covered here are covered more systematically elsewhere in the manual.
       Some sections were moved here from elsewhere.
    2. Beginning to write three methods of application developer support. Two
       methods were moved from elsewhere.

72a3d35d0c  Elizabeth Fischer  2016-08-29 18:26:10 -04:00
    Transferred pending changes from efischer/develop
1381 changed files with 37172 additions and 79272 deletions


@@ -1,8 +1,6 @@
# -*- conf -*-
# .coveragerc to control coverage.py
[run]
parallel = True
concurrency = multiprocessing
branch = True
source = lib
omit =

.flake8

@@ -5,10 +5,8 @@
# rationale is.
#
# Let people line things up nicely:
# - E129: visually indented line with same indent as next logical line
# - E221: multiple spaces before operator
# - E241: multiple spaces after ','
# - E272: multiple spaces before keyword
# - E241: multiple spaces after ,
#
# Let people use terse Python features:
# - E731 : lambda expressions
@@ -17,10 +15,9 @@
# - F403: disable wildcard import
#
# These are required to get the package.py files to test clean.
# - F405: `name` may be undefined, or undefined from star imports: `module`
# - F821: undefined name `name` (needed for cmake, configure, etc.)
# - F999: syntax error in doctest
# - F821: undefined name (needed for cmake, configure, etc.)
# - F999: name may be undefined or undefined from star imports.
#
[flake8]
ignore = E129,E221,E241,E272,E731,F403,F405,F821,F999
ignore = E129,E221,E241,E272,E731,F403,F821,F999,F405
max-line-length = 79

.gitignore

@@ -1,4 +1,3 @@
/db
/var/spack/stage
/var/spack/cache
/var/spack/repos/*/index.yaml
@@ -13,12 +12,6 @@
/etc/spackconfig
/share/spack/dotkit
/share/spack/modules
/share/spack/lmod
/TAGS
*.swp
/htmlcov
.coverage
#*
.#*
/.cache
/bin/spackc


@@ -1,56 +1,20 @@
Abhinav Bhatele <bhatele@llnl.gov> Abhinav Bhatele <bhatele@gmail.com>
Adam Moody <moody20@llnl.gov> Adam T. Moody <moody20@llnl.gov>
Alfredo Gimenez <gimenez1@llnl.gov> Alfredo Gimenez <alfredo.gimenez@gmail.com>
Alfredo Gimenez <gimenez1@llnl.gov> Alfredo Adolfo Gimenez <alfredo.gimenez@gmail.com>
Andrew Williams <williamsa89@cardiff.ac.uk> Andrew Williams <andrew@alshain.org.uk>
Ben Boeckel <ben.boeckel@kitware.com> Ben Boeckel <mathstuf@gmail.com>
Ben Boeckel <ben.boeckel@kitware.com> Ben Boeckel <mathstuf@users.noreply.github.com>
Benedikt Hegner <hegner@cern.ch> Benedikt Hegner <benedikt.hegner@cern.ch>
Brett Viren <bv@bnl.gov> Brett Viren <brett.viren@gmail.com>
David Boehme <boehme3@llnl.gov> David Boehme <boehme3@sierra324.llnl.gov>
David Boehme <boehme3@llnl.gov> David Boehme <boehme3@sierra648.llnl.gov>
David Poliakoff <poliakoff1@llnl.gov> David Poliakoff <david.poliakoff@gmail.com>
Dhanannjay Deo <dhanannjay.deo@kitware.com> Dhanannjay 'Djay' Deo <dhanannjay.deo@kitware.com>
Elizabeth Fischer <elizabeth.fischer@columbia.edu> Elizabeth F <elizabeth.fischer@columbia.edu>
Elizabeth Fischer <elizabeth.fischer@columbia.edu> Elizabeth F <rpf2116@columbia.edu>
Elizabeth Fischer <elizabeth.fischer@columbia.edu> Elizabeth Fischer <rpf2116@columbia.edu>
Elizabeth Fischer <elizabeth.fischer@columbia.edu> citibeth <rpf2116@columbia.edu>
Geoffrey Oxberry <oxberry1@llnl.gov> Geoffrey Oxberry <goxberry@gmail.com>
Glenn Johnson <glenn-johnson@uiowa.edu> Glenn Johnson <gjohnson@argon-ohpc.hpc.uiowa.edu>
Glenn Johnson <glenn-johnson@uiowa.edu> Glenn Johnson <glennpj@gmail.com>
Gregory Becker <becker33@llnl.gov> Gregory Becker <becker33.llnl.gov>
Gregory Becker <becker33@llnl.gov> becker33 <becker33.llnl.gov>
Gregory Becker <becker33@llnl.gov> becker33 <becker33@llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Greg Lee <lee218@llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@cab687.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@cab690.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@catalyst159.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@surface86.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory Lee <lee218@llnl.gov>
Ian Lee <lee1001@llnl.gov> Ian Lee <IanLee1521@gmail.com>
James Wynne III <wynnejr@ornl.gov> James Riley Wynne III <wynnejr@ornl.gov>
James Wynne III <wynnejr@ornl.gov> James Wynne III <wynnejr@gpujake.com>
Joachim Protze <protze@rz.rwth-aachen.de> jprotze <protze@rz.rwth-aachen.de>
Kelly (KT) Thompson <kgt@lanl.gov> <kellyt@MENE.localdomain>
Kelly (KT) Thompson <kgt@lanl.gov> Kelly Thompson <KineticTheory@users.noreply.github.com>
Kevin Brandstatter <kjbrandstatter@gmail.com> Kevin Brandstatter <kbrandst@hawk.iit.edu>
Luc Jaulmes <luc.jaulmes@bsc.es> Luc Jaulmes <jaulmes1@llnl.gov>
Mario Melara <maamelara@gmail.com> Mario Melara <mamelara@genepool1.nersc.gov>
Mark Miller <miller86@llnl.gov> miller86 <miller86@llnl.gov>
Massimiliano Culpo <massimiliano.culpo@epfl.ch> Massimiliano Culpo <massimiliano.culpo@googlemail.com>
Massimiliano Culpo <massimiliano.culpo@epfl.ch> alalazo <massimiliano.culpo@googlemail.com>
Mayeul d'Avezac <m.davezac@ucl.ac.uk> Mayeul d'Avezac <mdavezac@gmail.com>
Mitchell Devlin <mitchell.r.devlin@gmail.com> Mitchell Devlin <devlin@blogin4.lcrc.anl.gov>
Nicolas Richart <nicolas.richart@epfl.ch> Nicolas <nrichart@users.noreply.github.com>
Nicolas Richart <nicolas.richart@epfl.ch> Nicolas Richart <nrichart@users.noreply.github.com>
Peter Scheibel <scheibel1@llnl.gov> scheibelp <scheibel1@llnl.gov>
Robert D. French <frenchrd@ornl.gov> Robert D. French <robert@robertdfrench.me>
Robert D. French <frenchrd@ornl.gov> Robert.French <frenchrd@ornl.gov>
Robert D. French <frenchrd@ornl.gov> robertdfrench <frenchrd@ornl.gov>
Saravan Pantham <saravan.pantham@gmail.com> Saravan Pantham <pantham1@surface86.llnl.gov>
Stephen Herbein <sherbein@udel.edu> Stephen Herbein <stephen272@gmail.com>
Todd Gamblin <tgamblin@llnl.gov> George Todd Gamblin <gamblin2@llnl.gov>
Todd Gamblin <tgamblin@llnl.gov> Todd Gamblin <gamblin2@llnl.gov>
Tom Scogland <tscogland@llnl.gov> Tom Scogland <scogland1@llnl.gov>
Tom Scogland <tscogland@llnl.gov> Tom Scogland <tom.scogland@gmail.com>
Tzanio Kolev <tzanio@llnl.gov> Tzanio <tzanio@llnl.gov>
Todd Gamblin <tgamblin@llnl.gov> George Todd Gamblin <gamblin2@llnl.gov>
Todd Gamblin <tgamblin@llnl.gov> Todd Gamblin <gamblin2@llnl.gov>
Adam Moody <moody20@llnl.gov> Adam T. Moody <moody20@llnl.gov>
Alfredo Gimenez <gimenez1@llnl.gov> Alfredo Gimenez <alfredo.gimenez@gmail.com>
David Boehme <boehme3@llnl.gov> David Boehme <boehme3@sierra324.llnl.gov>
David Boehme <boehme3@llnl.gov> David Boehme <boehme3@sierra648.llnl.gov>
Kevin Brandstatter <kjbrandstatter@gmail.com> Kevin Brandstatter <kbrandst@hawk.iit.edu>
Luc Jaulmes <luc.jaulmes@bsc.es> Luc Jaulmes <jaulmes1@llnl.gov>
Saravan Pantham <saravan.pantham@gmail.com> Saravan Pantham <pantham1@surface86.llnl.gov>
Tom Scogland <tscogland@llnl.gov> Tom Scogland <scogland1@llnl.gov>
Tom Scogland <tscogland@llnl.gov> Tom Scogland <tom.scogland@gmail.com>
Joachim Protze <protze@rz.rwth-aachen.de> jprotze <protze@rz.rwth-aachen.de>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@surface86.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@cab687.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@cab690.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@catalyst159.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory Lee <lee218@llnl.gov>
Massimiliano Culpo <massimiliano.culpo@epfl.ch> Massimiliano Culpo <massimiliano.culpo@googlemail.com>
Massimiliano Culpo <massimiliano.culpo@epfl.ch> alalazo <massimiliano.culpo@googlemail.com>
Mark Miller <miller86@llnl.gov> miller86 <miller86@llnl.gov>


@@ -1,72 +1,27 @@
#=============================================================================
# Project settings
#=============================================================================
language: python
# Only build master and develop on push; do not build every branch.
branches:
only:
- master
- develop
- /^releases\/.*$/
#=============================================================================
# Build matrix
#=============================================================================
python:
- 2.6
- 2.7
- "2.6"
- "2.7"
env:
- TEST_SUITE=unit
- TEST_SUITE=flake8
- TEST_SUITE=doc
- TEST_TYPE=unit
- TEST_TYPE=flake8
# Exclude flake8 from python 2.6
matrix:
# Flake8 and Sphinx no longer support Python 2.6, and one run is enough.
exclude:
- python: 2.6
env: TEST_SUITE=flake8
- python: 2.6
env: TEST_SUITE=doc
# Explicitly include an OS X build with homebrew's python.
# Works around Python issues on Travis for OSX, described here:
# http://blog.fizyk.net.pl/blog/running-python-tests-on-traviss-osx-workers.html
include:
- os: osx
language: generic
env: TEST_SUITE=unit
- python: "2.6"
env: TEST_TYPE=flake8
#=============================================================================
# Environment
#=============================================================================
# Use new Travis infrastructure (Docker can't sudo yet)
sudo: false
# Docs need graphviz to build
addons:
apt:
packages:
- gfortran
- graphviz
- libyaml-dev
# Work around Travis's lack of support for Python on OSX
before_install:
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew ls --versions python > /dev/null || brew install python; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew ls --versions gcc > /dev/null || brew install gcc; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then virtualenv venv; fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then source venv/bin/activate; fi
# Install various dependencies
# Install coveralls to obtain code coverage
install:
- pip install --upgrade coveralls
- pip install --upgrade flake8
- pip install --upgrade sphinx
- pip install --upgrade mercurial
- "pip install coveralls"
- "pip install flake8"
before_script:
before_install:
# Need this for the git tests to succeed.
- git config --global user.email "spack@example.com"
- git config --global user.name "Test User"
@@ -74,19 +29,18 @@ before_script:
# Need this to be able to compute the list of changed files
- git fetch origin develop:develop
#=============================================================================
# Building
#=============================================================================
script: share/spack/qa/run-$TEST_SUITE-tests
script:
# Run unit tests with code coverage plus install libdwarf
- 'if [ "$TEST_TYPE" = "unit" ]; then share/spack/qa/run-unit-tests; fi'
# Run flake8 code style checks.
- 'if [ "$TEST_TYPE" = "flake8" ]; then share/spack/qa/run-flake8; fi'
after_success:
- if [[ $TEST_SUITE == unit && $TRAVIS_PYTHON_VERSION == 2.7 && $TRAVIS_OS_NAME == "linux" ]]; then coveralls; fi
- 'if [ "$TEST_TYPE" = "unit" ] && [ "$TRAVIS_PYTHON_VERSION" = "2.7" ]; then coveralls; fi'
#=============================================================================
# Notifications
#=============================================================================
notifications:
email:
recipients: tgamblin@llnl.gov
recipients:
- tgamblin@llnl.gov
on_success: change
on_failure: always


@@ -20,7 +20,7 @@ written in pure Python, and specs allow package authors to write a
single build script for many different builds of the same package.
See the
[Feature Overview](http://spack.readthedocs.io/en/latest/features.html)
[Feature Overview](http://software.llnl.gov/spack/features.html)
for examples and highlights.
To install spack and install your first package:
@@ -32,12 +32,9 @@ To install spack and install your first package:
Documentation
----------------
[**Full documentation**](http://spack.readthedocs.io/) for Spack is
[**Full documentation**](http://software.llnl.gov/spack) for Spack is
the first place to look.
We've also got a [**Spack 101 Tutorial**](http://spack.readthedocs.io/en/latest/tutorial_sc16.html),
so you can learn Spack yourself, or teach users at your own site.
See also:
* [Technical paper](http://www.computer.org/csdl/proceedings/sc/2015/3723/00/2807623.pdf) and
[slides](https://tgamblin.github.io/files/Gamblin-Spack-SC15-Talk.pdf) on Spack's design and implementation.
@@ -66,11 +63,17 @@ Contributing to Spack is relatively easy. Just send us a
When you send your request, make ``develop`` the destination branch on the
[Spack repository](https://github.com/LLNL/spack).
Your PR must pass Spack's unit tests and documentation tests, and must be
[PEP 8](https://www.python.org/dev/peps/pep-0008/) compliant.
Before you send a PR, your code should pass the following checks:
* Your contribution will need to pass the `spack test` command.
Run this before submitting your PR.
* Also run the `share/spack/qa/run-flake8` script to check for PEP8 compliance.
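For example, both checks can be run from the root of your Spack clone (a sketch; output omitted):

```console
$ ./bin/spack test
$ ./share/spack/qa/run-flake8
```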
To encourage contributions and readability by a broad audience,
Spack uses the [PEP8](https://www.python.org/dev/peps/pep-0008/) coding
standard with [a few exceptions](https://github.com/LLNL/spack/blob/develop/.flake8).
We enforce these guidelines with [Travis CI](https://travis-ci.org/LLNL/spack).
To run these tests locally, and for helpful tips on git, see our
[Contribution Guide](http://spack.readthedocs.io/en/latest/contribution_guide.html).
Spack uses a rough approximation of the [Git
Flow](http://nvie.com/posts/a-successful-git-branching-model/)


@@ -111,12 +111,8 @@ while read line && ((lines < 2)) ; do
done < "$script"
# Invoke any interpreter found, or raise an error if none was found.
if [[ -n "$interpreter" ]]; then
if [[ "${interpreter##*/}" = "perl" ]]; then
exec $interpreter -x "$@"
else
exec $interpreter "$@"
fi
if [ -n "$interpreter" ]; then
exec $interpreter "$@"
else
echo "error: sbang found no interpreter in $script"
exit 1


@@ -25,13 +25,12 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
import sys
if (sys.version_info[0] > 2) or (sys.version_info[:2] < (2, 6)):
if not sys.version_info[:2] >= (2, 6):
v_info = sys.version_info[:3]
sys.exit("Spack requires Python 2.6 or 2.7. "
sys.exit("Spack requires Python 2.6 or higher. "
"This is Python %d.%d.%d." % v_info)
import os
import inspect
# Find spack's location and its prefix.
SPACK_FILE = os.path.realpath(os.path.expanduser(__file__))
@@ -41,26 +40,24 @@ SPACK_PREFIX = os.path.dirname(os.path.dirname(SPACK_FILE))
# Allow spack libs to be imported in our scripts
SPACK_LIB_PATH = os.path.join(SPACK_PREFIX, "lib", "spack")
sys.path.insert(0, SPACK_LIB_PATH)
# Add external libs
SPACK_EXTERNAL_LIBS = os.path.join(SPACK_LIB_PATH, "external")
sys.path.insert(0, SPACK_EXTERNAL_LIBS)
import warnings
# Avoid warnings when nose is installed with the python exe being used to run
# spack. Note this must be done after Spack's external libs directory is added
# to sys.path.
with warnings.catch_warnings():
warnings.filterwarnings("ignore", ".*nose was already imported")
import nose
# Quick and dirty check to clean orphaned .pyc files left over from
# previous revisions. These files were present in earlier versions of
# Spack, were removed, but shadow system modules that Spack still
# imports. If we leave them, Spack will fail in mysterious ways.
# TODO: more elegant solution for orphaned pyc files.
orphaned_pyc_files = [
os.path.join(SPACK_EXTERNAL_LIBS, 'functools.pyc'),
os.path.join(SPACK_EXTERNAL_LIBS, 'ordereddict.pyc'),
os.path.join(SPACK_LIB_PATH, 'spack', 'platforms', 'cray_xc.pyc'),
os.path.join(SPACK_LIB_PATH, 'spack', 'cmd', 'package-list.pyc'),
os.path.join(SPACK_LIB_PATH, 'spack', 'cmd', 'test-install.pyc'),
os.path.join(SPACK_LIB_PATH, 'spack', 'cmd', 'url-parse.pyc'),
os.path.join(SPACK_LIB_PATH, 'spack', 'test', 'yaml.pyc')
]
orphaned_pyc_files = [os.path.join(SPACK_EXTERNAL_LIBS, n)
for n in ('functools.pyc', 'ordereddict.pyc')]
for pyc_file in orphaned_pyc_files:
if not os.path.exists(pyc_file):
continue
@@ -113,8 +110,6 @@ parser.add_argument('-p', '--profile', action='store_true',
help="Profile execution using cProfile.")
parser.add_argument('-v', '--verbose', action='store_true',
help="Print additional output during builds")
parser.add_argument('-s', '--stacktrace', action='store_true',
help="Add stacktrace information to all printed statements")
parser.add_argument('-V', '--version', action='version',
version="%s" % spack.spack_version)
@@ -122,28 +117,33 @@ parser.add_argument('-V', '--version', action='version',
# subparser for setup.
subparsers = parser.add_subparsers(metavar='SUBCOMMAND', dest="command")
import spack.cmd
for cmd in spack.cmd.commands:
module = spack.cmd.get_module(cmd)
cmd_name = cmd.replace('_', '-')
subparser = subparsers.add_parser(cmd_name, help=module.description)
subparser = subparsers.add_parser(cmd, help=module.description)
module.setup_parser(subparser)
# Just print help and exit if run with no arguments at all
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
def _main(args, unknown_args):
# actually parse the args.
args = parser.parse_args()
def main():
# Set up environment based on args.
tty.set_verbose(args.verbose)
tty.set_debug(args.debug)
tty.set_stacktrace(args.stacktrace)
spack.debug = args.debug
if spack.debug:
import spack.util.debug as debug
debug.register_interrupt_handler()
# Run any available pre-run hooks
spack.hooks.pre_run()
from spack.yaml_version_check import check_yaml_versions
check_yaml_versions()
spack.spack_working_dir = working_dir
if args.mock:
@@ -153,25 +153,12 @@ def _main(args, unknown_args):
# If the user asked for it, don't check ssl certs.
if args.insecure:
tty.warn("You asked for --insecure. Will NOT check SSL certificates.")
spack.insecure = True
spack.curl.add_default_arg('-k')
# Try to load the particular command asked for and run it
command = spack.cmd.get_command(args.command.replace('-', '_'))
# Allow commands to inject an optional argument and get unknown args
# if they want to handle them.
info = dict(inspect.getmembers(command))
varnames = info['__code__'].co_varnames
argcount = info['__code__'].co_argcount
# Actually execute the command
command = spack.cmd.get_command(args.command)
try:
if argcount == 3 and varnames[2] == 'unknown_args':
return_val = command(parser, args, unknown_args)
else:
if unknown_args:
tty.die('unrecognized arguments: %s' % ' '.join(unknown_args))
return_val = command(parser, args)
return_val = command(parser, args)
except SpackError as e:
e.die()
except KeyboardInterrupt:
@@ -187,26 +174,11 @@ def _main(args, unknown_args):
tty.die("Bad return value from command %s: %s"
% (args.command, return_val))
def main(args):
# Just print help and exit if run with no arguments at all
if len(args) == 1:
parser.print_help()
sys.exit(1)
# actually parse the args.
args, unknown = parser.parse_known_args()
if args.profile:
import cProfile
cProfile.runctx('_main(args, unknown)', globals(), locals(),
sort='time')
elif args.pdb:
import pdb
pdb.runctx('_main(args, unknown)', globals(), locals())
else:
_main(args, unknown)
if __name__ == '__main__':
main(sys.argv)
if args.profile:
import cProfile
cProfile.run('main()', sort='time')
elif args.pdb:
import pdb
pdb.run('main()')
else:
main()


@@ -1,68 +0,0 @@
# -------------------------------------------------------------------------
# This is the default spack configuration file.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/config.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/config.yaml
# -------------------------------------------------------------------------
config:
# This is the path to the root of the Spack install tree.
# You can use $spack here to refer to the root of the spack instance.
install_tree: $spack/opt/spack
# Locations where different types of modules should be installed.
module_roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit
# Temporary locations Spack can try to use for builds.
#
# Spack will use the first one it finds that exists and is writable.
# You can use $tempdir to refer to the system default temp directory
# (as returned by tempfile.gettempdir()).
#
# A value of $spack/var/spack/stage indicates that Spack should run
# builds directly inside its install directory without staging them in
# temporary space.
#
# The build stage can be purged with `spack purge --stage`.
build_stage:
- $tempdir
- /nfs/tmp2/$user
- $spack/var/spack/stage
# Cache directory for already-downloaded source tarballs and archived
# repositories. This can be purged with `spack purge --downloads`.
source_cache: $spack/var/spack/cache
# Cache directory for miscellaneous files, like the package index.
# This can be purged with `spack purge --misc-cache`
misc_cache: ~/.spack/cache
# If this is false, tools like curl that use SSL will not verify
# certificates (e.g., curl will use the -k option).
verify_ssl: true
# If set to true, Spack will always check checksums after downloading
# archives. If false, Spack skips the checksum step.
checksum: true
# If set to true, `spack install` and friends will NOT clean
# potentially harmful variables from the build environment. Use wisely.
dirty: false


@@ -1,18 +0,0 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
all:
compiler: [clang, gcc, intel]


@@ -24,8 +24,6 @@ modules:
- MANPATH
share/man:
- MANPATH
share/aclocal:
- ACLOCAL_PATH
lib:
- LIBRARY_PATH
- LD_LIBRARY_PATH


@@ -15,9 +15,7 @@
# -------------------------------------------------------------------------
packages:
all:
compiler: [gcc, intel, pgi, clang, xl, nag]
providers:
mpi: [openmpi, mpich]
blas: [openblas]
lapack: [openblas]
pil: [py-pillow]


@@ -1,5 +1,4 @@
package_list.rst
command_index.rst
spack*.rst
modules.rst
_build


@@ -2,13 +2,12 @@
#
# You can set these variables from the command line.
SPHINXOPTS = -E
JOBS ?= $(shell python -c 'import multiprocessing; print multiprocessing.cpu_count()')
SPHINXBUILD = sphinx-build -j $(JOBS)
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
export PYTHONPATH := ../../spack:$(PYTHONPATH)
export PYTHONPATH = ../../spack
APIDOC_FILES = spack*.rst
# Internal variables.
@@ -22,6 +21,24 @@ I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
all: html
#
# This autogenerates a package list.
#
package_list:
spack package-list > package_list.rst
#
# Generate a command index
#
command_index:
cp command_index.in command_index.rst
echo >> command_index.rst
grep -ho '.. _spack-.*:' *rst \
| perl -pe 's/.. _([^:]*):/ * :ref:`\1`/' \
| sort >> command_index.rst
custom_targets: package_list command_index
#
# This creates a git repository and commits generated html docs.
# It them pushes the new branch into THIS repository as gh-pages.
@@ -41,20 +58,9 @@ gh-pages: _build/html
git push -f $$root master:gh-pages && \
rm -rf .git
# This version makes gh-pages into a single page that redirects
# to spack.readthedocs.io
gh-pages-redirect:
root="$$(git rev-parse --show-toplevel)" && \
cd _gh_pages_redirect && \
rm -rf .git && \
git init && \
git add . && \
git commit -m "Spack Documentation" && \
git push -f $$root master:gh-pages && \
rm -rf .git
upload:
rsync -avz --rsh=ssh --delete _build/html/ cab:/usr/global/web-pages/lc/www/adept/docs/spack
git push -f origin gh-pages
git push -f github gh-pages
apidoc:
@@ -83,10 +89,10 @@ help:
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -f package_list.rst command_index.rst modules.rst
-rm -f package_list.rst command_index.rst
-rm -rf $(BUILDDIR)/* $(APIDOC_FILES)
html:
html: apidoc custom_targets
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."


@@ -1,10 +0,0 @@
<html>
<head>
<meta http-equiv="refresh" content="0; url=http://spack.readthedocs.io/" />
</head>
<body>
<p>
This page has moved to <a href="http://spack.readthedocs.io/">http://spack.readthedocs.io/</a>
</p>
</body>
</html>

File diff suppressed because it is too large.


@@ -1,168 +0,0 @@
.. _build-settings:
======================================
Build customization
======================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., compilers, MPI, or BLAS),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* installations of certain software.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml
packages:
package1:
# settings for package1
package2:
# settings for package2
# ...
all:
# settings that apply to all packages.
So you can either set build preferences *specifically* for one package,
or you can specify that certain settings should apply to all packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file found
in a Spack installation's ``etc/spack/`` or a user's ``~/.spack/``
directory. Here's an example of an external configuration:
.. code-block:: yaml
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with gcc,
one built with gcc and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. ``packages.yaml`` can also be used to specify modules.
Each ``packages.yaml`` begins with a ``packages:`` token, followed
by a list of package names. To specify externals, add a ``paths`` or ``modules``
token under the package name, which lists externals in a
``spec: /path`` or ``spec: module-name`` format. Each spec should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may never be built.
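For illustration, here is a sketch of the ``modules`` form described
above (the spec and module name are hypothetical):

.. code-block:: yaml

   packages:
     openmpi:
       modules:
         openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: openmpi-1.6.5-intel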
The packages configuration can tell Spack to use an external location
for certain package versions, but it does not restrict Spack to using
external packages. In the above example, if OpenMPI 1.8.4 becomes
available, Spack may choose to start building and linking with that version
rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI, and it will instead always rely on a pre-built
OpenMPI. Similar to ``paths``, ``buildable`` is specified as a property under
a package name.
If an external module is specified as not buildable, then Spack will load the
external module into the build environment, where it can be used for linking.
The ``buildable`` flag does not need to be paired with external packages.
It could also be used alone to forbid packages that may be
buggy or otherwise undesirable.
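As a minimal sketch of that standalone use (the package name here is
hypothetical):

.. code-block:: yaml

   packages:
     some-unwanted-package:
       buildable: False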
.. _concretization-preferences:
--------------------------
Concretization Preferences
--------------------------
Spack can be configured to prefer certain compilers, package
versions, dependencies, and variants during concretization.
The preferred configuration can be controlled via the
``~/.spack/packages.yaml`` file for user configurations, or the
``etc/spack/packages.yaml`` site configuration.
Here's an example packages.yaml file that sets preferred packages:
.. code-block:: yaml
packages:
opencv:
compiler: [gcc@4.9]
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, gcc@4.6:, intel, clang, pgi]
providers:
mpi: [mvapich, mpich, openmpi]
At a high level, this example is specifying how packages should be
concretized. The opencv package should prefer using gcc 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich for
its MPI and gcc 4.4.7 (except for opencv, which overrides this by preferring gcc 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
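For example, given the preferences above, an explicit request on the
command line still wins out (a sketch; the compiler choice is
illustrative):

.. code-block:: console

   $ spack install opencv%clang~debug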
Each packages.yaml file begins with the string ``packages:`` and
package names are specified on the next level. The special string ``all``
applies settings to each package. Underneath each package name is
one or more components: ``compiler``, ``variants``, ``version``,
or ``providers``. Each component has an ordered list of spec
``constraints``, with earlier entries in the list being preferred over
later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4
since version 2.4 of gperftools is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depend_on`` (e.g., mpi) and a list of rules for fulfilling that
dependency.


@@ -0,0 +1,167 @@
Using Spack for CMake-based Development
==========================================
These are instructions on how to use Spack to aid in the development
of a CMake-based project. Spack is used to help find the dependencies
for the project, configure it at development time, and then package it
in a way that others can install. Using Spack for CMake-based
development consists of three parts:
1. Setting up the CMake build in your software
2. Writing the Spack Package
3. Using it from Spack.
Setting Up the CMake Build
---------------------------------------
You should follow standard CMake conventions in setting up your
software; your CMake build should NOT depend on or require Spack to
build. See here for an example:
https://github.com/citibeth/icebin
Note that there's one exception here to the rule I mentioned above.
In ``CMakeLists.txt``, I have the following line::
include_directories($ENV{CMAKE_TRANSITIVE_INCLUDE_PATH})
This is a hook into Spack, and it ensures that all transitive
dependencies are included in the include path. It's not needed if
everything is in one tree, but it is (sometimes) in the Spack world;
when running without Spack, it has no effect.
Note that this "feature" is controversial, could break with future
versions of GNU ld, and is probably not the best approach. The best
practice is to make sure that anything you #include is listed as
a dependency in your CMakeLists.txt.
To be more specific: if you #include something from package A and an
installed HEADER FILE in A #includes something from package B, then
you should also list B as a dependency in your CMake build. If you
depend on A but header files exported by A do NOT #include things from
B, then you do NOT need to list B as a dependency --- even if linking
to A links in libB.so as well.
I also recommend that you set up your CMake build to use RPATHs
correctly. Not only is this good practice, but it also ensures
that your package will build the same with or without ``spack
install``.
Writing the Spack Package
---------------------------------------
Now that you have a CMake build, you want to tell Spack how to
configure it. This is done by writing a Spack package for your
software. See here for example:
https://github.com/citibeth/spack/blob/efischer/develop/var/spack/repos/builtin/packages/icebin/package.py
You need to subclass ``CMakePackage``, as is done in this example.
This enables advanced features of Spack for helping you in configuring
your software (keep reading...). Instead of an ``install()`` method
used when subclassing ``Package``, you write ``configure_args()``.
See here for more info on how this works:
https://github.com/LLNL/spack/pull/543/files
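As a rough sketch (the names, URL, version, and checksum below are
placeholders, not the real IceBin package), such a package might look
like::

    from spack import *

    class Myproject(CMakePackage):
        """Hypothetical CMake-based project (illustration only)."""

        homepage = "https://github.com/example/myproject"
        url = "https://github.com/example/myproject/tarball/v0.1.0"

        # Hypothetical version; compute the real checksum with md5sum.
        version('0.1.0', '0123456789abcdef0123456789abcdef')

        depends_on('netcdf')

        def configure_args(self):
            # Extra arguments passed to cmake; Spack itself supplies
            # CMAKE_INSTALL_PREFIX and related boilerplate.
            return ['-DUSE_EVERYTHING=YES']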
NOTE: if your software is not publicly available, you do not need to
set the URL or version. Or you can set up bogus URLs and
versions... whatever causes Spack to not crash.
Using it from Spack
--------------------------------
Now that you have a Spack package, you can get Spack to setup your
CMake project for you. Use the following to setup, configure and
build your project::
cd myproject
spack spconfig myproject@local
mkdir build; cd build
../spconfig.py ..
make
make install
Everything here should look pretty familiar from a CMake
perspective, except that ``spack spconfig`` creates the file
``spconfig.py``, which calls CMake with arguments appropriate for your
Spack configuration. Think of it as the equivalent to running a bunch
of ``spack location -i`` commands. You will run ``spconfig.py``
instead of running CMake directly.
If your project is publicly available (e.g., on GitHub), then you can
ALSO use this setup to "just install" a release version without going
through the manual configuration/build step. Just do:
1. Put tag(s) on the version(s) in your GitHub repo you want to be release versions.
2. Set the ``url`` in your ``package.py`` to download a tarball for
the appropriate version. (GitHub will give you a tarball for any
version in the repo, if you tickle it the right way). For example::
https://github.com/citibeth/icebin/tarball/v0.1.0
Set up versions as appropriate in your ``package.py``. (Manually
download the tarball and run ``md5sum`` to determine the
appropriate checksum for it).
3. Now you should be able to say ``spack install myproject@version``
and things "just work."
NOTE: in order to use the features outlined here, you
currently need to use the following branch of Spack:
https://github.com/citibeth/spack/tree/efischer/develop
There is a pull request open on this branch (
https://github.com/LLNL/spack/pull/543 ) and we are working to get it
integrated into the main ``develop`` branch.
Activating your Software
-------------------------------------
Once you've built your software, you will want to load it up. You can
use ``spack load mypackage@local`` for that in your ``.bashrc``, but
that is slow. Try stuff like the following instead:
The following command will load the Spack-installed packages needed
for basic Python use of IceBin::
module load `spack module find tcl icebin netcdf cmake@3.5.1`
module load `spack module find --dependencies tcl py-basemap py-giss`
You can speed up shell startup by generating these ``module load`` commands once and caching them in a script:
1. Cut-n-paste the script ``make_spackenv``::
#!/bin/sh
#
# Generate commands to load the Spack environment
SPACKENV=$HOME/spackenv.sh
spack module find --shell tcl git icebin@local ibmisc netcdf cmake@3.5.1 >$SPACKENV
spack module find --dependencies --shell tcl py-basemap py-giss >>$SPACKENV
2. Add the following to your ``.bashrc`` file::
source $HOME/spackenv.sh
# Preferentially use your checked-out Python source
export PYTHONPATH=$HOME/icebin/pylib:$PYTHONPATH
3. Run ``sh make_spackenv`` whenever your Spack installation changes (including right now).
Giving Back
-------------------
If your software is publicly available, you should submit the
``package.py`` for it as a pull request to the main Spack GitHub
project. This will ensure that anyone can install your software
(almost) painlessly with a simple ``spack install`` command. See here
for an example of how this has produced detailed instructions that
successfully enabled collaborators to install complex software:
https://github.com/citibeth/icebin/blob/develop/README.rst


@@ -1,6 +1,7 @@
=============
Command Index
=============
.. _command_index:
Command index
=================
This is an alphabetical list of commands with links to the places they
appear in the documentation.


@@ -1,27 +1,26 @@
# flake8: noqa
##############################################################################
# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
# Copyright (c) 2013, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
# it under the terms of the GNU General Public License (as published by
# the Free Software Foundation) version 2.1 dated February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
# conditions of the GNU General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
# -*- coding: utf-8 -*-
#
@@ -38,85 +37,26 @@
import sys
import os
import re
import shutil
import subprocess
from glob import glob
from sphinx.apidoc import main as sphinx_apidoc
# -- Spack customizations -----------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('exts'))
sys.path.insert(0, os.path.abspath('../external'))
sys.path.append(os.path.abspath('..'))
# Add the Spack bin directory to the path so that we can use its output in docs.
spack_root = '../../..'
os.environ['SPACK_ROOT'] = spack_root
os.environ['PATH'] += '%s%s/bin' % (os.pathsep, spack_root)
os.environ['PATH'] += os.pathsep + '$SPACK_ROOT/bin'
# Get the spack version for use in the docs
spack_version = subprocess.Popen(
[spack_root + '/bin/spack', '-V'],
stderr=subprocess.PIPE).communicate()[1].strip().split('.')
# Set an environment variable so that colify will print output like it would to
# a terminal.
os.environ['COLIFY_SIZE'] = '25x120'
#
# Generate package list using spack command
#
with open('package_list.rst', 'w') as plist_file:
subprocess.Popen(
[spack_root + '/bin/spack', 'list', '--format=rst'], stdout=plist_file)
#
# Find all the `cmd-spack-*` references and add them to a command index
#
command_names = []
for filename in glob('*rst'):
with open(filename) as f:
for line in f:
match = re.match('.. _(cmd-spack-.*):', line)
if match:
command_names.append(match.group(1).strip())
shutil.copy('command_index.in', 'command_index.rst')
with open('command_index.rst', 'a') as index:
index.write('\n')
for cmd in sorted(command_names):
index.write(' * :ref:`%s`\n' % cmd)
# Run sphinx-apidoc
sphinx_apidoc(['-T', '-o', '.', '../spack'])
os.remove('modules.rst')
#
# Exclude everything in spack.__all__ from indexing. All of these
# symbols are imported from elsewhere in spack; their inclusion in
# __all__ simply allows package authors to use `from spack import *`.
# Excluding them ensures they're only documented in their "real" module.
#
# This also avoids issues where some of these symbols shadow core spack
# modules. Sphinx will complain about duplicate docs when this happens.
#
import fileinput, spack
handling_spack = False
for line in fileinput.input('spack.rst', inplace=1):
if handling_spack:
if not line.startswith(' :noindex:'):
print ' :noindex: %s' % ' '.join(spack.__all__)
handling_spack = False
if line.startswith('.. automodule::'):
handling_spack = (line == '.. automodule:: spack\n')
print line,
os.environ['COLIFY_SIZE'] = '25x80'
# Enable todo items
todo_include_todos = True


@@ -1,149 +0,0 @@
.. _config-yaml:
====================================
Basic settings in ``config.yaml``
====================================
Spack's basic configuration options are set in ``config.yaml``. You can
see the default settings by looking at
``etc/spack/defaults/config.yaml``:
.. literalinclude:: ../../../etc/spack/defaults/config.yaml
:language: yaml
These settings can be overridden in ``etc/spack/config.yaml`` or
``~/.spack/config.yaml``. See :ref:`configuration-scopes` for details.
.. _config-file-variables:
------------------------------
Config file variables
------------------------------
You may notice some variables prefixed with ``$`` in the settings above.
Spack understands several variables that can be used in values of
configuration parameters. They are:
* ``$spack``: path to the prefix of this spack installation
* ``$tempdir``: default system temporary directory (as specified in
Python's `tempfile.tempdir
<https://docs.python.org/2/library/tempfile.html#tempfile.tempdir>`_
variable).
* ``$user``: name of the current user
Note that, as with shell variables, you can write these as ``$varname``
or with braces to distinguish the variable from surrounding characters:
``${varname}``.
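For example, a hypothetical stage path under the system temporary
directory, written with braces to separate the variable from a suffix:

.. code-block:: yaml

   config:
     build_stage:
       - ${tempdir}/${user}-spack-stage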
--------------------
``install_tree``
--------------------
The location where Spack will install packages and their dependencies.
Default is ``$spack/opt/spack``.
--------------------
``module_roots``
--------------------
Controls where Spack installs generated module files. You can customize
the location for each type of module. e.g.:
.. code-block:: yaml
module_roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit
See :ref:`modules` for details.
--------------------
``build_stage``
--------------------
Spack is designed to run out of a user's home directory, and on many
systems the home directory is a (slow) network filesystem. On most systems,
building in a temporary filesystem results in faster builds than building
in the home directory. Usually, there is also more space available in
the temporary location than in the home directory. So, Spack tries to
create build stages in temporary space.
By default, Spack's ``build_stage`` is configured like this:
.. code-block:: yaml
build_stage:
- $tempdir
- /nfs/tmp2/$user
- $spack/var/spack/stage
This is an ordered list of paths that Spack should search when trying to
find a temporary directory for the build stage. The list is searched in
order, and Spack will use the first directory to which it has write access.
See :ref:`config-file-variables` for more on ``$tempdir`` and ``$spack``.
When Spack builds a package, it creates a temporary directory within the
``build_stage``, and it creates a symbolic link to that directory in
``$spack/var/spack/stage``. This is used to track the stage.
After a package is successfully installed, Spack deletes the temporary
directory it used to build. Unsuccessful builds are not deleted, but you
can manually purge them with :ref:`spack purge --stage
<cmd-spack-purge>`.
.. note::
The last item in the list is ``$spack/var/spack/stage``. If this is the
only writable directory in the ``build_stage`` list, Spack will build
*directly* in ``$spack/var/spack/stage`` and will not link to temporary
space.
--------------------
``source_cache``
--------------------
Location to cache downloaded tarballs and repositories. By default these
are stored in ``$spack/var/spack/cache`` and kept indefinitely. They can
be purged with :ref:`spack purge --downloads
<cmd-spack-purge>`.
--------------------
``misc_cache``
--------------------
Temporary directory to store long-lived cache files, such as indices of
packages available in repositories. Defaults to ``~/.spack/cache``. Can
be purged with :ref:`spack purge --misc-cache <cmd-spack-purge>`.
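All three of these caches can be cleared from the command line; a quick
sketch using the ``spack purge`` options referenced above:

.. code-block:: console

   $ spack purge --stage        # remove leftover build stages
   $ spack purge --downloads    # clear the source cache
   $ spack purge --misc-cache   # clear the misc cache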
--------------------
``verify_ssl``
--------------------
When set to ``true`` (the default), Spack will verify certificates of remote
hosts when making ``ssl`` connections. Set to ``false`` to disable, and
tools like ``curl`` will use their ``--insecure`` options. Disabling
this can expose you to attacks. Use at your own risk.
--------------------
``checksum``
--------------------
When set to ``true``, Spack verifies downloaded source code using a
checksum, and will refuse to build packages that it cannot verify. Set
to ``false`` to disable these checks. Disabling this can expose you to
attacks. Use at your own risk.
--------------------
``dirty``
--------------------
By default, Spack unsets variables in your environment that can change
the way packages build. This includes ``LD_LIBRARY_PATH``, ``CPATH``,
``LIBRARY_PATH``, ``DYLD_LIBRARY_PATH``, and others.
By default, builds are ``clean``, but on some machines, compilers and
other tools may need custom ``LD_LIBRARY_PATH`` settings to run. You can
set ``dirty`` to ``true`` to skip the cleaning step and make all builds
"dirty" by default. Be aware that this will reduce the reproducibility
of builds.


@@ -1,253 +1,236 @@
.. _configuration:
==============================
Configuration Files in Spack
==============================
Configuration
===================================
Spack has many configuration files. Here is a quick list of them, in
case you want to skip directly to specific docs:
.. _temp-space:
* :ref:`compilers.yaml <compiler-config>`
* :ref:`config.yaml <config-yaml>`
* :ref:`mirrors.yaml <mirrors>`
* :ref:`modules.yaml <modules>`
* :ref:`packages.yaml <build-settings>`
* :ref:`repos.yaml <repositories>`
Temporary space
----------------------------
-------------------------
YAML Format
-------------------------
.. warning:: Temporary space configuration will eventually be moved to
configuration files, but currently these settings are in
``lib/spack/spack/__init__.py``
Spack configuration files are written in YAML. We chose YAML because
it's human readable, but also versatile in that it supports dictionaries,
lists, and nested sections. For more details on the format, see `yaml.org
<http://yaml.org>`_ and `libyaml <http://pyyaml.org/wiki/LibYAML>`_.
Here is an example ``config.yaml`` file:
By default, Spack will try to do all of its building in temporary
space. There are two main reasons for this. First, Spack is designed
to run out of a user's home directory, and on many systems the home
directory is network mounted and potentially not a very fast
filesystem. We create build stages in a temporary directory to avoid
this. Second, many systems impose quotas on home directories, and
``/tmp`` or similar directories often have more available space. This
helps conserve space for installations in users' home directories.
You can customize temporary directories by editing
``lib/spack/spack/__init__.py``. Specifically, find this part of the file:
.. code-block:: python
# Whether to build in tmp space or directly in the stage_path.
# If this is true, then spack will make stage directories in
# a tmp filesystem, and it will symlink them into stage_path.
use_tmp_stage = True
# Locations to use for staging and building, in order of preference
# Use a %u to add a username to the stage paths here, in case this
# is a shared filesystem. Spack will use the first of these paths
# that it can create.
tmp_dirs = ['/nfs/tmp2/%u/spack-stage',
'/var/tmp/%u/spack-stage',
'/tmp/%u/spack-stage']
The ``use_tmp_stage`` variable controls whether Spack builds
**directly** inside the ``var/spack/`` directory. Normally, Spack
will try to find a temporary directory for a build, then it *symlinks*
that temporary directory into ``var/spack/`` so that you can keep
track of what temporary directories Spack is using.
The ``tmp_dirs`` variable is a list of paths Spack should search when
trying to find a temporary directory. They can optionally contain a
``%u``, which will substitute the current user's name into the path.
The list is searched in order, and Spack will create a temporary stage
in the first directory it finds to which it has write access. Add
more elements to the list to indicate where your own site's temporary
directory is.
.. _sec-external_packages:
External Packages
----------------------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file found
in a Spack installation's ``etc/spack/`` or a user's ``~/.spack/``
directory. Here's an example of an external configuration:
.. code-block:: yaml
config:
install_tree: $spack/opt/spack
module_roots:
lmod: $spack/share/spack/lmod
build_stage:
- $tempdir
- /nfs/tmp2/$user
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel
Each Spack configuration file is nested under a top-level section
corresponding to its name. So, ``config.yaml`` starts with ``config:``,
and ``mirrors.yaml`` starts with ``mirrors:``, etc.
This example lists three installations of OpenMPI, one built with gcc,
one built with gcc and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. ``packages.yaml`` can also be used to specify modules.
.. _configuration-scopes:
Each ``packages.yaml`` begins with a ``packages:`` token, followed
by a list of package names. To specify externals, add a ``paths`` or ``modules``
token under the package name, which lists externals in a
``spec: /path`` or ``spec: module-name`` format. Each spec should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
-------------------------
Configuration Scopes
-------------------------
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may never be built.
Spack pulls configuration data from files in several directories. There
are three configuration scopes. From lowest to highest:
The packages configuration can tell Spack to use an external location
for certain package versions, but it does not restrict Spack to using
external packages. In the above example, if OpenMPI 1.8.4 becomes
available, Spack may choose to start building and linking with that version
rather than continue using the pre-installed OpenMPI versions.
1. **defaults**: Stored in ``$(prefix)/etc/spack/defaults/``. These are
the "factory" settings. Users should generally not modify the settings
here, but should override them in other configuration scopes. The
defaults here will change from version to version of Spack.
2. **site**: Stored in ``$(prefix)/etc/spack/``. Settings here affect
only *this instance* of Spack, and they override defaults. The site
scope can be used for per-project settings (one spack instance per
project) or for site-wide settings on a multi-user machine (e.g., for
a common spack instance).
3. **user**: Stored in the home directory: ``~/.spack/``. These settings
affect all instances of Spack and take the highest precedence.
Each configuration directory may contain several configuration files,
such as ``config.yaml``, ``compilers.yaml``, or ``mirrors.yaml``. When
configurations conflict, settings from higher-precedence scopes override
lower-precedence settings.
Commands that modify scopes (e.g., ``spack compilers``, ``spack repo``,
etc.) take a ``--scope=<name>`` parameter that you can use to control
which scope is modified. By default they modify the highest-precedence
scope.
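For example, to add a package repository for every user of this Spack
instance rather than just yourself (a sketch; the path is illustrative):

.. code-block:: console

   $ spack repo add --scope=site /path/to/my/repo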
.. _platform-scopes:
-------------------------
Platform-specific scopes
-------------------------
For each scope above, there can *also* be platform-specific settings.
For example, on Blue Gene/Q machines, Spack needs to know the location of
cross-compilers for the compute nodes. This configuration is in
``etc/spack/defaults/bgq/compilers.yaml``. It will take precedence over
settings in the ``defaults`` scope, but can still be overridden by
settings in ``site``, ``site/bgq``, ``user``, or ``user/bgq``. So, the
full scope precedence is:
1. ``defaults``
2. ``defaults/<platform>``
3. ``site``
4. ``site/<platform>``
5. ``user``
6. ``user/<platform>``
You can get the name to use for ``<platform>`` by running ``spack arch
--platform``.
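For example (a sketch; the output depends on your machine):

.. code-block:: console

   $ spack arch --platform
   linux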
-------------------------
Scope precedence
-------------------------
When spack queries for configuration parameters, it searches in
higher-precedence scopes first. So, settings in a higher-precedence file
can override those with the same key in a lower-precedence one. For
list-valued settings, Spack *prepends* higher-precedence settings to
lower-precedence settings. Completely ignoring higher-level configuration
options is supported with the ``::`` notation for keys (see
:ref:`config-overrides` below).
^^^^^^^^^^^^^^^^^^^^^^^^
Simple keys
^^^^^^^^^^^^^^^^^^^^^^^^
Let's look at an example of overriding a single key in a Spack file. If
your configurations look like this:
**defaults** scope:
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml
config:
  install_tree: $spack/opt/spack
  module_roots:
    lmod: $spack/share/spack/lmod
  build_stage:
    - $tempdir
    - /nfs/tmp2/$user
packages:
  openmpi:
    paths:
      openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3
      openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug
      openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel
    buildable: False
**site** scope:
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI, and it will instead always rely on a pre-built
OpenMPI. Similar to ``paths``, ``buildable`` is specified as a property under
a package name.
.. code-block:: yaml
If an external module is specified as not buildable, then Spack will load the
external module into the build environment, where it can be used for linking.
config:
  install_tree: /some/other/directory
The ``buildable`` flag does not need to be paired with external packages.
It can also be used on its own to forbid packages that may be
buggy or otherwise undesirable.
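For example, this minimal sketch (using ``openssl`` purely for
illustration) forbids Spack from ever building its own OpenSSL, without
pointing at any external installation:

.. code-block:: yaml

   packages:
     openssl:
       buildable: False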
Spack will only override ``install_tree`` in the ``config`` section, and
will take the site preferences for other settings. You can see the
final, combined configuration with the ``spack config get <configtype>``
command:
.. code-block:: console
:emphasize-lines: 3
Concretization Preferences
--------------------------------
$ spack config get config
config:
  install_tree: /some/other/directory
  module_roots:
    lmod: $spack/share/spack/lmod
  build_stage:
    - $tempdir
    - /nfs/tmp2/$user
$ _
Spack can be configured to prefer certain compilers, package
versions, dependencies, and variants during concretization.
The preferred configuration can be controlled via the
``~/.spack/packages.yaml`` file for user configurations, or the
``etc/spack/packages.yaml`` site configuration.
.. _config-overrides:
^^^^^^^^^^^^^^^^^^^^^^^^^^
Overriding entire sections
^^^^^^^^^^^^^^^^^^^^^^^^^^
Here's an example packages.yaml file that sets preferred packages:
Above, the site ``config.yaml`` only overrides specific settings in the
default ``config.yaml``. Sometimes, it is useful to *completely*
override lower-precedence settings. To do this, you can use *two* colons
at the end of a key in a configuration file. For example, if the
**site** ``config.yaml`` above looks like this:
.. code-block:: sh
.. code-block:: yaml
:emphasize-lines: 1
packages:
  opencv:
    compiler: [gcc@4.9]
    variants: +debug
  gperftools:
    version: [2.2, 2.4, 2.3]
  all:
    compiler: [gcc@4.4.7, gcc@4.6:, intel, clang, pgi]
    providers:
      mpi: [mvapich, mpich, openmpi]
config::
  install_tree: /some/other/directory
Spack will ignore all lower-precedence configuration under the
``config::`` section:
At a high level, this example is specifying how packages should be
concretized. The opencv package should prefer using gcc 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich for
its MPI and gcc 4.4.7 (except for opencv, which overrides this by preferring gcc 4.9).
These options are used to fill in implicit defaults. Any of them can be overridden
on the command line if explicitly requested.
.. code-block:: console
Each packages.yaml file begins with the string ``packages:`` and
package names are specified on the next level. The special string ``all``
applies settings to each package. Underneath each package name are
one or more components: ``compiler``, ``variants``, ``version``,
or ``providers``. Each component has an ordered list of spec
``constraints``, with earlier entries in the list being preferred over
later entries.
$ spack config get config
config:
  install_tree: /some/other/directory
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4,
since 2.4 is preferred over 2.3 in the ``version`` list.
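You can check this without installing anything by asking Spack how the
spec would concretize (a sketch of the behavior described above):

.. code-block:: console

   $ spack spec gperftools@2.3:   # resolves to 2.4, per the preferences above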
^^^^^^^^^^^^^^^^^^^^^^
List-valued settings
^^^^^^^^^^^^^^^^^^^^^^
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
Let's revisit the ``config.yaml`` example one more time. The
``build_stage`` setting's value is an ordered list of directories:
The syntax for the ``providers`` section differs slightly from other
concretization rules. A provider lists a virtual dependency that packages may
``depends_on`` (e.g., ``mpi``) and a list of rules for fulfilling that
dependency.
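The same mechanism works for any virtual dependency, not just ``mpi``.
For instance, a sketch that also expresses a BLAS preference (assuming
these providers are available in your repositories):

.. code-block:: yaml

   packages:
     all:
       providers:
         mpi: [mvapich, mpich, openmpi]
         blas: [openblas, netlib-lapack]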
**defaults**
.. code-block:: yaml
Profiling
------------------
build_stage:
  - $tempdir
  - /nfs/tmp2/$user
Spack has some limited built-in support for profiling, and can report
statistics using standard Python timing tools. To use this feature,
supply ``-p`` to Spack on the command line, before any subcommands.
Suppose the user configuration adds its *own* list of ``build_stage``
paths:
.. _spack-p:
**user**
``spack -p``
~~~~~~~~~~~~~~~~~
.. code-block:: yaml
``spack -p`` output looks like this:
build_stage:
  - /lustre-scratch/$user
  - ~/mystage
.. code-block:: sh
Spack will look first at the paths in the user's ``~/.spack/config.yaml``,
and then fall back to the paths in the lower-precedence scopes. The list in
the higher-precedence scope is *prepended* to the defaults. ``spack config
get config`` shows the result:
$ spack -p graph dyninst
o dyninst
|\
| |\
| o | libdwarf
|/ /
o | libelf
/
o boost
.. code-block:: console
:emphasize-lines: 7-10
307670 function calls (305943 primitive calls) in 0.127 seconds
$ spack config get config
config:
  install_tree: /some/other/directory
  module_roots:
    lmod: $spack/share/spack/lmod
  build_stage:
    - /lustre-scratch/$user
    - ~/mystage
    - $tempdir
    - /nfs/tmp2/$user
$ _
Ordered by: internal time
As in :ref:`config-overrides`, the higher-precedence scope can
*completely* override the lower-precedence scope using ``::``. So if the
user config looked like this:
ncalls tottime percall cumtime percall filename:lineno(function)
853 0.021 0.000 0.066 0.000 inspect.py:472(getmodule)
51197 0.011 0.000 0.018 0.000 inspect.py:51(ismodule)
73961 0.010 0.000 0.010 0.000 {isinstance}
1762 0.006 0.000 0.053 0.000 inspect.py:440(getsourcefile)
32075 0.006 0.000 0.006 0.000 {hasattr}
1760 0.004 0.000 0.004 0.000 {posix.stat}
2240 0.004 0.000 0.004 0.000 {posix.lstat}
2602 0.004 0.000 0.011 0.000 inspect.py:398(getfile)
771 0.004 0.000 0.077 0.000 inspect.py:518(findsource)
2656 0.004 0.000 0.004 0.000 {method 'match' of '_sre.SRE_Pattern' objects}
30772 0.003 0.000 0.003 0.000 {method 'get' of 'dict' objects}
...
**user**
.. code-block:: yaml
:emphasize-lines: 1
build_stage::
  - /lustre-scratch/$user
  - ~/mystage
The merged configuration would look like this:
.. code-block:: console
:emphasize-lines: 7-8
$ spack config get config
config:
  install_tree: /some/other/directory
  module_roots:
    lmod: $spack/share/spack/lmod
  build_stage:
    - /lustre-scratch/$user
    - ~/mystage
$ _
The bottom of the output shows the top most time consuming functions,
slowest on top. The profiling support is from Python's built-in tool,
`cProfile
<https://docs.python.org/2/library/profile.html#module-cProfile>`_.

View File

@@ -1,522 +0,0 @@
.. _contribution-guide:
==================
Contribution Guide
==================
This guide is intended for developers or administrators who want to
contribute a new package, feature, or bugfix to Spack.
It assumes that you have at least some familiarity with the Git version control system and GitHub.
The guide will show a few examples of contributing workflows and discuss
the granularity of pull-requests (PRs). It will also discuss the tests your
PR must pass in order to be accepted into Spack.
First, what is a PR? Quoting `Bitbucket's tutorials <https://www.atlassian.com/git/tutorials/making-a-pull-request/>`_:
Pull requests are a mechanism for a developer to notify team members that
they have **completed a feature**. The pull request is more than just a
notification—it's a dedicated forum for discussing the proposed feature.
The key phrase is **completed feature**. The changes one proposes in a PR should
correspond to one feature/bugfix/extension/etc. One can create PRs with
changes relevant to different ideas; however, reviewing such PRs becomes tedious
and error-prone. If possible, try to follow the **one-PR-one-package/feature** rule.
Spack uses a rough approximation of the `Git Flow <http://nvie.com/posts/a-successful-git-branching-model/>`_
branching model. The develop branch contains the latest contributions, and
master is always tagged and points to the latest stable release. Therefore, when
you send your request, make ``develop`` the destination branch on the
`Spack repository <https://github.com/LLNL/spack>`_.
----------------------
Continuous Integration
----------------------
Spack uses `Travis CI <https://travis-ci.org/LLNL/spack>`_ for Continuous Integration
testing. This means that every time you submit a pull request, a series of tests will
be run to make sure you didn't accidentally introduce any bugs into Spack. Your PR
will not be accepted until it passes all of these tests. While you can certainly wait
for the results of these tests after submitting a PR, we recommend that you run them
locally to speed up the review process.
If you take a look in ``$SPACK_ROOT/.travis.yml``, you'll notice that we test
against Python 2.6 and 2.7. We currently perform three types of tests:
^^^^^^^^^^
Unit Tests
^^^^^^^^^^
Unit tests ensure that core Spack features like fetching or spec resolution are
working as expected. If your PR only adds new packages or modifies existing ones,
there's very little chance that your changes could cause the unit tests to fail.
However, if you make changes to Spack's core libraries, you should run the unit
tests to make sure you didn't break anything.
Since they test things like fetching from VCS repos, the unit tests require
`git <https://git-scm.com/>`_, `mercurial <https://www.mercurial-scm.org/>`_,
and `subversion <https://subversion.apache.org/>`_ to run. Make sure these are
installed on your system and can be found in your ``PATH``. All of these can be
installed with Spack or with your system package manager.
To run *all* of the unit tests, use:
.. code-block:: console
$ spack test
These tests may take several minutes to complete. If you know you are only
modifying a single Spack feature, you can run a single unit test at a time:
.. code-block:: console
$ spack test architecture
This allows you to develop iteratively: make a change, test that change, make
another change, test that change, etc. To get a list of all available unit
tests, run:
.. command-output:: spack test --collect-only
Unit tests are crucial to making sure bugs aren't introduced into Spack. If you
are modifying core Spack libraries or adding new functionality, please consider
adding new unit tests or strengthening existing tests.
.. note::
There is also a ``run-unit-tests`` script in ``share/spack/qa`` that
runs the unit tests. Afterwards, it reports back to Coverage with the
percentage of Spack that is covered by unit tests. This script is
designed for Travis CI. If you want to run the unit tests yourself, we
suggest you use ``spack test``.
^^^^^^^^^^^^
Flake8 Tests
^^^^^^^^^^^^
Spack uses `Flake8 <http://flake8.pycqa.org/en/latest/>`_ to test for
`PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_ conformance. PEP 8 is
a series of style guides for Python that provide suggestions for everything
from variable naming to indentation. In order to limit the number of PRs that
were mostly style changes, we decided to enforce PEP 8 conformance. Your PR
needs to comply with PEP 8 in order to be accepted.
Testing for PEP 8 compliance is easy. Simply run the ``spack flake8``
command:
.. code-block:: console
$ spack flake8
``spack flake8`` has a couple advantages over running ``flake8`` by hand:
#. It only tests files that you have modified since branching off of
``develop``.
#. It works regardless of what directory you are in.
#. It automatically adds approved exemptions from the ``flake8``
checks. For example, URLs are often longer than 80 characters, so we
exempt them from line length checks. We also exempt lines that start
with "homepage", "url", "version", "variant", "depends_on", and
"extends" in ``package.py`` files.
More approved flake8 exemptions can be found
`here <https://github.com/LLNL/spack/blob/develop/.flake8>`_.
If all is well, you'll see something like this:
.. code-block:: console
$ run-flake8-tests
Dependencies found.
=======================================================
flake8: running flake8 code checks on spack.
Modified files:
var/spack/repos/builtin/packages/hdf5/package.py
var/spack/repos/builtin/packages/hdf/package.py
var/spack/repos/builtin/packages/netcdf/package.py
=======================================================
Flake8 checks were clean.
However, if you aren't compliant with PEP 8, flake8 will complain:
.. code-block:: console
var/spack/repos/builtin/packages/netcdf/package.py:26: [F401] 'os' imported but unused
var/spack/repos/builtin/packages/netcdf/package.py:61: [E303] too many blank lines (2)
var/spack/repos/builtin/packages/netcdf/package.py:106: [E501] line too long (92 > 79 characters)
Flake8 found errors.
Most of the error messages are straightforward, but if you don't understand what
they mean, just ask questions about them when you submit your PR. The line numbers
will change if you add or delete lines, so simply run ``spack flake8`` again
to update them.
.. tip::
Try fixing flake8 errors in reverse order. This eliminates the need for
multiple runs of ``flake8`` just to re-compute line numbers and makes it
much easier to fix errors directly off of the Travis output.
.. warning::
Flake8 requires setuptools in order to run. If you installed ``py-flake8``
with Spack, make sure to add ``py-setuptools`` to your ``PYTHONPATH``.
Otherwise, you will get an error message like:
.. code-block:: console
Traceback (most recent call last):
File: "/usr/bin/flake8", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
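One way to arrange this is with ``spack location`` (a sketch; the
``python2.7`` component of the path depends on your Python version):

.. code-block:: console

   $ export PYTHONPATH="$(spack location --install-dir py-setuptools)/lib/python2.7/site-packages:$PYTHONPATH"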
^^^^^^^^^^^^^^^^^^^
Documentation Tests
^^^^^^^^^^^^^^^^^^^
Spack uses `Sphinx <http://www.sphinx-doc.org/en/stable/>`_ to build its
documentation. In order to prevent things like broken links and missing imports,
we added documentation tests that build the documentation and fail if there
are any warning or error messages.
Building the documentation requires several dependencies, all of which can be
installed with Spack:
* sphinx
* graphviz
* git
* mercurial
* subversion
.. warning::
Sphinx has `several required dependencies <https://github.com/LLNL/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinx/package.py>`_.
If you installed ``py-sphinx`` with Spack, make sure to add all of these
dependencies to your ``PYTHONPATH``. The easiest way to do this is to run
``spack activate py-sphinx`` so that all of the dependencies are symlinked
to a central location. If you see an error message like:
.. code-block:: console
Traceback (most recent call last):
File: "/usr/bin/flake8", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
that means Sphinx couldn't find setuptools in your ``PYTHONPATH``.
Once all of the dependencies are installed, you can try building the documentation:
.. code-block:: console
$ cd "$SPACK_ROOT/lib/spack/docs"
$ make clean
$ make
If you see any warning or error messages, you will have to correct those before
your PR is accepted.
.. note::
There is also a ``run-doc-tests`` script in the Quality Assurance directory.
The only difference between running this script and running ``make`` by hand
is that the script will exit immediately if it encounters an error or warning.
This is necessary for Travis CI. If you made a lot of documentation changes, it
is much quicker to run ``make`` by hand so that you can see all of the warnings
at once.
If you are editing the documentation, you should obviously be running the
documentation tests. But even if you are simply adding a new package, your
changes could cause the documentation tests to fail:
.. code-block:: console
package_list.rst:8745: WARNING: Block quote ends without a blank line; unexpected unindent.
At first, this error message will mean nothing to you, since you didn't edit
that file. That is, until you look at line 8745 of the file in question:
.. code-block:: rst
Description:
NetCDF is a set of software libraries and self-describing, machine-
independent data formats that support the creation, access, and sharing
of array-oriented scientific data.
Our documentation includes :ref:`a list of all Spack packages <package-list>`.
If you add a new package, its docstring is added to this page. The problem in
this case was that the docstring looked like:
.. code-block:: python
class Netcdf(Package):
"""
NetCDF is a set of software libraries and self-describing,
machine-independent data formats that support the creation,
access, and sharing of array-oriented scientific data.
"""
Docstrings cannot start with a newline character, or else Sphinx will complain.
Instead, they should look like:
.. code-block:: python
class Netcdf(Package):
"""NetCDF is a set of software libraries and self-describing,
machine-independent data formats that support the creation,
access, and sharing of array-oriented scientific data."""
Documentation changes can result in much more obfuscated warning messages.
If you don't understand what they mean, feel free to ask when you submit
your PR.
-------------
Git Workflows
-------------
Spack is still in the beta stages of development. Most of our users run off of
the develop branch, and fixes and new features are constantly being merged. So
how do you keep up-to-date with upstream while maintaining your own local
differences and contributing PRs to Spack?
^^^^^^^^^
Branching
^^^^^^^^^
The easiest way to contribute a pull request is to make all of your changes on
new branches. Make sure your ``develop`` is up-to-date and create a new branch
off of it:
.. code-block:: console
$ git checkout develop
$ git pull upstream develop
$ git branch <descriptive_branch_name>
$ git checkout <descriptive_branch_name>
Here we assume that the local ``develop`` branch tracks the upstream develop
branch of Spack. This is not a requirement and you could also do the same with
remote branches. But for some it is more convenient to have a local branch that
tracks upstream.
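One way to set up such a tracking branch, assuming a remote named
``upstream`` that points at the main Spack repository:

.. code-block:: console

   $ git remote add upstream https://github.com/LLNL/spack.git
   $ git fetch upstream
   $ git branch --set-upstream-to=upstream/develop develop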
Normally we prefer that commits pertaining to a package ``<package-name>`` have
a message of the form ``<package-name>: descriptive message``. It is important to add a
descriptive message so that others, who might be looking at your changes later
(in a year or maybe two), would understand the rationale behind them.
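For example (package name and message purely illustrative):

.. code-block:: console

   $ git commit --message "netcdf: add version 4.4.1"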
Now, you can make your changes while keeping the ``develop`` branch pure.
Edit a few files and commit them by running:
.. code-block:: console
$ git add <files_to_be_part_of_the_commit>
$ git commit --message <descriptive_message_of_this_particular_commit>
Next, push it to your remote fork and create a PR:
.. code-block:: console
$ git push origin <descriptive_branch_name> --set-upstream
GitHub provides a `tutorial <https://help.github.com/articles/about-pull-requests/>`_
on how to file a pull request. When you send the request, make ``develop`` the
destination branch.
If you need this change immediately and don't have time to wait for your PR to
be merged, you can always work on this branch. But if you have multiple PRs,
another option is to maintain a Frankenstein branch that combines all of your
other branches:
.. code-block:: console
$ git checkout develop
$ git branch <your_modified_develop_branch>
$ git checkout <your_modified_develop_branch>
$ git merge <descriptive_branch_name>
This can be done with each new PR you submit. Just make sure to keep this local
branch up-to-date with upstream ``develop`` too.
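Folding a refreshed ``develop`` into the combined branch is just another
merge (a sketch):

.. code-block:: console

   $ git checkout <your_modified_develop_branch>
   $ git merge develop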
^^^^^^^^^^^^^^
Cherry-Picking
^^^^^^^^^^^^^^
What if you made some changes to your local modified develop branch and already
committed them, but later decided to contribute them to Spack? You can use
cherry-picking to create a new branch with only these commits.
First, check out your local modified develop branch:
.. code-block:: console
$ git checkout <your_modified_develop_branch>
Now, get the hashes of the commits you want from the output of:
.. code-block:: console
$ git log
Next, create a new branch off of upstream ``develop`` and copy the commits
that you want in your PR:
.. code-block:: console
$ git checkout develop
$ git pull upstream develop
$ git branch <descriptive_branch_name>
$ git checkout <descriptive_branch_name>
$ git cherry-pick <hash>
$ git push origin <descriptive_branch_name> --set-upstream
Now you can create a PR from the web-interface of GitHub. The net result is as
follows:
#. You patched your local version of Spack and can use it further.
#. You "cherry-picked" these changes in a stand-alone branch and submitted it
as a PR upstream.
Should you have several commits to contribute, you could follow the same
procedure by getting hashes of all of them and cherry-picking to the PR branch.
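Since ``git cherry-pick`` accepts several commits at once, the copying
step can be done in a single command:

.. code-block:: console

   $ git cherry-pick <hash1> <hash2> <hash3>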
.. note::
It is important that whenever you change something that might be of
importance upstream, create a pull request as soon as possible. Do not wait
for weeks/months to do this, because:
#. you might forget why you modified certain files
#. it could get difficult to isolate this change into a stand-alone clean PR.
^^^^^^^^
Rebasing
^^^^^^^^
Other developers are constantly making contributions to Spack, possibly on the
same files that your PR changed. If their PR is merged before yours, it can
create a merge conflict. This means that your PR can no longer be automatically
merged without a chance of breaking your changes. In this case, you will be
asked to rebase on top of the latest upstream ``develop``.
First, make sure your develop branch is up-to-date:
.. code-block:: console
$ git checkout develop
$ git pull upstream develop
Now, we need to switch to the branch you submitted for your PR and rebase it
on top of develop:
.. code-block:: console
$ git checkout <descriptive_branch_name>
$ git rebase develop
Git will likely ask you to resolve conflicts. Edit the file that it says can't
be merged automatically and resolve the conflict. Then, run:
.. code-block:: console
$ git add <file_that_could_not_be_merged>
$ git rebase --continue
You may have to repeat this process multiple times until all conflicts are resolved.
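If the rebase goes badly and you want to start over, you can always
abort it and return to the state before the rebase began:

.. code-block:: console

   $ git rebase --abort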
Once this is done, simply force push your rebased branch to your remote fork:
.. code-block:: console
$ git push --force origin <descriptive_branch_name>
^^^^^^^^^^^^^^^^^^^^^^^^^
Rebasing with cherry-pick
^^^^^^^^^^^^^^^^^^^^^^^^^
You can also perform a rebase using ``cherry-pick``. First, create a temporary
backup branch:
.. code-block:: console
$ git checkout <descriptive_branch_name>
$ git branch tmp
If anything goes wrong, you can always go back to your ``tmp`` branch.
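For example, if the cherry-picking below goes wrong, you can restore the
branch from the backup:

.. code-block:: console

   $ git checkout <descriptive_branch_name>
   $ git reset --hard tmp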
Now, look at the logs and save the hashes of any commits you would like to keep:
.. code-block:: console
$ git log
Next, go back to the original branch and reset it to ``develop``.
Before doing so, make sure that your local ``develop`` branch is up-to-date
with upstream:
.. code-block:: console
$ git checkout develop
$ git pull upstream develop
$ git checkout <descriptive_branch_name>
$ git reset --hard develop
Now you can cherry-pick relevant commits:
.. code-block:: console
$ git cherry-pick <hash1>
$ git cherry-pick <hash2>
Push the modified branch to your fork:
.. code-block:: console
$ git push --force origin <descriptive_branch_name>
If everything looks good, delete the backup branch:
.. code-block:: console
$ git branch --delete --force tmp
^^^^^^^^^^^^^^^^^^
Re-writing History
^^^^^^^^^^^^^^^^^^
Sometimes you may end up on a branch that has diverged so much from develop
that it cannot easily be rebased. If the current commit history is more
experimental in nature and only the net result is important, you may rewrite
the history.
First, merge upstream ``develop`` and reset your branch to it. On the branch
in question, run:
.. code-block:: console
$ git merge develop
$ git reset develop
At this point your branch will point to the same commit as develop, and
thereby the two are indistinguishable. However, all the files that were
previously modified will stay as such. In other words, you do not lose the
changes you made. Changes can be reviewed by looking at diffs:
.. code-block:: console
$ git status
$ git diff
The next step is to rewrite the history by adding files and creating commits:
.. code-block:: console
$ git add <files_to_be_part_of_commit>
$ git commit --message <descriptive_message>
After all changed files are committed, you can push the branch to your fork
and create a PR:
.. code-block:: console
$ git push origin <descriptive_branch_name> --set-upstream

View File

@@ -1,8 +1,7 @@
.. _developer_guide:
===============
Developer Guide
===============
=====================
This guide is intended for people who want to work on Spack itself.
If you just want to develop packages, see the :ref:`packaging-guide`.
@@ -12,18 +11,17 @@ It is assumed that you've read the :ref:`basic-usage` and
concepts discussed there. If you're not, we recommend reading those
first.
--------
Overview
--------
-----------------------
Spack is designed with three separate roles in mind:
#. **Users**, who need to install software *without* knowing all the
details about how it is built.
#. **Packagers** who know how a particular software package is
built and encode this information in package files.
#. **Developers** who work on Spack, add new features, and try to
make the jobs of packagers and users easier.
#. **Users**, who need to install software *without* knowing all the
details about how it is built.
#. **Packagers** who know how a particular software package is
built and encode this information in package files.
#. **Developers** who work on Spack, add new features, and try to
make the jobs of packagers and users easier.
Users could be end users installing software in their home directory,
or administrators installing software to a shared directory on a
@@ -43,9 +41,9 @@ specification.
This gets us to the two key concepts in Spack's software design:
#. **Specs**: expressions for describing builds of software, and
#. **Packages**: Python modules that build software according to a
spec.
#. **Specs**: expressions for describing builds of software, and
#. **Packages**: Python modules that build software according to a
spec.
A package is a template for building particular software, and a spec
as a descriptor for one or more instances of that template. Users
@@ -65,75 +63,75 @@ building the software off to the package object. The rest of this
document describes all the pieces that come together to make that
happen.
-------------------
Directory Structure
-------------------
-------------------------
So that you can familiarize yourself with the project, we'll start
with a high level view of Spack's directory structure:
with a high level view of Spack's directory structure::
.. code-block:: none
spack/ <- installation root
bin/
spack <- main spack executable
spack/ <- installation root
bin/
spack <- main spack executable
etc/
spack/ <- Spack config files.
Can be overridden by files in ~/.spack.
etc/
spack/ <- Spack config files.
Can be overridden by files in ~/.spack.
var/
spack/ <- build & stage directories
repos/ <- contains package repositories
builtin/ <- pkg repository that comes with Spack
repo.yaml <- descriptor for the builtin repository
packages/ <- directories under here contain packages
cache/ <- saves resources downloaded during installs
var/
spack/ <- build & stage directories
repos/ <- contains package repositories
builtin/ <- pkg repository that comes with Spack
repo.yaml <- descriptor for the builtin repository
packages/ <- directories under here contain packages
cache/ <- saves resources downloaded during installs
opt/
spack/ <- packages are installed here
opt/
spack/ <- packages are installed here
lib/
spack/
docs/ <- source for this documentation
env/ <- compiler wrappers for build environment
lib/
spack/
docs/ <- source for this documentation
env/ <- compiler wrappers for build environment
external/ <- external libs included in Spack distro
llnl/ <- some general-use libraries
external/ <- external libs included in Spack distro
llnl/ <- some general-use libraries
spack/ <- spack module; contains Python code
cmd/ <- each file in here is a spack subcommand
compilers/ <- compiler description files
test/ <- unit test modules
util/ <- common code
spack/ <- spack module; contains Python code
cmd/ <- each file in here is a spack subcommand
compilers/ <- compiler description files
test/ <- unit test modules
util/ <- common code
Spack is designed so that it could live within a `standard UNIX
directory hierarchy <http://linux.die.net/man/7/hier>`_, so ``lib``,
``var``, and ``opt`` all contain a ``spack`` subdirectory in case
Spack is installed alongside other software. Most of the interesting
parts of Spack live in ``lib/spack``.
parts of Spack live in ``lib/spack``. Files under ``var`` are created
as needed, so there is no ``var`` directory when you initially clone
Spack from the repository.
Spack has *one* directory layout and there is no install process.
Most Python programs don't look like this (they use distutils, ``setup.py``,
etc.) but we wanted to make Spack *very* easy to use. The simple layout
spares users from the need to install Spack into a Python environment.
Many users don't have write access to a Python installation, and installing
an entire new instance of Python to bootstrap Spack would be very complicated.
version and the source code. Most Python programs don't look like
this (they use distutils, ``setup.py``, etc.) but we wanted to make
Spack *very* easy to use. The simple layout spares users from the
need to install Spack into a Python environment. Many users don't
have write access to a Python installation, and installing an entire
new instance of Python to bootstrap Spack would be very complicated.
Users should not have to install a big, complicated package to
use the thing that's supposed to spare them from the details of big,
complicated packages. The end result is that Spack works out of the
box: clone it and add ``bin`` to your PATH and you're ready to go.
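That is, the entire setup looks something like this (a sketch, assuming
a ``bash``-like shell):

.. code-block:: console

   $ git clone https://github.com/llnl/spack.git
   $ export PATH="$PWD/spack/bin:$PATH"
   $ spack install libelf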
--------------
Code Structure
--------------
-------------------------
This section gives an overview of the various Python modules in Spack,
grouped by functionality.
^^^^^^^^^^^^^^^^^^^^^^^
Package-related modules
^^^^^^^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`spack.package`
Contains the :class:`Package <spack.package.Package>` class, which
@@ -160,9 +158,9 @@ Package-related modules
decorator, which allows :ref:`multimethods <multimethods>` in
packages.
^^^^^^^^^^^^^^^^^^^^
Spec-related modules
^^^^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`spack.spec`
Contains :class:`Spec <spack.spec.Spec>` and :class:`SpecParser
@@ -210,9 +208,9 @@ Spec-related modules
Not yet implemented. Should eventually have architecture
descriptions for cross-compiling.
^^^^^^^^^^^^^^^^^
Build environment
^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`spack.stage`
Handles creating temporary directories for builds.
@@ -226,17 +224,15 @@ Build environment
Create more implementations of this to change the hierarchy and
naming scheme in ``$spack_prefix/opt``
^^^^^^^^^^^^^^^^^
Spack Subcommands
^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`spack.cmd`
Each module in this package implements a Spack subcommand. See
:ref:`writing commands <writing-commands>` for details.
^^^^^^^^^^
Unit tests
^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`spack.test`
Implements Spack's test suite. Add a module and put its name in
@@ -246,100 +242,78 @@ Unit tests
This is a fake package hierarchy used to mock up packages for
Spack's test suite.
^^^^^^^^^^^^^
Other Modules
^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`spack.globals`
Includes global settings for Spack, e.g. the default policy classes for
things like :ref:`temporary space <temp-space>` and
:ref:`concretization <concretization-policies>`.
:mod:`spack.tty`
Basic output functions for all of the messages Spack writes to the
terminal.
:mod:`spack.color`
Implements a color formatting syntax used by ``spack.tty``.
:mod:`spack.url`
URL parsing, for deducing names and versions of packages from
tarball URLs.
:mod:`spack.util`
In this package are a number of utility modules for the rest of
Spack.
:mod:`spack.error`
:class:`SpackError <spack.error.SpackError>`, the base class for
Spack's exception hierarchy.
:mod:`llnl.util.tty`
Basic output functions for all of the messages Spack writes to the
terminal.
:mod:`llnl.util.tty.color`
Implements a color formatting syntax used by ``spack.tty``.
:mod:`llnl.util`
In this package are a number of utility modules for the rest of
Spack.
------------
Spec objects
------------
-------------------------
---------------
Package objects
---------------
-------------------------
Most spack commands
look something like this:
#. Parse an abstract spec (or specs) from the command line,
#. *Normalize* the spec based on information in package files,
#. *Concretize* the spec according to some customizable policies,
#. Instantiate a package based on the spec, and
#. Call methods (e.g., ``install()``) on the package object.
Most spack commands look something like this:
#. Parse an abstract spec (or specs) from the command line,
#. *Normalize* the spec based on information in package files,
#. *Concretize* the spec according to some customizable policies,
#. Instantiate a package based on the spec, and
#. Call methods (e.g., ``install()``) on the package object.
The information in Package files is used at all stages in this
process.
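A rough sketch of that flow in terms of Spack's internal Python API
(illustrative only; these names are not a stable interface):

.. code-block:: python

   from spack.spec import Spec

   spec = Spec('mpileaks@1.1.2 %gcc')  # 1. parse an abstract spec
   spec.normalize()                    # 2. normalize against package files
   spec.concretize()                   # 3. fill in unspecified details
   pkg = spec.package                  # 4. instantiate a package for the spec
   pkg.do_install()                    # 5. call methods on the package object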
Conceptually, packages are overloaded. They contain:
-------------
Conceptually, packages are overloaded. They contain
Stage objects
-------------
-------------------------
.. _writing-commands:
----------------
Writing commands
----------------
-------------------------
----------
Unit tests
----------
-------------------------
------------
Unit testing
------------
-------------------------
------------------
Developer commands
------------------
-------------------------
^^^^^^^^^^^^^
``spack doc``
^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^
``spack test``
^^^^^^^^^^^^^^
---------
Profiling
---------
Spack has some limited built-in support for profiling, and can report
statistics using standard Python timing tools. To use this feature,
supply ``--profile`` to Spack on the command line, before any subcommands.
.. _spack-p:
^^^^^^^^^^^^^^^^^^^
``spack --profile``
^^^^^^^^^^^^^^^^^^^
``spack --profile`` output looks like this:
.. command-output:: spack --profile graph dyninst
:ellipsis: 25
The bottom of the output shows the top most time consuming functions,
slowest on top. The profiling support is from Python's built-in tool,
`cProfile
<https://docs.python.org/2/library/profile.html#module-cProfile>`_.
~~~~~~~~~~~~~~~~~

View File

@@ -1,3 +1,27 @@
##############################################################################
# Copyright (c) 2013, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License (as published by
# the Free Software Foundation) version 2.1 dated February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
# -*- coding: utf-8 -*-
"""
sphinxcontrib

View File

@@ -1,3 +1,27 @@
##############################################################################
# Copyright (c) 2013, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License (as published by
# the Free Software Foundation) version 2.1 dated February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
# -*- coding: utf-8 -*-
# Copyright (c) 2010, 2011, 2012, Sebastian Wiesner <lunaryorn@gmail.com>
# All rights reserved.

View File

@@ -1,32 +1,29 @@
================
Feature Overview
================
Feature overview
==================
This is a high-level overview of features that make Spack different
from other `package managers
<http://en.wikipedia.org/wiki/Package_management_system>`_ and `port
systems <http://en.wikipedia.org/wiki/Ports_collection>`_.
---------------------------
Simple package installation
---------------------------
----------------------------
Installing the default version of a package is simple. This will install
the latest version of the ``mpileaks`` package and all of its dependencies:
.. code-block:: console
.. code-block:: sh
$ spack install mpileaks
--------------------------------
Custom versions & configurations
--------------------------------
-------------------------------------------
Spack allows installation to be customized. Users can specify the
version, build compiler, compile-time options, and cross-compile
platform, all on the command line.
.. code-block:: console
.. code-block:: sh
# Install a particular version by appending @
$ spack install mpileaks@1.1.2
@@ -41,7 +38,7 @@ platform, all on the command line.
$ spack install mpileaks@1.1.2 %gcc@4.7.3 +debug
# Add compiler flags using the conventional names
$ spack install mpileaks@1.1.2 %gcc@4.7.3 cppflags="-O3 -floop-block"
$ spack install mpileaks@1.1.2 %gcc@4.7.3 cppflags=\"-O3 -floop-block\"
# Cross-compile for a different architecture with arch=
$ spack install mpileaks@1.1.2 arch=bgqos_0
@@ -50,39 +47,37 @@ Users can specify as many or as few options as they care about. Spack
will fill in the unspecified values with sensible defaults. The two listed
syntaxes for variants are identical when the value is boolean.
----------------------
Customize dependencies
----------------------
-------------------------------------
Spack allows *dependencies* of a particular installation to be
customized extensively. Suppose that ``mpileaks`` depends indirectly
on ``libelf`` and ``libdwarf``. Using ``^``, users can add custom
configurations for the dependencies:
.. code-block:: console
.. code-block:: sh
# Install mpileaks and link it with specific versions of libelf and libdwarf
$ spack install mpileaks@1.1.2 %gcc@4.7.3 +debug ^libelf@0.8.12 ^libdwarf@20130729+debug
------------------------
Non-destructive installs
------------------------
-------------------------------------
Spack installs every unique package/dependency configuration into its
own prefix, so new installs will not break existing ones.
-------------------------------
Packages can peacefully coexist
-------------------------------
-------------------------------------
Spack avoids library misconfiguration by using ``RPATH`` to link
dependencies. When a user links a library or runs a program, it is
tied to the dependencies it was built with, so there is no need to
manipulate ``LD_LIBRARY_PATH`` at runtime.
-------------------------
Creating packages is easy
-------------------------
-------------------------------------
To create a new package, all Spack needs is a URL for the source
archive. The ``spack create`` command will create a boilerplate
@@ -91,7 +86,7 @@ in pure Python.
For example, this command:
.. code-block:: console
.. code-block:: sh
$ spack create http://www.mr511.de/software/libelf-0.8.13.tar.gz
@@ -101,26 +96,16 @@ creates a simple python file:
from spack import *

class Libelf(Package):
    """FIXME: Put a proper description of your package here."""
    # FIXME: Add a proper url for your package's homepage here.
    homepage = "http://www.example.com"
    homepage = "http://www.example.com/"
    url = "http://www.mr511.de/software/libelf-0.8.13.tar.gz"

    version('0.8.13', '4136d7b4c04df68b686570afa26988ac')

    # FIXME: Add dependencies if required.
    # depends_on('foo')

    def install(self, spec, prefix):
        # FIXME: Modify the configure line to suit your build system here.
        configure('--prefix={0}'.format(prefix))

    # FIXME: Add logic to build and install here.
    def install(self, prefix):
        configure("--prefix=%s" % prefix)
        make()
        make('install')
        make("install")
It doesn't take much Python coding to get from there to a working
package:

File diff suppressed because it is too large Load Diff

View File

@@ -3,9 +3,8 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
===================
Spack Documentation
===================
=================================
Spack is a package management tool designed to support multiple
versions and configurations of software on a wide variety of platforms
@@ -28,7 +27,7 @@ Get spack from the `github repository
<https://github.com/llnl/spack>`_ and install your first
package:
.. code-block:: console
.. code-block:: sh
$ git clone https://github.com/llnl/spack.git
$ cd spack/bin
@@ -37,40 +36,25 @@ package:
If you're new to spack and want to start using it, see :doc:`getting_started`,
or refer to the full manual below.
Table of Contents
---------------------
.. toctree::
:maxdepth: 2
:caption: Basics
features
getting_started
basic_usage
workflows
tutorial_sc16
.. toctree::
:maxdepth: 2
:caption: Reference
configuration
config_yaml
build_settings
packaging_guide
application_developer_support
mirrors
module_file_support
repositories
configuration
developer_guide
case_studies
command_index
package_list
.. toctree::
:maxdepth: 2
:caption: Contributing
contribution_guide
packaging_guide
developer_guide
API Docs <spack>
==================
Indices and tables
==================

View File

@@ -1,8 +1,7 @@
.. _mirrors:
=======
Mirrors
=======
============================
Some sites may not have access to the internet for fetching packages.
These sites will need a local repository of tarballs from which they
@@ -11,29 +10,27 @@ mirror is a URL that points to a directory, either on the local
filesystem or on some server, containing tarballs for all of Spack's
packages.
Here's an example of a mirror's directory structure:
Here's an example of a mirror's directory structure::
.. code-block:: none
mirror/
  cmake/
    cmake-2.8.10.2.tar.gz
  dyninst/
    dyninst-8.1.1.tgz
    dyninst-8.1.2.tgz
  libdwarf/
    libdwarf-20130126.tar.gz
    libdwarf-20130207.tar.gz
    libdwarf-20130729.tar.gz
  libelf/
    libelf-0.8.12.tar.gz
    libelf-0.8.13.tar.gz
  libunwind/
    libunwind-1.1.tar.gz
  mpich/
    mpich-3.0.4.tar.gz
  mvapich2/
    mvapich2-1.9.tgz
The structure is very simple. There is a top-level directory. The
second level directories are named after packages, and the third level
@@ -52,16 +49,27 @@ contains tarballs for each package, named after each package.
not standardize on a particular compression algorithm, because this
would potentially require expanding and re-compressing each archive.
.. _cmd-spack-mirror:
.. _spack-mirror:
----------------
``spack mirror``
----------------
----------------------------
Mirrors are managed with the ``spack mirror`` command. The help for
``spack mirror`` looks like this:
``spack mirror`` looks like this::
.. command-output:: spack help mirror
$ spack mirror -h
usage: spack mirror [-h] SUBCOMMAND ...
positional arguments:
SUBCOMMAND
create Create a directory to be used as a spack mirror, and fill
it with package archives.
add Add a mirror to Spack.
remove Remove a mirror by name.
list Print out available mirrors to the console.
optional arguments:
-h, --help show this help message and exit
The ``create`` command actually builds a mirror by fetching all of its
packages from the internet and checksumming them.
@@ -71,9 +79,8 @@ control the URL(s) from which Spack downloads its packages.
.. _spack-mirror-create:
-----------------------
``spack mirror create``
-----------------------
----------------------------
You can create a mirror using the ``spack mirror create`` command, assuming
you're on a machine where you can access the internet.
@@ -82,7 +89,8 @@ The command will iterate through all of Spack's packages and download
the safe ones into a directory structure like the one above. Here is
what it looks like:
.. code-block:: console
.. code-block:: bash
$ spack mirror create libelf libdwarf
==> Created new mirror in spack-mirror-2014-06-24
@@ -116,31 +124,25 @@ what it looks like:
Once this is done, you can tar up the ``spack-mirror-2014-06-24`` directory and
copy it over to the machine you want it hosted on.
^^^^^^^^^^^^^^^^^^^
Custom package sets
^^^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~~~~
Normally, ``spack mirror create`` downloads all the archives it has
checksums for. If you want to only create a mirror for a subset of
packages, you can do that by supplying a list of package specs on the
command line after ``spack mirror create``. For example, this
command:
command::
.. code-block:: console
$ spack mirror create libelf@0.8.12: boost@1.44:
$ spack mirror create libelf@0.8.12: boost@1.44:
Will create a mirror for libelf versions greater than or equal to
0.8.12 and boost versions greater than or equal to 1.44.
^^^^^^^^^^^^
Mirror files
^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~~~~
If you have a *very* large number of packages you want to mirror, you
can supply a file with specs in it, one per line:
.. code-block:: console
can supply a file with specs in it, one per line::
$ cat specs.txt
libdwarf
@@ -148,7 +150,7 @@ can supply a file with specs in it, one per line:
boost@1.44:
boost@1.39.0
...
$ spack mirror create --file specs.txt
$ spack mirror create -f specs.txt
...
This is useful if there is a specific suite of software managed by
@@ -156,69 +158,57 @@ your site.
.. _spack-mirror-add:
--------------------
``spack mirror add``
--------------------
----------------------------
Once you have a mirror, you need to let spack know about it. This is
relatively simple. First, figure out the URL for the mirror. If it's
a file, you can use a file URL like this one:
a file, you can use a file URL like this one::
.. code-block:: none
file://~/spack-mirror-2014-06-24
file:///Users/gamblin2/spack-mirror-2014-06-24
That points to the directory on the local filesystem. If it were on a
web server, you could use a URL like this one:
https://example.com/some/web-hosted/directory/spack-mirror-2014-06-24
https://example.com/some/web-hosted/directory/spack-mirror-2014-06-24
Spack will use the URL as the root for all of the packages it fetches.
You can tell your Spack installation to use that mirror like this:
.. code-block:: console
.. code-block:: bash
$ spack mirror add local_filesystem file://~/spack-mirror-2014-06-24
$ spack mirror add local_filesystem file:///Users/gamblin2/spack-mirror-2014-06-24
Each mirror has a name so that you can refer to it again later.
.. _spack-mirror-list:
---------------------
``spack mirror list``
---------------------
----------------------------
To see all the mirrors Spack knows about, run ``spack mirror list``:
.. code-block:: console
To see all the mirrors Spack knows about, run ``spack mirror list``::
$ spack mirror list
local_filesystem file://~/spack-mirror-2014-06-24
local_filesystem file:///Users/gamblin2/spack-mirror-2014-06-24
.. _spack-mirror-remove:
-----------------------
``spack mirror remove``
-----------------------
----------------------------
To remove a mirror by name, run:
.. code-block:: console
To remove a mirror by name::
$ spack mirror remove local_filesystem
$ spack mirror list
==> No mirrors configured.
-----------------
Mirror precedence
-----------------
----------------------------
Adding a mirror really adds a line in ``~/.spack/mirrors.yaml``:
.. code-block:: yaml
Adding a mirror really adds a line in ``~/.spack/mirrors.yaml``::
mirrors:
  local_filesystem: file://~/spack-mirror-2014-06-24
  local_filesystem: file:///Users/gamblin2/spack-mirror-2014-06-24
  remote_server: https://example.com/some/web-hosted/directory/spack-mirror-2014-06-24
If you want to change the order in which mirrors are searched for
@@ -227,19 +217,18 @@ search the topmost mirror first and the bottom-most mirror last.
.. _caching:
-------------------
Local Default Cache
-------------------
----------------------------
Spack caches resources that are downloaded as part of installs. The cache is
a valid spack mirror: it uses the same directory structure and naming scheme
as other Spack mirrors (so it can be copied anywhere and referenced with a URL
like other mirrors). The mirror is maintained locally (within the Spack
installation directory) at :file:`var/spack/cache/`. It is always enabled (and
is always searched first when attempting to retrieve files for an installation)
but can be cleared with :ref:`purge <cmd-spack-purge>`; the cache directory can also
be deleted manually without issue.
like other mirrors). The mirror is maintained locally (within the Spack
installation directory) at :file:`var/spack/cache/`. It is always enabled (and
is always searched first when attempting to retrieve files for an installation)
but can be cleared with :ref:`purge <spack-purge>`; the cache directory can also
be deleted manually without issue.
Caching includes retrieved tarball archives and source control repositories, but
only resources with an associated digest or commit ID (e.g. a revision number
only resources with an associated digest or commit ID (e.g. a revision number
for SVN) will be cached.

View File

@@ -1,682 +0,0 @@
.. _modules:
=======
Modules
=======
The use of module systems to manage user environments in a controlled way
is a common practice at HPC centers, and one often embraced also by individual
programmers on their development machines. To support this common practice
Spack provides integration with `Environment Modules
<http://modules.sourceforge.net/>`_ , `LMod
<http://lmod.readthedocs.io/en/latest/>`_ and `Dotkit <https://computing.llnl.gov/?set=jobs&page=dotkit>`_ by:
* generating module files after a successful installation
* providing commands that can leverage the spec syntax to manipulate modules
In the following you will see how to activate shell support for the Spack
commands that require it, and discover what benefits this brings compared to
dealing directly with automatically generated module files.
.. note::
If your machine does not already have a module system installed,
we advise you to use either Environment Modules or LMod. See :ref:`InstallEnvironmentModules`
for more details.
.. _shell-support:
-------------
Shell support
-------------
You can enable shell support by sourcing the appropriate setup file
in the ``$SPACK_ROOT/share/spack`` directory.
For ``bash`` or ``ksh`` users:
.. code-block:: console
$ . ${SPACK_ROOT}/share/spack/setup-env.sh
For ``csh`` and ``tcsh`` instead:
.. code-block:: console
$ source $SPACK_ROOT/share/spack/setup-env.csh
.. note::
You can put the source line in your ``.bashrc`` or ``.cshrc`` to
have Spack's shell support available on the command line at any login.
----------------------------
Using module files via Spack
----------------------------
If you have shell support enabled you should be able to run either
``module avail`` or ``use -l spack`` to see what module/dotkit files have
been installed. Here is sample output of those programs, showing lots
of installed packages.
.. code-block:: console
$ module avail
------- ~/spack/share/spack/modules/linux-debian7-x86_64 --------
adept-utils@1.0%gcc@4.4.7-5adef8da libelf@0.8.13%gcc@4.4.7
automaded@1.0%gcc@4.4.7-d9691bb0 libelf@0.8.13%intel@15.0.0
boost@1.55.0%gcc@4.4.7 mpc@1.0.2%gcc@4.4.7-559607f5
callpath@1.0.1%gcc@4.4.7-5dce4318 mpfr@3.1.2%gcc@4.4.7
dyninst@8.1.2%gcc@4.4.7-b040c20e mpich@3.0.4%gcc@4.4.7
gcc@4.9.1%gcc@4.4.7-93ab98c5 mpich@3.0.4%gcc@4.9.0
gmp@6.0.0a%gcc@4.4.7 mrnet@4.1.0%gcc@4.4.7-72b7881d
graphlib@2.0.0%gcc@4.4.7 netgauge@2.4.6%gcc@4.9.0-27912b7b
launchmon@1.0.1%gcc@4.4.7 stat@2.1.0%gcc@4.4.7-51101207
libNBC@1.1.1%gcc@4.9.0-27912b7b sundials@2.5.0%gcc@4.9.0-27912b7b
libdwarf@20130729%gcc@4.4.7-b52fac98
.. code-block:: console
$ use -l spack
spack ----------
adept-utils@1.0%gcc@4.4.7-5adef8da - adept-utils @1.0
automaded@1.0%gcc@4.4.7-d9691bb0 - automaded @1.0
boost@1.55.0%gcc@4.4.7 - boost @1.55.0
callpath@1.0.1%gcc@4.4.7-5dce4318 - callpath @1.0.1
dyninst@8.1.2%gcc@4.4.7-b040c20e - dyninst @8.1.2
gmp@6.0.0a%gcc@4.4.7 - gmp @6.0.0a
libNBC@1.1.1%gcc@4.9.0-27912b7b - libNBC @1.1.1
libdwarf@20130729%gcc@4.4.7-b52fac98 - libdwarf @20130729
libelf@0.8.13%gcc@4.4.7 - libelf @0.8.13
libelf@0.8.13%intel@15.0.0 - libelf @0.8.13
mpc@1.0.2%gcc@4.4.7-559607f5 - mpc @1.0.2
mpfr@3.1.2%gcc@4.4.7 - mpfr @3.1.2
mpich@3.0.4%gcc@4.4.7 - mpich @3.0.4
mpich@3.0.4%gcc@4.9.0 - mpich @3.0.4
netgauge@2.4.6%gcc@4.9.0-27912b7b - netgauge @2.4.6
sundials@2.5.0%gcc@4.9.0-27912b7b - sundials @2.5.0
The names here should look familiar; they're the same ones from
``spack find``. You *can* use the names here directly. For example,
you could type either of these commands to load the callpath module:
.. code-block:: console
$ use callpath@1.0.1%gcc@4.4.7-5dce4318
.. code-block:: console
$ module load callpath@1.0.1%gcc@4.4.7-5dce4318
.. _cmd-spack-load:
^^^^^^^^^^^^^^^^^^^^^^^
``spack load / unload``
^^^^^^^^^^^^^^^^^^^^^^^
Neither of these is particularly pretty, easy to remember, or
easy to type. Luckily, Spack has its own interface for using modules
and dotkits. You can use the same spec syntax you're used to:
========================= ==========================
Environment Modules Dotkit
========================= ==========================
``spack load <spec>`` ``spack use <spec>``
``spack unload <spec>`` ``spack unuse <spec>``
========================= ==========================
And you can use the same shortened names you use everywhere else in
Spack. For example, this will add the ``mpich`` package built with
``gcc`` to your path:
.. code-block:: console
$ spack install mpich %gcc@4.4.7
# ... wait for install ...
$ spack use mpich %gcc@4.4.7
Prepending: mpich@3.0.4%gcc@4.4.7 (ok)
$ which mpicc
~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4/bin/mpicc
Or, similarly with modules, you could type:
.. code-block:: console
$ spack load mpich %gcc@4.4.7
These commands will add appropriate directories to your ``PATH``,
``MANPATH``, ``CPATH``, and ``LD_LIBRARY_PATH``. When you no longer
want to use a package, you can type unload or unuse similarly:
.. code-block:: console
$ spack unload mpich %gcc@4.4.7 # modules
$ spack unuse mpich %gcc@4.4.7 # dotkit
.. note::
These ``use``, ``unuse``, ``load``, and ``unload`` subcommands are
only available if you have enabled Spack's shell support *and* you
have dotkit or modules installed on your machine.
^^^^^^^^^^^^^^^^^^^^^^
Ambiguous module names
^^^^^^^^^^^^^^^^^^^^^^
If a spec used with load/unload or use/unuse is ambiguous (i.e. more
than one installed package matches it), then Spack will warn you:
.. code-block:: console
$ spack load libelf
==> Error: Multiple matches for spec libelf. Choose one:
libelf@0.8.13%gcc@4.4.7 arch=linux-debian7-x86_64
libelf@0.8.13%intel@15.0.0 arch=linux-debian7-x86_64
You can either type the ``spack load`` command again with a fully
qualified argument, or you can add just enough extra constraints to
identify one package. For example, above, the key differentiator is
that one ``libelf`` is built with the Intel compiler, while the other
used ``gcc``. You could therefore just type:
.. code-block:: console
$ spack load libelf %intel
to identify just the one built with the Intel compiler.
.. _extensions:
.. _cmd-spack-module-loads:
^^^^^^^^^^^^^^^^^^^^^^
``spack module loads``
^^^^^^^^^^^^^^^^^^^^^^
In some cases, it is desirable to load not just a module, but also all
the modules it depends on. This is not required for most modules
because Spack builds binaries with RPATH support. However, not all
packages use RPATH to find their dependencies: this can be true in
particular for Python extensions, which are currently *not* built with
RPATH.
Scripts to load modules recursively may be made with the command:
.. code-block:: console
$ spack module loads --dependencies <spec>
An equivalent alternative is:
.. code-block:: console
$ source <( spack module loads --dependencies <spec> )
.. warning::
The ``spack load`` command does not currently accept the
``--dependencies`` flag. Use ``spack module loads`` instead, for
now.
.. See #1662
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Module Commands for Shell Scripts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Although Spack is flexible, the ``module`` command is much faster.
This could become an issue when emitting a series of ``spack load``
commands inside a shell script. In such cases, ``spack module loads``
may instead be used to generate code that can be cut-and-pasted into
a shell script. For example:
.. code-block:: console
$ spack module loads --dependencies py-numpy git
# bzip2@1.0.6%gcc@4.9.3=linux-x86_64
module load bzip2-1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
# ncurses@6.0%gcc@4.9.3=linux-x86_64
module load ncurses-6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
# zlib@1.2.8%gcc@4.9.3=linux-x86_64
module load zlib-1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
# sqlite@3.8.5%gcc@4.9.3=linux-x86_64
module load sqlite-3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
# readline@6.3%gcc@4.9.3=linux-x86_64
module load readline-6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
# python@3.5.1%gcc@4.9.3=linux-x86_64
module load python-3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
# py-setuptools@20.5%gcc@4.9.3=linux-x86_64
module load py-setuptools-20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
# py-nose@1.3.7%gcc@4.9.3=linux-x86_64
module load py-nose-1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
# openblas@0.2.17%gcc@4.9.3+shared=linux-x86_64
module load openblas-0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
# py-numpy@1.11.0%gcc@4.9.3+blas+lapack=linux-x86_64
module load py-numpy-1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
# curl@7.47.1%gcc@4.9.3=linux-x86_64
module load curl-7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
# autoconf@2.69%gcc@4.9.3=linux-x86_64
module load autoconf-2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
# cmake@3.5.0%gcc@4.9.3~doc+ncurses+openssl~qt=linux-x86_64
module load cmake-3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
# expat@2.1.0%gcc@4.9.3=linux-x86_64
module load expat-2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
# git@2.8.0-rc2%gcc@4.9.3+curl+expat=linux-x86_64
module load git-2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd
The script may be further edited by removing unnecessary modules.
^^^^^^^^^^^^^^^
Module Prefixes
^^^^^^^^^^^^^^^
On some systems, modules are automatically prefixed with a certain
string; ``spack module loads`` needs to know about that prefix when it
issues ``module load`` commands. Add the ``--prefix`` option to your
``spack module loads`` commands if this is necessary.
For example, consider the following on one system:
.. code-block:: console
$ module avail
linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
$ spack module loads antlr # WRONG!
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load antlr-2.7.7-gcc-5.3.0-bdpl46y
$ spack module loads --prefix linux-SuSE11-x86_64/ antlr
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
----------------------------
Auto-generating Module Files
----------------------------
Module files are generated by post-install hooks after the successful
installation of a package. The following table summarizes the essential
information associated with the different file formats
that can be generated by Spack:
+-----------------------------+--------------------+-------------------------------+----------------------+
| | **Hook name** | **Default root directory** | **Compatible tools** |
+=============================+====================+===============================+======================+
| **Dotkit** | ``dotkit`` | share/spack/dotkit | DotKit |
+-----------------------------+--------------------+-------------------------------+----------------------+
| **TCL - Non-Hierarchical** | ``tcl`` | share/spack/modules | Env. Modules/LMod |
+-----------------------------+--------------------+-------------------------------+----------------------+
| **Lua - Hierarchical** | ``lmod`` | share/spack/lmod | LMod |
+-----------------------------+--------------------+-------------------------------+----------------------+
Though Spack ships with sensible defaults for the generation of module files,
one can customize many aspects of this process to accommodate package- or site-specific needs.
These customizations are enabled by either:
1. overriding certain callback APIs in the Python packages
2. writing specific rules in the ``modules.yaml`` configuration file
The former method best fits cases that are site-independent, e.g. injecting variables
from language interpreters into their extensions. The latter instead allows you to
fine-tune the content, naming, and creation of module files to meet site-specific conventions.
^^^^^^^^^^^^^^^^^^^^
``Package`` file API
^^^^^^^^^^^^^^^^^^^^
There are two methods that can be overridden in any ``package.py`` to affect the
content of generated module files. The first one is:
.. code-block:: python
def setup_environment(self, spack_env, run_env):
    """Set up the compile and runtime environments for a package."""
    pass
and can alter the content of the module file of *the same package where it is
overridden* by adding actions to ``run_env``. The second method is:
.. code-block:: python
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
    """Set up the environment of packages that depend on this one"""
    pass
and has a similar effect on the module files of dependent packages. Even in this
case, ``run_env`` must be filled with the desired list of environment modifications.
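As a minimal sketch (the ``FOO_HOME`` variable here is hypothetical, not taken
from any real package), an override of the first method might look like:

.. code-block:: python

   def setup_environment(self, spack_env, run_env):
       # Modifications queued on run_env end up in the generated module
       # file; spack_env would affect the build environment instead.
       run_env.set('FOO_HOME', self.prefix)
       run_env.prepend_path('PATH', self.prefix.bin)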
.. note::
The ``r`` package and callback APIs
A typical example in which overriding both methods proves to be useful
is given by the ``r`` package. This package installs libraries and headers
in non-standard locations, and it is possible to prepend the appropriate directory
to the corresponding environment variables:
================== =================================
LIBRARY_PATH ``self.prefix/rlib/R/lib``
LD_LIBRARY_PATH ``self.prefix/rlib/R/lib``
CPATH ``self.prefix/rlib/R/include``
================== =================================
with the following snippet:
.. literalinclude:: ../../../var/spack/repos/builtin/packages/r/package.py
:pyobject: R.setup_environment
The ``r`` package also knows which environment variable should be modified
to make language extensions provided by other packages available, and modifies
it appropriately in the override of the second method:
.. literalinclude:: ../../../var/spack/repos/builtin/packages/r/package.py
:lines: 128-129,146-151
.. _modules-yaml:
---------------------------------
Configuration in ``modules.yaml``
---------------------------------
The name of the configuration file that controls module generation behavior
is ``modules.yaml``. The default configuration:
.. literalinclude:: ../../../etc/spack/defaults/modules.yaml
:language: yaml
activates generation for ``tcl`` and ``dotkit`` module files and inspects
the installation folder of each package for the presence of a set of subdirectories
(``bin``, ``man``, ``share/man``, etc.). If any is found, its full path is prepended
to the environment variables listed below the folder name.
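The default file (included above) already contains such a mapping; as a
trimmed sketch of the idea, a ``prefix_inspections`` section maps
subdirectory names to the variables they should extend:

.. code-block:: yaml

   modules:
     prefix_inspections:
       bin:
         - PATH
       man:
         - MANPATH
       lib:
         - LIBRARY_PATH
         - LD_LIBRARY_PATH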
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Activation of other systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Any other module file generator shipped with Spack can be activated by adding it to the
list under the ``enable`` key in ``modules.yaml``. Currently the only generator that
is not activated by default is ``lmod``, which produces hierarchical Lua module files.
For each module system that can be enabled a finer configuration is possible.
Directives that are aimed at driving the generation of a particular type of module files
should be listed under a top level key that corresponds to the generator being
customized:
.. code-block:: yaml
modules:
  enable:
    - tcl
    - dotkit
    - lmod
  tcl:
    # contains environment modules specific customizations
  dotkit:
    # contains dotkit specific customizations
  lmod:
    # contains lmod specific customizations
All these module sections allow for both:
1. global directives that usually affect the whole layout of modules or the naming scheme
2. directives that affect only a set of packages and modify their content
For the latter point, in particular, it is possible to use anonymous specs
to select the set of packages to which the modifications should be applied.
.. _anonymous_specs:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Selection by anonymous specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The procedure to select packages using anonymous specs is a natural
extension of using them to install packages, the only difference being
that specs in this case **are not required to have a root package**.
Consider for instance this snippet:
.. code-block:: yaml
modules:
  tcl:
    # The keyword `all` selects every package
    all:
      environment:
        set:
          BAR: 'bar'
    # This anonymous spec selects any package that
    # depends on openmpi. The double colon at the
    # end clears the set of rules that matched so far.
    ^openmpi::
      environment:
        set:
          BAR: 'baz'
    # Selects any zlib package
    zlib:
      environment:
        prepend_path:
          LD_LIBRARY_PATH: 'foo'
    # Selects zlib compiled with gcc@4.8
    zlib%gcc@4.8:
      environment:
        unset:
          - FOOBAR
During module file generation, the configuration above will instruct
Spack to set the environment variable ``BAR=bar`` for every module,
unless the associated spec satisfies ``^openmpi``, in which case
``BAR=baz``. In addition, in any spec that satisfies ``zlib``, the value
``foo`` will be prepended to ``LD_LIBRARY_PATH``, and in any spec that
satisfies ``zlib%gcc@4.8`` the variable ``FOOBAR`` will be unset.
.. note::
Order does matter
The modifications associated with the ``all`` keyword are always evaluated
first, no matter where they appear in the configuration file. All the other
spec constraints are instead evaluated top to bottom.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Blacklist or whitelist the generation of specific module files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Anonymous specs are also used to prevent module files from being written or
to force them to be written. A common case for that at HPC centers is to hide
from users all of the software that needs to be built with system compilers.
Suppose, for instance, that ``gcc@4.4.7`` is provided by your system. Then,
with a configuration file like this one:
.. code-block:: yaml
modules:
  tcl:
    whitelist: ['gcc', 'llvm']  # Whitelist will have precedence over blacklist
    blacklist: ['%gcc@4.4.7']   # Assuming gcc@4.4.7 is the system compiler
you will skip the generation of module files for any package that
is compiled with ``gcc@4.4.7``, with the exception of any ``gcc``
or any ``llvm`` installation.
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Customize the naming scheme
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The names of environment modules generated by Spack are not always easy to
fully comprehend due to the long hash in the name. There are two module
configuration options to help with that. The first is a global setting to
adjust the hash length. It can be set anywhere from 0 to 32 and has a default
length of 7. This is the representation of the hash in the module file name and
does not affect the size of the package hash. Be aware that the shorter the
hash, the more likely naming conflicts will occur. The following snippet
shows how to set hash length in the module file names:
.. code-block:: yaml
modules:
  tcl:
    hash_length: 7
To help make module names more readable, and to help alleviate name conflicts
with a short hash, one can use the ``suffixes`` option in the modules
configuration file. This option will add strings to modules that match a spec.
For instance, the following config options,
.. code-block:: yaml
modules:
  tcl:
    all:
      suffixes:
        ^python@2.7.12: 'python-2.7.12'
        ^openblas: 'openblas'
will add a ``python-2.7.12`` version string to any package compiled with
a Python matching the spec ``python@2.7.12``. This makes it easy to see
which version of Python a set of Python extensions is associated with.
Likewise, the ``openblas`` string is attached to any program that has
openblas in the spec, most likely via the ``+blas`` variant specification.
.. note::
TCL module files
A modification that is specific to ``tcl`` module files is the possibility
to change the naming scheme of modules.
.. code-block:: yaml
modules:
  tcl:
    naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
    all:
      conflict: ['${PACKAGE}', 'intel/14.0.1']
will create module files that conflict with ``intel/14.0.1`` and with the
base directory of the same module, effectively preventing two or more
versions of the same software from being loaded at the same time. The tokens
available for use in this directive are the same ones understood by
the ``Spec.format`` method.
.. note::
LMod hierarchical module files
When ``lmod`` is activated, Spack will generate a set of hierarchical Lua module
files that are understood by LMod. The generated hierarchy always contains the
three layers ``Core`` / ``Compiler`` / ``MPI`` but can be further extended to
any other virtual dependency present in Spack. A case that could be useful in
practice is for instance:
.. code-block:: yaml
modules:
  enable:
    - lmod
  lmod:
    core_compilers: ['gcc@4.8']
    hierarchical_scheme: ['lapack']
that will generate a hierarchy in which the ``lapack`` layer is treated as the ``mpi``
one. This allows a site to build the same libraries or applications against different
implementations of ``mpi`` and ``lapack``, and let LMod switch safely from one to the
other.
.. warning::
Deep hierarchies and ``lmod spider``
For hierarchies that are deeper than three layers ``lmod spider`` may have some issues.
See `this discussion on the LMod project <https://github.com/TACC/Lmod/issues/114>`_.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Filter out environment modifications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Modifications to certain environment variables in module files are generated by
default, for instance by prefix inspections in the default configuration file.
There are cases though where some of these modifications are unwanted.
Suppose you need to avoid having ``CPATH`` and ``LIBRARY_PATH``
modified by your ``dotkit`` modules:
.. code-block:: yaml
modules:
  dotkit:
    all:
      filter:
        # Exclude changes to any of these variables
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
The configuration above will generate dotkit module files that will not contain
modifications to either ``CPATH`` or ``LIBRARY_PATH`` and environment module
files that instead will contain these modifications.
^^^^^^^^^^^^^^^^^^^^^
Autoload dependencies
^^^^^^^^^^^^^^^^^^^^^
In some cases it can be useful to have module files directly autoload
their dependencies. This may be the case for Python extensions, if not
activated using ``spack activate``:
.. code-block:: yaml
modules:
  tcl:
    ^python:
      autoload: 'direct'
The configuration file above will produce module files that will
automatically load their direct dependencies. The allowed values for the
``autoload`` statement are either ``none``, ``direct`` or ``all``.
.. note::
TCL prerequisites
In the ``tcl`` section of the configuration file it is possible to use
the ``prerequisites`` directive that accepts the same values as
``autoload``. It will produce module files that have a ``prereq``
statement instead of automatically loading other modules.
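For example, the ``prerequisites`` directive just described might be used
like this sketch, which adds ``prereq`` statements for the direct
dependencies of any package that depends on ``python``:

.. code-block:: yaml

   modules:
     tcl:
       ^python:
         prerequisites: 'direct'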
------------------------
Maintaining Module Files
------------------------
Spack not only provides great flexibility in the generation of module files
and in the customization of both their layout and content, but also ships with
a tool to ease the burden of their maintenance in production environments.
This tool is the ``spack module`` command:
.. command-output:: spack module --help
.. _cmd-spack-module-refresh:
^^^^^^^^^^^^^^^^^^^^^^^^
``spack module refresh``
^^^^^^^^^^^^^^^^^^^^^^^^
The command that regenerates module files to update their content or
their layout is ``module refresh``:
.. command-output:: spack module refresh --help
A set of packages can be selected using anonymous specs for the optional
``constraint`` positional argument. The argument ``--module-type`` identifies
the type of module files to refresh. Optionally the entire tree can be deleted
before regeneration if the change in layout is radical.
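For instance, combining the two options just described, one might
regenerate only the ``tcl`` module files of packages built with a given
compiler (the constraint spec here is just an example):

.. code-block:: console

   $ spack module refresh --module-type tcl %gcc@4.4.7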
.. _cmd-spack-module-rm:
^^^^^^^^^^^^^^^^^^^
``spack module rm``
^^^^^^^^^^^^^^^^^^^
If instead what you need is just to delete a few module files, then the right
command is ``module rm``:
.. command-output:: spack module rm --help
.. note::
We care about your module files!
Every modification to an existing module file will ask for
confirmation by default. If the command is used in a script, it is
possible to pass the ``-y`` argument, which skips this safety measure.
@@ -1,456 +0,0 @@
.. _repositories:
=============================
Package Repositories
=============================
Spack comes with over 1,000 built-in package recipes in
``var/spack/repos/builtin/``. This is a **package repository** -- a
directory that Spack searches when it needs to find a package by name.
You may need to maintain packages for restricted, proprietary or
experimental software separately from the built-in repository. Spack
allows you to configure local repositories using either the
``repos.yaml`` or the ``spack repo`` command.
A package repository is a directory structured like this::
repo/
repo.yaml
packages/
hdf5/
package.py
mpich/
package.py
mpich-1.9-bugfix.patch
trilinos/
package.py
...
The top-level ``repo.yaml`` file contains configuration metadata for the
repository, and the ``packages`` directory contains subdirectories for
each package in the repository. Each package directory contains a
``package.py`` file and any patches or other files needed to build the
package.
Package repositories allow you to:
1. Maintain your own packages separately from Spack;
2. Share your packages (e.g. by hosting them in a shared file system),
without committing them to the built-in Spack package repository; and
3. Override built-in Spack packages with your own implementation.
Packages in a separate repository can also *depend on* built-in Spack
packages. So, you can leverage existing recipes without re-implementing
them in your own repository.
---------------------
``repos.yaml``
---------------------
Spack uses the ``repos.yaml`` file in ``~/.spack`` (and :ref:`elsewhere
<configuration>`) to find repositories. Note that the ``repos.yaml``
configuration file is distinct from the ``repo.yaml`` file in each
repository. For more on the YAML format, and on how configuration file
precedence works in Spack, see :ref:`configuration <configuration>`.
The default ``etc/spack/defaults/repos.yaml`` file looks like this:
.. code-block:: yaml
repos:
  - $spack/var/spack/repos/builtin
The file starts with ``repos:`` and contains a single ordered list of
paths to repositories. Each path is on a separate line starting with
``-``. You can add a repository by inserting another path into the list:
.. code-block:: yaml
repos:
  - /opt/local-repo
  - $spack/var/spack/repos/builtin
When Spack interprets a spec, e.g. ``mpich`` in ``spack install mpich``,
it searches these repositories in order (first to last) to resolve each
package name. In this example, Spack will look for the following
packages and use the first valid file:
1. ``/opt/local-repo/packages/mpich/package.py``
2. ``$spack/var/spack/repos/builtin/packages/mpich/package.py``
.. note::
Currently, Spack can only use repositories in the file system. We plan
to eventually support URLs in ``repos.yaml``, so that you can easily
point to remote package repositories, but that is not yet implemented.
---------------------
Namespaces
---------------------
Every repository in Spack has an associated **namespace** defined in its
top-level ``repo.yaml`` file. If you look at
``var/spack/repos/builtin/repo.yaml`` in the built-in repository, you'll
see that its namespace is ``builtin``:
.. code-block:: console
$ cat var/spack/repos/builtin/repo.yaml
repo:
namespace: builtin
Spack records the repository namespace of each installed package. For
example, if you install the ``mpich`` package from the ``builtin`` repo,
Spack records its fully qualified name as ``builtin.mpich``. This
accomplishes two things:
1. You can have packages with the same name from different namespaces
installed at once.
2. You can easily determine which repository a package came from after it
is installed (more :ref:`below <namespace-example>`).
.. note::
It may seem redundant for a repository to have both a namespace and a
path, but repository *paths* may change over time, or, as mentioned
above, a locally hosted repository path may eventually be hosted at
some remote URL.
Namespaces are designed to allow *package authors* to associate a
unique identifier with their packages, so that the package can be
identified even if the repository moves. This is why the namespace is
determined by the ``repo.yaml`` file in the repository rather than the
local ``repos.yaml`` configuration: the *repository maintainer* sets
the name.
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Uniqueness
^^^^^^^^^^^^^^^^^^^^^^^^^^^
You should choose a namespace that uniquely identifies your package
repository. For example, if you make a repository for packages written
by your organization, you could use your organization's name. You can
also nest namespaces using periods, so you could identify a repository by
a sub-organization. For example, LLNL might use a namespace for its
internal repositories like ``llnl``. Packages from the Physical & Life
Sciences directorate (PLS) might use the ``llnl.pls`` namespace, and
packages created by the Computation directorate might use ``llnl.comp``.
Spack cannot ensure that every repository is named uniquely, but it will
prevent you from registering two repositories with the same namespace at
the same time. If you try to add a repository that has the same name as
an existing one, e.g. ``builtin``, Spack will print a warning message.
.. _namespace-example:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Namespace example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Suppose that LLNL maintains its own version of ``mpich``, separate from
Spack's built-in ``mpich`` package, and suppose you've installed both
LLNL's and Spack's ``mpich`` packages. If you just use ``spack find``,
you won't see a difference between these two packages:
.. code-block:: console
$ spack find
==> 2 installed packages.
-- linux-rhel6-x86_64 / gcc@4.4.7 -------------
mpich@3.2 mpich@3.2
However, if you use ``spack find -N``, Spack will display the packages
with their namespaces:
.. code-block:: console
$ spack find -N
==> 2 installed packages.
-- linux-rhel6-x86_64 / gcc@4.4.7 -------------
builtin.mpich@3.2 llnl.comp.mpich@3.2
Now you know which one is LLNL's special version, and which one is the
built-in Spack package. As you might guess, packages that are identical
except for their namespace will still have different hashes:
.. code-block:: console
$ spack find -lN
==> 2 installed packages.
-- linux-rhel6-x86_64 / gcc@4.4.7 -------------
c35p3gc builtin.mpich@3.2 itoqmox llnl.comp.mpich@3.2
All Spack commands that take a package :ref:`spec <sec-specs>` can also
accept a fully qualified spec with a namespace. This means you can use
the namespace to be more specific when designating, e.g., which package
you want to uninstall:
.. code-block:: console
$ spack uninstall llnl.comp.mpich
----------------------------
Overriding built-in packages
----------------------------
Spack's search semantics mean that you can make your own implementation
of a built-in Spack package (like ``mpich``), put it in a repository, and
use it to override the built-in package. As long as the repository
containing your ``mpich`` is listed earlier than any other in
``repos.yaml``, any built-in package that depends on ``mpich`` will use
the one in your repository.
Suppose you have three repositories: the builtin Spack repo
(``builtin``), a shared repo for your institution (e.g., ``llnl``), and a
repo containing your own prototype packages (``proto``). Suppose they
contain packages as follows:
+--------------+------------------------------------+-----------------------------+
| Namespace | Path to repo | Packages |
+==============+====================================+=============================+
| ``proto`` | ``~/proto`` | ``mpich`` |
+--------------+------------------------------------+-----------------------------+
| ``llnl`` | ``/usr/local/llnl`` | ``hdf5`` |
+--------------+------------------------------------+-----------------------------+
| ``builtin`` | ``$spack/var/spack/repos/builtin`` | ``mpich``, ``hdf5``, others |
+--------------+------------------------------------+-----------------------------+
Suppose that ``hdf5`` depends on ``mpich``. You can override the
built-in ``hdf5`` by adding the ``llnl`` repo to ``repos.yaml``:
.. code-block:: yaml
repos:
  - /usr/local/llnl
  - $spack/var/spack/repos/builtin
``spack install hdf5`` will install ``llnl.hdf5 ^builtin.mpich``.
If, instead, ``repos.yaml`` looks like this:
.. code-block:: yaml
repos:
  - ~/proto
  - /usr/local/llnl
  - $spack/var/spack/repos/builtin
``spack install hdf5`` will install ``llnl.hdf5 ^proto.mpich``.
Any unqualified package name will be resolved by searching ``repos.yaml``
from the first entry to the last. You can force a particular
repository's package by using a fully qualified name. For example, if
your ``repos.yaml`` is as above, and you want ``builtin.mpich`` instead
of ``proto.mpich``, you can write::
spack install hdf5 ^builtin.mpich
which will install ``llnl.hdf5 ^builtin.mpich``.
Similarly, you can force the ``builtin.hdf5`` like this::
spack install builtin.hdf5 ^builtin.mpich
This will not search ``repos.yaml`` at all, as the ``builtin`` repo is
specified in both cases. It will install ``builtin.hdf5
^builtin.mpich``.
If you want to see which repositories will be used in a build *before*
you install it, you can use ``spack spec -N``:
.. code-block:: console
$ spack spec -N hdf5
Input spec
--------------------------------
hdf5
Normalized
--------------------------------
hdf5
^zlib@1.1.2:
Concretized
--------------------------------
builtin.hdf5@1.10.0-patch1%clang@7.0.2-apple+cxx~debug+fortran+mpi+shared~szip~threadsafe arch=darwin-elcapitan-x86_64
^builtin.openmpi@2.0.1%clang@7.0.2-apple~mxm~pmi~psm~psm2~slurm~sqlite3~thread_multiple~tm~verbs+vt arch=darwin-elcapitan-x86_64
^builtin.hwloc@1.11.4%clang@7.0.2-apple arch=darwin-elcapitan-x86_64
^builtin.libpciaccess@0.13.4%clang@7.0.2-apple arch=darwin-elcapitan-x86_64
^builtin.libtool@2.4.6%clang@7.0.2-apple arch=darwin-elcapitan-x86_64
^builtin.m4@1.4.17%clang@7.0.2-apple+sigsegv arch=darwin-elcapitan-x86_64
^builtin.libsigsegv@2.10%clang@7.0.2-apple arch=darwin-elcapitan-x86_64
^builtin.pkg-config@0.29.1%clang@7.0.2-apple+internal_glib arch=darwin-elcapitan-x86_64
^builtin.util-macros@1.19.0%clang@7.0.2-apple arch=darwin-elcapitan-x86_64
^builtin.zlib@1.2.8%clang@7.0.2-apple+pic arch=darwin-elcapitan-x86_64
.. warning::
You *can* use a fully qualified package name in a ``depends_on``
directive in a ``package.py`` file, like so::
depends_on('proto.hdf5')
This is *not* recommended, as it makes it very difficult for
multiple repos to be composed and shared. A ``package.py`` like this
will fail if the ``proto`` repository is not registered in
``repos.yaml``.
.. _cmd-spack-repo:
--------------------------
``spack repo``
--------------------------
Spack's :ref:`configuration system <configuration>` allows repository
settings to come from ``repos.yaml`` files in many locations. If you
want to see the repositories registered as a result of all configuration
files, use ``spack repo list``.
^^^^^^^^^^^^^^^^^^^
``spack repo list``
^^^^^^^^^^^^^^^^^^^
.. code-block:: console
$ spack repo list
==> 2 package repositories.
myrepo ~/myrepo
builtin ~/spack/var/spack/repos/builtin
Each repository is listed with its associated namespace. To get the raw,
merged YAML from all configuration files, use ``spack config get repos``:
.. code-block:: console
$ spack config get repos
repos:
- ~/myrepo
- $spack/var/spack/repos/builtin
Note that, unlike ``spack repo list``, this does not include the
namespace, which is read from each repo's ``repo.yaml``.
^^^^^^^^^^^^^^^^^^^^^
``spack repo create``
^^^^^^^^^^^^^^^^^^^^^
To make your own repository, you don't need to construct a directory
yourself; you can use the ``spack repo create`` command.
.. code-block:: console
$ spack repo create myrepo
==> Created repo with namespace 'myrepo'.
==> To register it with spack, run this command:
spack repo add ~/myrepo
$ ls myrepo
packages/ repo.yaml
$ cat myrepo/repo.yaml
repo:
namespace: 'myrepo'
By default, the namespace of a new repo matches its directory's name.
You can supply a custom namespace with a second argument, e.g.:
.. code-block:: console
$ spack repo create myrepo llnl.comp
==> Created repo with namespace 'llnl.comp'.
==> To register it with spack, run this command:
spack repo add ~/myrepo
$ cat myrepo/repo.yaml
repo:
namespace: 'llnl.comp'
^^^^^^^^^^^^^^^^^^
``spack repo add``
^^^^^^^^^^^^^^^^^^
Once your repository is created, you can register it with Spack with
``spack repo add``:
.. code-block:: console
$ spack repo add ./myrepo
==> Added repo with namespace 'llnl.comp'.
$ spack repo list
==> 2 package repositories.
llnl.comp ~/myrepo
builtin ~/spack/var/spack/repos/builtin
This simply adds the repo to your ``repos.yaml`` file.
Once a repository is registered like this, you should be able to see its
packages' names in the output of ``spack list``, and you should be able
to build them using ``spack install <name>`` as you would with any
built-in package.
^^^^^^^^^^^^^^^^^^^^^
``spack repo remove``
^^^^^^^^^^^^^^^^^^^^^
You can remove an already-registered repository with ``spack repo rm``.
This will work whether you pass the repository's namespace *or* its
path.
By namespace:
.. code-block:: console
$ spack repo rm llnl.comp
==> Removed repository ~/myrepo with namespace 'llnl.comp'.
$ spack repo list
==> 1 package repository.
builtin ~/spack/var/spack/repos/builtin
By path:
.. code-block:: console
$ spack repo rm ~/myrepo
==> Removed repository ~/myrepo
$ spack repo list
==> 1 package repository.
builtin ~/spack/var/spack/repos/builtin
--------------------------------
Repo namespaces and Python
--------------------------------
You may have noticed that namespace notation for repositories is similar
to the notation for namespaces in Python. As it turns out, you *can*
treat Spack repositories like Python packages; this is how they are
implemented.
You could, for example, extend a ``builtin`` package in your own
repository:
.. code-block:: python
from spack.pkg.builtin.mpich import Mpich

class MyPackage(Mpich):
    ...
Spack repo namespaces are actually Python namespaces tacked on under
``spack.pkg``. The search semantics of ``repos.yaml`` are actually
implemented using Python's built-in `sys.path
<https://docs.python.org/2/library/sys.html#sys.path>`_ search. The
:py:mod:`spack.repository` module implements a custom `Python importer
<https://docs.python.org/2/library/imp.html>`_.
.. warning::
The mechanism for extending packages is not yet extensively tested,
and extending packages across repositories imposes inter-repo
dependencies, which may be hard to manage. Use this feature at your
own risk, but let us know if you have a use case for it.
@@ -0,0 +1,576 @@
Spack Workflows
===============================
The process of using Spack involves building packages, running
binaries from those packages, and developing software that depends on
those packages. For example, one might use Spack to build the
``netcdf`` package, use ``spack load`` to run the ``ncdump`` binary, and
finally, write a small C program to read/write a particular NetCDF file.
Spack supports a variety of workflows to suit a variety of situations
and user preferences; there is no single way to do all these things.
This chapter demonstrates different workflows that have been
developed, pointing out their pros and cons.
Definitions
############
First some basic definitions.
Package, Concrete Spec, Installed Package
------------------------------------------
In Spack, a package is an abstract recipe to build one piece of software.
Spack packages may be used to build, in principle, any version of that
software with any set of variants. Examples of packages include
``curl`` and ``zlib``.
A package may be *instantiated* to produce a concrete spec: one
possible realization of a particular package, out of combinatorially
many others. For example, here is a concrete spec
instantiated from ``curl``:
.. code-block:: sh
curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
Spack's core concretization algorithm generates concrete specs by
instantiating packages from its repo, based on a set of "hints",
including user input and the ``packages.yaml`` file. This algorithm
may be accessed at any time with the ``spack spec`` command. For
example:
.. code-block:: sh
$ spack spec curl
curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
Every time Spack installs a package, that installation corresponds to
a concrete spec. Only a vanishingly small fraction of possible
concrete specs will be installed at any one Spack site.
Consistent Sets
----------------
A set of Spack specs is said to be *consistent* if each package is
only instantiated one way within it --- that is, if two specs in the
set have the same package, then they must also have the same version,
variant, compiler, etc. For example, the following set is consistent:
.. code-block:: sh
curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
The following set is not consistent:
.. code-block:: sh
curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
zlib@1.2.7%gcc@5.3.0 arch=linux-SuSE11-x86_64
The consistency of a set of installed packages determines what may
be done with it. It is always possible to ``spack load`` any set of
installed packages, whether or not they are consistent, and run their
binaries from the command line. However, a set of installed packages
can only be linked together into one binary if it is consistent.
If the user produces a series of ``spack spec`` or ``spack load``
commands, in general there is no guarantee of consistency between
them. Spack's concretization procedure guarantees that the results of
any *single* ``spack spec`` call will be consistent. Therefore, the
best way to ensure a consistent set of specs is to create a Spack
package with dependencies, and then instantiate that package. We will
use this technique below.
Building Packages
##################
Suppose you are tasked with installing a set of software packages on a
system in order to support one application -- both a core application
program, plus software to prepare input and analyze output. The
required software might be summed up as a series of ``spack install``
commands in a script. If needed, this script can always be run again
in the future. For example:
.. code-block:: sh
spack install modele-utils
spack install emacs
spack install ncview
spack install nco
spack install modele-control
spack install py-numpy
In most cases, this script will not correctly install software
according to your specific needs: choices need to be made for
variants, versions, and virtual dependencies. It *is* possible to
specify these choices by extending specs on the command line; however,
the same choices must be specified repeatedly. For example, if you
wish to use ``openmpi`` to satisfy the ``mpi`` dependency, then
``^openmpi`` will have to appear on *every* ``spack install`` line
that uses MPI. It can get repetitive fast.
Customizing Spack installation options is easier to do in the
``~/.spack/packages.yaml`` file. In this file, you can specify
preferred versions and variants to use for packages. For example:
.. code-block:: yaml
packages:
  python:
    version: [3.5.1]
  modele-utils:
    version: [cmake]
  everytrace:
    version: [develop]
  eigen:
    variants: ~suitesparse
  netcdf:
    variants: +mpi
  all:
    compiler: [gcc@5.3.0]
    providers:
      mpi: [openmpi]
      blas: [openblas]
      lapack: [openblas]
This approach will work as long as you are building packages for just
one application.
Multiple Applications
-----------------------
Suppose instead you're building multiple inconsistent applications.
For example, users want package A to be built with ``openmpi`` and
package B with ``mpich`` --- but still share many other lower-level
dependencies. In this case, a single ``packages.yaml`` file will not
work. Plans are to implement *per-project* ``packages.yaml`` files.
In the meantime, one could write shell scripts to switch
``packages.yaml`` between multiple versions as needed, using symlinks.
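A minimal sketch of such a switch, assuming per-project copies of
``packages.yaml`` are kept in hypothetical directories of your choosing:

.. code-block:: sh

   # Point the active packages.yaml at project A's copy
   # (paths are illustrative only).
   ln -sfn ~/spack-configs/projectA/packages.yaml ~/.spack/packages.yaml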
Combinatorial Sets
--------------------------
Suppose that you are now tasked with systematically building many
incompatible versions of packages. For example, you need to build
``petsc`` 9 times for 3 different MPI implementations on 3 different
compilers, in order to support user needs. In this case, you will
need to either create 9 different ``packages.yaml`` files or, more
likely, create 9 different ``spack install`` command lines with the
correct options in the spec.
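One way to produce those 9 command lines is a simple shell loop; the
compiler and MPI versions below are illustrative only:

.. code-block:: sh

   # Build petsc against every combination of 3 compilers and 3 MPIs.
   for compiler in gcc@5.3.0 intel@16.0.1 clang@3.8.0; do
       for mpi in openmpi mvapich2 mpich; do
           spack install petsc %$compiler ^$mpi
       done
   done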
Loading Packages
#################
Once Spack packages have been built, the next step is to use them. As
with building packages, there are many ways to use them, depending on
the use case.
Simple Loads
--------------
Suppose that Spack has been used to install a set of command-line
programs, which users now wish to use. One can in principle put a
number of ``spack load`` commands into ``.bashrc``, for example:
.. code-block:: sh
spack load modele-utils
spack load emacs
spack load ncview
spack load nco
spack load modele-control
Although simple load scripts like this are useful in many cases, they
have some drawbacks:
1. The set of modules loaded by them will in general not be
consistent. They are a decent way to load commands to be called
from command shells. See below for better ways to assemble a
consistent set of packages for building application programs.
2. The ``spack spec`` and ``spack install`` commands use a
sophisticated concretization algorithm that chooses the "best"
among several options, taking the ``packages.yaml`` file into
account. The ``spack load`` and ``spack module loads`` commands, on
the other hand, are not very smart: if the user-supplied spec matches
more than one installed package, then ``spack module loads`` will
fail. This may change in the future. For now, the workaround is to
be more specific on any ``spack module loads`` lines that fail, as in
the example below.
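For example, if a bare spec matches two installed packages, adding a
compiler constraint (the specs here are illustrative) is usually enough
to disambiguate:

.. code-block:: sh

   $ spack module loads python %gcc@5.3.0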
Cached Simple Loads
----------------------
Another problem with using ``spack load`` is that it is slow; a typical
user environment could take several seconds to load, and would not be
appropriate to put into ``.bashrc`` directly. It is preferable to use
a series of ``spack module loads`` commands to pre-compute which
modules to load. These can be put in a script that is run whenever
installed Spack packages change. For example:
.. code-block:: sh
#!/bin/sh
#
# Generate module load commands in ~/env/spackenv
cat <<EOF | /bin/sh >$HOME/env/spackenv
FIND='spack module loads --prefix linux-SuSE11-x86_64/'
\$FIND modele-utils
\$FIND emacs
\$FIND ncview
\$FIND nco
\$FIND modele-control
EOF
The output of this file is written in ``~/env/spackenv``:
.. code-block:: sh
# binutils@2.25%gcc@5.3.0+gold~krellpatch~libiberty arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/binutils-2.25-gcc-5.3.0-6w5d2t4
# python@2.7.12%gcc@5.3.0~tk~ucs4 arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/python-2.7.12-gcc-5.3.0-2azoju2
# ncview@2.1.7%gcc@5.3.0 arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/ncview-2.1.7-gcc-5.3.0-uw3knq2
# nco@4.5.5%gcc@5.3.0 arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/nco-4.5.5-gcc-5.3.0-7aqmimu
# modele-control@develop%gcc@5.3.0 arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/modele-control-develop-gcc-5.3.0-7rddsij
# zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/zlib-1.2.8-gcc-5.3.0-fe5onbi
# curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/curl-7.50.1-gcc-5.3.0-4vlev55
# hdf5@1.10.0-patch1%gcc@5.3.0+cxx~debug+fortran+mpi+shared~szip~threadsafe arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/hdf5-1.10.0-patch1-gcc-5.3.0-pwnsr4w
# netcdf@4.4.1%gcc@5.3.0~hdf4+mpi arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/netcdf-4.4.1-gcc-5.3.0-rl5canv
# netcdf-fortran@4.4.4%gcc@5.3.0 arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/netcdf-fortran-4.4.4-gcc-5.3.0-stdk2xq
# modele-utils@cmake%gcc@5.3.0+aux+diags+ic arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/modele-utils-cmake-gcc-5.3.0-idyjul5
# everytrace@develop%gcc@5.3.0+fortran+mpi arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/everytrace-develop-gcc-5.3.0-p5wmb25
Users may now put ``source ~/env/spackenv`` into ``.bashrc``.
.. note::
Some module systems put a prefix on the names of modules created
by Spack. For example, that prefix is ``linux-SuSE11-x86_64/`` in
the above case. If a prefix is not needed, you may omit the
``--prefix`` flag from ``spack module loads``.
Transitive Dependencies
---------------------------
In the script above, each ``spack module loads`` command generates a
*single* ``module load`` line. Transitive dependencies do not usually
need to be loaded, only the modules the user needs in ``$PATH``. This is
because Spack builds binaries with RPATH. Spack's RPATH policy has
some nice features:
1. Modules for multiple inconsistent applications may be loaded
simultaneously. In the above example (Multiple Applications),
package A and package B can coexist together in the user's $PATH,
even though they use different MPIs.
2. RPATH eliminates a whole class of strange errors that can happen
in non-RPATH binaries when the wrong ``LD_LIBRARY_PATH`` is
loaded.
3. Recursive module systems such as LMod are not necessary.
4. Modules are not needed at all to execute binaries. If a path to a
binary is known, it may be executed. For example, the path for a
Spack-built compiler can be given to an IDE without requiring the
IDE to load that compiler's module.
Unfortunately, Spack's RPATH support does not work in all cases. For example:
1. Software comes in many forms --- not just compiled ELF binaries,
but also as interpreted code in Python, R, JVM bytecode, etc.
Those systems almost universally use an environment variable
analogous to ``LD_LIBRARY_PATH`` to dynamically load libraries.
2. Although Spack generally builds binaries with RPATH, it does not
currently do so for compiled Python extensions (for example,
``py-numpy``). Any libraries that these extensions depend on
(``openblas`` in this case, for example) must be specified in
``LD_LIBRARY_PATH``.
3. In some cases, Spack-generated binaries end up without a
functional RPATH for no discernible reason.
In cases where RPATH support doesn't make things "just work," it can
be necessary to load a module's dependencies as well as the module
itself. This is done by adding the ``--dependencies`` flag to the
``spack module loads`` command. For example, the following line,
added to the script above, would be used to load NumPy, along with
core Python, setuptools, and a number of other packages:
.. code-block:: sh
\$FIND --dependencies py-numpy
Extension Packages
---------------------
Extensions (see the :ref:`packaging_extension` section) may be used as
an alternative to loading Python packages directly. If extensions are
activated, then ``spack load python`` will also load all the
extensions activated for the given ``python``. However, Spack
extensions have two potential drawbacks:
1. Activated packages that involve compiled C extensions may still
need their dependencies to be loaded manually. For example,
``spack load openblas`` might be required to make ``py-numpy``
work.
2. Extensions "break" a core feature of Spack, which is that multiple
versions of a package can co-exist side-by-side. For example,
suppose you wish to use the same basic Python in two different
environments --- one with ``py-numpy@1.7`` and one with
``py-numpy@1.8``. Spack extensions will not support this potential
debugging use case.
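As a quick illustration of the activation alternative mentioned above
(package names are examples only):

.. code-block:: sh

   $ spack activate py-numpy   # link the extension into its python
   $ spack load python         # now also brings in the activated py-numpy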
Filesystem Views
-------------------------------
.. Maybe this is not the right location for this documentation.
The Spack installation area allows for many package installation trees
to coexist and gives the user choices as to what versions and variants
of packages to use. To use them, the user must rely on a way to
aggregate a subset of those packages. The section on Environment
Modules gives one good way to do that, which relies on setting various
environment variables. An alternative way to aggregate is through
**filesystem views**.
A filesystem view is a single directory tree which is the union of the
directory hierarchies of the individual package installation trees
that have been included. The files of the view's installed packages
are brought into the view by symbolic or hard links back to their
location in the original Spack installation area. As the view is
formed, any clashes due to a file having the exact same path in its
package installation tree are handled on a first-come-first-served
basis and a warning is printed. Packages and their dependencies can
be both added and removed. During removal, empty directories will be
purged. These operations can be limited to pertain to just the
packages listed by the user or to exclude specific dependencies and
they allow for software installed outside of Spack to coexist inside
the filesystem view tree.
By its nature, a filesystem view represents a particular choice of one
set of packages among all the versions and variants that are available
in the Spack installation area. It is thus equivalent to the
directory hierarchy that might exist under ``/usr/local``. While this
limits a view to including only one version/variant of any package, it
provides the benefits of having a simpler and traditional layout which
may be used without any particular knowledge that its packages were
built by Spack.
Views can be used for a variety of purposes including:
- A central installation in a traditional layout, e.g. ``/usr/local``, maintained over time by the sysadmin.
- A self-contained installation area which may form the basis of a top-level atomic versioning scheme, e.g. ``/opt/pro`` vs. ``/opt/dev``.
- Providing an atomic and monolithic binary distribution, e.g. for delivery as a single tarball.
- Producing ephemeral testing or development environments.
Using Filesystem Views
~~~~~~~~~~~~~~~~~~~~~~
A filesystem view is created and packages are linked in by the ``spack
view`` command's ``symlink`` and ``hardlink`` sub-commands. The
``spack view remove`` command can be used to unlink some or all of the
filesystem view.
The following example creates a filesystem view based
on an installed ``cmake`` package and then removes from the view the
files in the ``cmake`` package while retaining its dependencies.
.. code-block:: sh
$ spack view -v symlink myview cmake@3.5.2
==> Linking package: "ncurses"
==> Linking package: "zlib"
==> Linking package: "openssl"
==> Linking package: "cmake"
$ ls myview/
bin doc etc include lib share
$ ls myview/bin/
captoinfo clear cpack ctest infotocap openssl tabs toe tset
ccmake cmake c_rehash infocmp ncurses6-config reset tic tput
$ spack view -v -d false rm myview cmake@3.5.2
==> Removing package: "cmake"
$ ls myview/bin/
captoinfo c_rehash infotocap openssl tabs toe tset
clear infocmp ncurses6-config reset tic tput
Limitations of Filesystem Views
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes some limitations that should be considered when
using filesystem views.
Filesystem views are merely organizational. The binary executable
programs, shared libraries and other build products found in a view
are mere links into the "real" Spack installation area. If a view is
built with symbolic links, it requires the Spack-installed packages to
be kept in place. Building a view with hardlinks removes this
requirement, but any internal paths (e.g., RPATH or ``#!`` interpreter
specifications) will still require the Spack-installed package files
to be in place.
.. FIXME: reference the relocation work of Hegner and Gartung.
As described above, when a view is built only a single instance of a
file may exist in the unified filesystem tree. If more than one
package provides a file at the same path (relative to its own root)
then it is the first package added to the view that "wins". A warning
is printed and it is up to the user to determine if the conflict
matters.
It is up to the user to ensure a consistent view is produced. In
particular, if the user excludes packages, limits the following of
dependencies, or removes packages, the view may become inconsistent.
For example, if two packages require the same sub-tree of dependencies,
removing one package (recursively) will remove its dependencies and
leave the other package broken.
Build System Configuration Support
----------------------------------
Imagine a developer creating a CMake-based (or Autotools) project in a local
directory, which depends on libraries A-Z. Once Spack has installed
those dependencies, one would like to run ``cmake`` with appropriate
command line and environment so CMake can find them. The ``spack
setup`` command does this conveniently, producing a CMake
configuration that is essentially the same as how Spack *would have*
configured the project. This can be demonstrated with a usage
example:
.. code-block:: bash
cd myproject
spack setup myproject@local
mkdir build; cd build
../spconfig.py ..
make
make install
Notes:
* Spack must have ``myproject/package.py`` in its repository for
this to work.
* ``spack setup`` produces the executable script ``spconfig.py`` in
the local directory, and also creates the module file for the
package. ``spconfig.py`` is normally run from the user's
out-of-source build directory.
* The version number given to ``spack setup`` is arbitrary, just
like ``spack diy``. ``myproject/package.py`` does not need to
have any valid downloadable versions listed (typical when a
project is new).
* ``spconfig.py`` produces a CMake configuration that *does not* use the
Spack wrappers. Any resulting binaries *will not* use RPATH,
unless the user has enabled it. This is recommended for
development purposes, not production.
* ``spconfig.py`` is human readable, and can serve as a developer
reference of what dependencies are being used.
* ``make install`` installs the package into the Spack repository,
where it may be used by other Spack packages.
* CMake-generated makefiles re-run CMake in some circumstances. Use
of ``spconfig.py`` breaks this behavior, requiring the developer
to manually re-run ``spconfig.py`` when a ``CMakeLists.txt`` file
has changed.
CMakePackage
~~~~~~~~~~~~
In order to enable ``spack setup`` functionality, the author of
``myproject/package.py`` must subclass ``CMakePackage`` instead
of the standard ``Package`` superclass. Because CMake is
standardized, the packager does not need to tell Spack how to run
``cmake; make; make install``. Instead the packager only needs to
create (optional) methods ``configure_args()`` and ``configure_env()``,
which provide the arguments (as a list) and extra environment variables
(as a dict) to pass to the ``cmake`` command. Usually, these will
translate variant flags into CMake definitions. For example:
.. code-block:: python
def configure_args(self):
    spec = self.spec
    return [
        '-DUSE_EVERYTRACE=%s' % ('YES' if '+everytrace' in spec else 'NO'),
        '-DBUILD_PYTHON=%s' % ('YES' if '+python' in spec else 'NO'),
        '-DBUILD_GRIDGEN=%s' % ('YES' if '+gridgen' in spec else 'NO'),
        '-DBUILD_COUPLER=%s' % ('YES' if '+coupler' in spec else 'NO'),
        '-DUSE_PISM=%s' % ('YES' if '+pism' in spec else 'NO')]
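A corresponding ``configure_env()`` sketch, under the assumption stated
above that it returns a dict of extra environment variables for the
``cmake`` invocation:

.. code-block:: python

   def configure_env(self):
       # Hypothetical example: expose the package's Fortran compiler
       # to the cmake command through the environment.
       return {'FC': self.compiler.fc}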
If needed, a packager may also override methods defined in
``StagedPackage`` (see below).
StagedPackage
~~~~~~~~~~~~~
``CMakePackage`` is implemented by subclassing the ``StagedPackage``
superclass, which breaks down the standard ``Package.install()``
method into several sub-stages: ``setup``, ``configure``, ``build``
and ``install``. Details:
* Instead of implementing the standard ``install()`` method, package
authors implement the methods for the sub-stages
``install_setup()``, ``install_configure()``,
``install_build()``, and ``install_install()``.
* The ``spack install`` command runs the sub-stages ``configure``,
``build`` and ``install`` in order. (The ``setup`` stage is
not run by default; see below).
* The ``spack setup`` command runs the sub-stages ``setup``
and a dummy install (to create the module file).
* The sub-stage install methods take no arguments (other than
``self``). The arguments ``spec`` and ``prefix`` to the standard
``install()`` method may be accessed via ``self.spec`` and
``self.prefix``.
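A skeleton of a package overriding these sub-stages directly (all
bodies elided; this is a sketch, not a working recipe):

.. code-block:: python

   class Myproject(StagedPackage):
       def install_setup(self):
           pass    # generate spconfig.py and the module file

       def install_configure(self):
           pass    # analogous to ./configure or cmake

       def install_build(self):
           pass    # analogous to make

       def install_install(self):
           pass    # analogous to make install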
GNU Autotools
~~~~~~~~~~~~~
The ``setup`` functionality is currently only available for
CMake-based packages. Extending this functionality to GNU
Autotools-based packages would be easy (and should be done by a
developer who actively uses Autotools). Packages that use
non-standard build systems can gain ``setup`` functionality by
subclassing ``StagedPackage`` directly.
@@ -1,63 +0,0 @@
##############################################################################
# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
#
# This is a template package file for Spack. We've put "FIXME"
# next to all the things you'll want to change. Once you've handled
# them, you can save this file and test your package like this:
#
# spack install mpileaks
#
# You can edit this file again by typing:
#
# spack edit mpileaks
#
# See the Spack documentation for more information on packaging.
# If you submit this package back to Spack as a pull request,
# please first remove this boilerplate and all FIXME comments.
#
from spack import *
class Mpileaks(AutotoolsPackage):
    """FIXME: Put a proper description of your package here."""

    # FIXME: Add a proper url for your package's homepage here.
    homepage = "http://www.example.com"
    url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"

    version('1.0', '8838c574b39202a57d7c2d68692718aa')

    # FIXME: Add dependencies if required.
    # depends_on('m4', type='build')
    # depends_on('autoconf', type='build')
    # depends_on('automake', type='build')
    # depends_on('libtool', type='build')
    # depends_on('foo')

    def configure_args(self):
        # FIXME: Add arguments other than --prefix
        # FIXME: If not needed delete the function
        args = []
        return args

View File

@@ -1,48 +0,0 @@
##############################################################################
# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class Mpileaks(AutotoolsPackage):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
# FIXME: Add dependencies if required.
# depends_on('m4', type='build')
# depends_on('autoconf', type='build')
# depends_on('automake', type='build')
# depends_on('libtool', type='build')
# depends_on('foo')
def configure_args(self):
# FIXME: Add arguments other than --prefix
# FIXME: If not needed delete the function
args = []
return args

View File

@@ -1,45 +0,0 @@
##############################################################################
# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class Mpileaks(AutotoolsPackage):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
depends_on('mpi')
depends_on('adept-utils')
depends_on('callpath')
def configure_args(self):
# FIXME: Add arguments other than --prefix
# FIXME: If not needed delete the function
args = []
return args

View File

@@ -1,43 +0,0 @@
##############################################################################
# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class Mpileaks(AutotoolsPackage):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
depends_on('mpi')
depends_on('adept-utils')
depends_on('callpath')
def configure_args(self):
args = ['--with-adept-utils=%s' % self.spec['adept-utils'].prefix,
'--with-callpath=%s' % self.spec['callpath'].prefix]
return args

View File

@@ -1,50 +0,0 @@
##############################################################################
# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class Mpileaks(AutotoolsPackage):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
variant('stackstart', default=0, description='Specify the number of stack frames to truncate.')
depends_on('mpi')
depends_on('adept-utils')
depends_on('callpath')
def configure_args(self):
args = ['--with-adept-utils=%s' % self.spec['adept-utils'].prefix,
'--with-callpath=%s' % self.spec['callpath'].prefix]
stackstart = int(self.spec.variants['stackstart'].value)
if stackstart:
args.extend(['--with-stack-start-c=%s' % stackstart,
'--with-stack-start-fortran=%s' % stackstart])
return args

Binary file not shown.


View File

@@ -1,48 +0,0 @@
.. _spack-101:
=============================
Tutorial: Spack 101
=============================
This is a 3-hour introduction to Spack with lectures and live demos. It
was presented as a tutorial at `Supercomputing 2016
<http://sc16.supercomputing.org>`_. You can use these materials to teach
a course on Spack at your own site, or you can just skip ahead and read
the live demo scripts to see how Spack is used in practice.
.. _sc16-slides:
.. rubric:: Slides
.. figure:: tutorial/sc16-tutorial-slide-preview.png
:target: http://llnl.github.io/spack/files/Spack-SC16-Tutorial.pdf
:height: 72px
:align: left
:alt: Slide Preview
`Download Slides <http://llnl.github.io/spack/files/Spack-SC16-Tutorial.pdf>`_.
**Full citation:** Todd Gamblin, Massimiliano Culpo, Gregory Becker, Matt
Legendre, Greg Lee, Elizabeth Fischer, and Benedikt Hegner.
`Managing HPC Software Complexity with Spack
<http://sc16.supercomputing.org/presentation/?id=tut166&sess=sess209>`_.
Tutorial presented at Supercomputing 2016. November 13, 2016, Salt Lake
City, UT, USA.
.. _sc16-live-demos:
.. rubric:: Live Demos
These scripts will take you step-by-step through basic Spack tasks. They
correspond to sections in the slides above.
1. :ref:`basics-tutorial`
2. :ref:`packaging-tutorial`
3. :ref:`modules-tutorial`
Full contents:
.. toctree::
tutorial_sc16_spack_basics
tutorial_sc16_packaging
tutorial_sc16_modules

View File

@@ -1,982 +0,0 @@
.. _modules-tutorial:
=============================
Module Configuration Tutorial
=============================
This tutorial will guide you through the customization of both
content and naming of module files generated by Spack.
Starting from the default Spack settings you will add an increasing
number of directives to the ``modules.yaml`` configuration file to
satisfy a number of constraints that mimic those that you may encounter
in a typical production environment at HPC sites.
Although the focus will be for the most part on customizing
non-hierarchical TCL module files, everything you'll see also
applies to the other kinds of module files generated by Spack.
The generation of Lua hierarchical module files is addressed at
the end of the tutorial, where you'll see that a ``modules.yaml``
written for non-hierarchical TCL modules needs only minor
modifications to give you a hierarchical layout almost for free.
Let's start!
.. _module_file_tutorial_prerequisites:
-------------
Prerequisites
-------------
Before proceeding further, ensure that:
- you have LMod or Environment Modules available
- you have :ref:`shell support <shell-support>` activated in Spack
If you need to install LMod or Environment Modules you can refer
to the documentation :ref:`here <InstallEnvironmentModules>`.
^^^^^^^^^^^^^^^^^^
Add a new compiler
^^^^^^^^^^^^^^^^^^
Spack automatically scans the environment for available compilers
on first use. On an Ubuntu 14.04 machine a fresh clone will show
something like this:
.. code-block:: console
$ uname -a
Linux nuvolari 4.4.0-45-generic #66~14.04.1-Ubuntu SMP Wed Oct 19 15:05:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ spack compilers
==> Available compilers
-- gcc ----------------------------------------------------------
gcc@4.8
To build a limited set of packages with features that will help
showcase the capabilities of module customization, the first thing
we need to do is build a new compiler:
.. code-block:: console
$ spack install gcc@6.2.0
# ...
# Wait a long time
# ...
Then we can use shell support for modules to add it to the list of known compilers:
.. code-block:: console
# The name of the generated module may vary
$ module load gcc-6.2.0-gcc-4.8-twd5nqg
$ spack compiler add
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
gcc@6.2.0
$ spack compilers
==> Available compilers
-- gcc ----------------------------------------------------------
gcc@6.2.0 gcc@4.8
Note that the 7-digit hash at the end of the generated module name may
vary depending on architecture or package version.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Build software that will be used in the tutorial
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Next you should install a few packages that will be used in the tutorial:
.. code-block:: console
$ spack install netlib-scalapack ^openmpi ^openblas
# ...
The packages you need to install are (commands for the rest are sketched after this list):
- ``netlib-scalapack ^openmpi ^openblas``
- ``netlib-scalapack ^mpich ^openblas``
- ``netlib-scalapack ^openmpi ^netlib-lapack``
- ``netlib-scalapack ^mpich ^netlib-lapack``
- ``py-scipy ^openblas``
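The remaining specs can be installed in the same way; a sketch of the
commands (each may take a while to build):
.. code-block:: console

   $ spack install netlib-scalapack ^mpich ^openblas
   $ spack install netlib-scalapack ^openmpi ^netlib-lapack
   $ spack install netlib-scalapack ^mpich ^netlib-lapack
   $ spack install py-scipy ^openblas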
In the end your environment should look something like:
.. code-block:: console
$ module avail
------------------------------------------------------------------------ ~/spack/share/spack/modules/linux-Ubuntu14-x86_64 ------------------------------------------------------------------------
binutils-2.27-gcc-4.8-dz3xevw libpciaccess-0.13.4-gcc-6.2.0-eo2siet lzo-2.09-gcc-6.2.0-jcngz72 netlib-scalapack-2.0.2-gcc-6.2.0-wnimqhw python-2.7.12-gcc-6.2.0-qu7rc5p
bzip2-1.0.6-gcc-6.2.0-csoc2mq libsigsegv-2.10-gcc-4.8-avb6azw m4-1.4.17-gcc-4.8-iggewke netlib-scalapack-2.0.2-gcc-6.2.0-wojunhq sqlite-3.8.5-gcc-6.2.0-td3zfe7
cmake-3.5.2-gcc-6.2.0-6poypqg libsigsegv-2.10-gcc-6.2.0-g3qpmbi m4-1.4.17-gcc-6.2.0-lhgqa6s nettle-3.2-gcc-6.2.0-djdthlh tcl-8.6.5-gcc-4.8-atddxu7
curl-7.50.3-gcc-6.2.0-2ffacqm libtool-2.4.6-gcc-6.2.0-kiepac6 mpc-1.0.3-gcc-4.8-lylv7lk openblas-0.2.19-gcc-6.2.0-js33umc util-macros-1.19.0-gcc-6.2.0-uoukuqk
expat-2.2.0-gcc-6.2.0-bxqnjar libxml2-2.9.4-gcc-6.2.0-3k4ykbe mpfr-3.1.4-gcc-4.8-bldfx3w openmpi-2.0.1-gcc-6.2.0-s3qbtby xz-5.2.2-gcc-6.2.0-t5lk6in
gcc-6.2.0-gcc-4.8-twd5nqg lmod-6.4.5-gcc-4.8-7v7bh7b mpich-3.2-gcc-6.2.0-5n5xoep openssl-1.0.2j-gcc-6.2.0-hibnfda zlib-1.2.8-gcc-4.8-bds4ies
gmp-6.1.1-gcc-4.8-uq52e2n lua-5.3.2-gcc-4.8-xozf2hx ncurses-6.0-gcc-4.8-u62fit4 pkg-config-0.29.1-gcc-6.2.0-rslsgcs zlib-1.2.8-gcc-6.2.0-asydrba
gmp-6.1.1-gcc-6.2.0-3cfh3hi lua-luafilesystem-1_6_3-gcc-4.8-sbzejlz ncurses-6.0-gcc-6.2.0-7tb426s py-nose-1.3.7-gcc-6.2.0-4gl5c42
hwloc-1.11.4-gcc-6.2.0-3ostwel lua-luaposix-33.4.0-gcc-4.8-xf7y2p5 netlib-lapack-3.6.1-gcc-6.2.0-mirer2l py-numpy-1.11.1-gcc-6.2.0-i3rpk4e
isl-0.14-gcc-4.8-cq73t5m lz4-131-gcc-6.2.0-cagoem4 netlib-scalapack-2.0.2-gcc-6.2.0-6bqlxqy py-scipy-0.18.1-gcc-6.2.0-e6uljfi
libarchive-3.2.1-gcc-6.2.0-2b54aos lzma-4.32.7-gcc-6.2.0-sfmeynw netlib-scalapack-2.0.2-gcc-6.2.0-hpqb3dp py-setuptools-25.2.0-gcc-6.2.0-hkqauaa
------------------------------------------------
Filter unwanted modifications to the environment
------------------------------------------------
The non-hierarchical TCL module files that have been generated so far
follow the default rules for module generation, which are given
:ref:`here <modules-yaml>` in the reference part of the manual. Taking a
look at the ``gcc`` module you'll see something like:
.. code-block:: console
$ module show gcc-6.2.0-gcc-4.8-twd5nqg
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~/spack/share/spack/modules/linux-Ubuntu14-x86_64/gcc-6.2.0-gcc-4.8-twd5nqg:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
whatis("gcc @6.2.0 ")
prepend_path("PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/bin")
prepend_path("CMAKE_PREFIX_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/")
prepend_path("MANPATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/share/man")
prepend_path("PKG_CONFIG_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64/pkgconfig")
prepend_path("LIBRARY_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64")
prepend_path("LD_LIBRARY_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64")
prepend_path("CPATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/include")
help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C,
Fortran, and Java.
]])
As expected, a few environment variables representing paths are modified
by the module according to the default prefix inspection rules.
Now consider the case where your site has decided that, e.g., ``CPATH`` and
``LIBRARY_PATH`` modifications should not be present in module files. To
abide by this rule you can create a configuration file ``~/.spack/modules.yaml``
with the following content:
.. code-block:: yaml
modules:
tcl:
all:
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
Next you should regenerate all the module files:
.. code-block:: console
$ spack module refresh --module-type tcl
==> You are about to regenerate tcl module files for:
-- linux-Ubuntu14-x86_64 / gcc@4.8 ------------------------------
dz3xevw binutils@2.27 uq52e2n gmp@6.1.1 avb6azw libsigsegv@2.10 xozf2hx lua@5.3.2 xf7y2p5 lua-luaposix@33.4.0 lylv7lk mpc@1.0.3 u62fit4 ncurses@6.0 bds4ies zlib@1.2.8
twd5nqg gcc@6.2.0 cq73t5m isl@0.14 7v7bh7b lmod@6.4.5 sbzejlz lua-luafilesystem@1_6_3 iggewke m4@1.4.17 bldfx3w mpfr@3.1.4 atddxu7 tcl@8.6.5
...
==> Do you want to proceed ? [y/n]
y
==> Regenerating tcl module files
If you take a look now at the module for ``gcc`` you'll see that the unwanted
paths have disappeared:
.. code-block:: console
$ module show gcc-6.2.0-gcc-4.8-twd5nqg
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~/spack/share/spack/modules/linux-Ubuntu14-x86_64/gcc-6.2.0-gcc-4.8-twd5nqg:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
whatis("gcc @6.2.0 ")
prepend_path("PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/bin")
prepend_path("CMAKE_PREFIX_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/")
prepend_path("MANPATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/share/man")
prepend_path("PKG_CONFIG_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64/pkgconfig")
prepend_path("LD_LIBRARY_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64")
help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C,
Fortran, and Java.
]])
----------------------------------------------
Prevent some module files from being generated
----------------------------------------------
Another common request at many sites is to avoid exposing software that
is only needed as an intermediate step when building a newer stack.
Let's try to prevent the generation of module files for anything that is
compiled with ``gcc@4.8`` (the OS-provided compiler).
To do this you should add a ``blacklist`` keyword to the configuration file:
.. code-block:: yaml
:emphasize-lines: 3,4
modules:
tcl:
blacklist:
- '%gcc@4.8'
all:
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
and regenerate the module files:
.. code-block:: console
$ spack module refresh --module-type tcl --delete-tree
==> You are about to regenerate tcl module files for:
-- linux-Ubuntu14-x86_64 / gcc@4.8 ------------------------------
dz3xevw binutils@2.27 uq52e2n gmp@6.1.1 avb6azw libsigsegv@2.10 xozf2hx lua@5.3.2 xf7y2p5 lua-luaposix@33.4.0 lylv7lk mpc@1.0.3 u62fit4 ncurses@6.0 bds4ies zlib@1.2.8
twd5nqg gcc@6.2.0 cq73t5m isl@0.14 7v7bh7b lmod@6.4.5 sbzejlz lua-luafilesystem@1_6_3 iggewke m4@1.4.17 bldfx3w mpfr@3.1.4 atddxu7 tcl@8.6.5
-- linux-Ubuntu14-x86_64 / gcc@6.2.0 ----------------------------
csoc2mq bzip2@1.0.6 2b54aos libarchive@3.2.1 sfmeynw lzma@4.32.7 wnimqhw netlib-scalapack@2.0.2 s3qbtby openmpi@2.0.1 hkqauaa py-setuptools@25.2.0
6poypqg cmake@3.5.2 eo2siet libpciaccess@0.13.4 jcngz72 lzo@2.09 6bqlxqy netlib-scalapack@2.0.2 hibnfda openssl@1.0.2j qu7rc5p python@2.7.12
2ffacqm curl@7.50.3 g3qpmbi libsigsegv@2.10 lhgqa6s m4@1.4.17 wojunhq netlib-scalapack@2.0.2 rslsgcs pkg-config@0.29.1 td3zfe7 sqlite@3.8.5
bxqnjar expat@2.2.0 kiepac6 libtool@2.4.6 5n5xoep mpich@3.2 hpqb3dp netlib-scalapack@2.0.2 4gl5c42 py-nose@1.3.7 uoukuqk util-macros@1.19.0
3cfh3hi gmp@6.1.1 3k4ykbe libxml2@2.9.4 7tb426s ncurses@6.0 djdthlh nettle@3.2 i3rpk4e py-numpy@1.11.1 t5lk6in xz@5.2.2
3ostwel hwloc@1.11.4 cagoem4 lz4@131 mirer2l netlib-lapack@3.6.1 js33umc openblas@0.2.19 e6uljfi py-scipy@0.18.1 asydrba zlib@1.2.8
==> Do you want to proceed ? [y/n]
y
$ module avail
------------------------------------------------------------------------ ~/spack/share/spack/modules/linux-Ubuntu14-x86_64 ------------------------------------------------------------------------
bzip2-1.0.6-gcc-6.2.0-csoc2mq libsigsegv-2.10-gcc-6.2.0-g3qpmbi ncurses-6.0-gcc-6.2.0-7tb426s openmpi-2.0.1-gcc-6.2.0-s3qbtby sqlite-3.8.5-gcc-6.2.0-td3zfe7
cmake-3.5.2-gcc-6.2.0-6poypqg libtool-2.4.6-gcc-6.2.0-kiepac6 netlib-lapack-3.6.1-gcc-6.2.0-mirer2l openssl-1.0.2j-gcc-6.2.0-hibnfda util-macros-1.19.0-gcc-6.2.0-uoukuqk
curl-7.50.3-gcc-6.2.0-2ffacqm libxml2-2.9.4-gcc-6.2.0-3k4ykbe netlib-scalapack-2.0.2-gcc-6.2.0-6bqlxqy pkg-config-0.29.1-gcc-6.2.0-rslsgcs xz-5.2.2-gcc-6.2.0-t5lk6in
expat-2.2.0-gcc-6.2.0-bxqnjar lz4-131-gcc-6.2.0-cagoem4 netlib-scalapack-2.0.2-gcc-6.2.0-hpqb3dp py-nose-1.3.7-gcc-6.2.0-4gl5c42 zlib-1.2.8-gcc-6.2.0-asydrba
gmp-6.1.1-gcc-6.2.0-3cfh3hi lzma-4.32.7-gcc-6.2.0-sfmeynw netlib-scalapack-2.0.2-gcc-6.2.0-wnimqhw py-numpy-1.11.1-gcc-6.2.0-i3rpk4e
hwloc-1.11.4-gcc-6.2.0-3ostwel lzo-2.09-gcc-6.2.0-jcngz72 netlib-scalapack-2.0.2-gcc-6.2.0-wojunhq py-scipy-0.18.1-gcc-6.2.0-e6uljfi
libarchive-3.2.1-gcc-6.2.0-2b54aos m4-1.4.17-gcc-6.2.0-lhgqa6s nettle-3.2-gcc-6.2.0-djdthlh py-setuptools-25.2.0-gcc-6.2.0-hkqauaa
libpciaccess-0.13.4-gcc-6.2.0-eo2siet mpich-3.2-gcc-6.2.0-5n5xoep openblas-0.2.19-gcc-6.2.0-js33umc python-2.7.12-gcc-6.2.0-qu7rc5p
This time it is convenient to pass the ``--delete-tree`` option to the command
that regenerates the module files, instructing it to delete the existing tree
and generate a new one instead of overwriting the files in the existing directory.
If you pay careful attention you'll see, though, that we went too far in
blacklisting modules: the module for ``gcc@6.2.0`` disappeared because it was
bootstrapped with ``gcc@4.8``. To specify exceptions to the blacklist rules you
can use ``whitelist``:
.. code-block:: yaml
:emphasize-lines: 3,4
modules:
tcl:
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
all:
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
``whitelist`` rules always have precedence over ``blacklist`` rules. If you regenerate the modules again:
.. code-block:: console
$ spack module refresh --module-type tcl -y
you'll see that now the module for ``gcc@6.2.0`` has reappeared:
.. code-block:: console
$ module avail gcc-6.2.0-gcc-4.8-twd5nqg
------------------------------------------------------------------------ ~/spack/share/spack/modules/linux-Ubuntu14-x86_64 ------------------------------------------------------------------------
gcc-6.2.0-gcc-4.8-twd5nqg
-------------------------
Change module file naming
-------------------------
The next step in making module files more user-friendly is to
improve their naming scheme.
To reduce the length of the hash or remove it altogether you can
use the ``hash_length`` keyword in the configuration file:
.. TODO: give reasons to remove hashes if they are not evident enough?
.. code-block:: yaml
:emphasize-lines: 3
modules:
tcl:
hash_length: 0
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
all:
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
If you try to regenerate the module files now you will get an error:
.. code-block:: console
$ spack module refresh --module-type tcl --delete-tree -y
==> Error: Name clashes detected in module files:
file : ~/spack/share/spack/modules/linux-Ubuntu14-x86_64/netlib-scalapack-2.0.2-gcc-6.2.0
spec : netlib-scalapack@2.0.2%gcc@6.2.0~fpic+shared arch=linux-Ubuntu14-x86_64
spec : netlib-scalapack@2.0.2%gcc@6.2.0~fpic+shared arch=linux-Ubuntu14-x86_64
spec : netlib-scalapack@2.0.2%gcc@6.2.0~fpic+shared arch=linux-Ubuntu14-x86_64
spec : netlib-scalapack@2.0.2%gcc@6.2.0~fpic+shared arch=linux-Ubuntu14-x86_64
==> Error: Operation aborted
.. note::
We try to check for errors upfront!
In Spack we check for errors upfront whenever possible, so don't worry about
your module files: since a name clash was detected, nothing has been changed
on disk.
The problem here is that without the hashes the four different flavors of
``netlib-scalapack`` map to the same module file name. We can add suffixes
to differentiate them:
.. code-block:: yaml
:emphasize-lines: 9-11,14-17
modules:
tcl:
hash_length: 0
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
all:
suffixes:
'^openblas': openblas
'^netlib-lapack': netlib
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
netlib-scalapack:
suffixes:
'^openmpi': openmpi
'^mpich': mpich
As you can see, it is possible to specify rules that apply only to a
restricted set of packages using :ref:`anonymous specs <anonymous_specs>`.
Regenerating the module files now, we obtain:
.. code-block:: console
$ spack module refresh --module-type tcl --delete-tree -y
==> Regenerating tcl module files
$ module avail
------------------------------------------------------------------------ ~/spack/share/spack/modules/linux-Ubuntu14-x86_64 ------------------------------------------------------------------------
bzip2-1.0.6-gcc-6.2.0 libpciaccess-0.13.4-gcc-6.2.0 mpich-3.2-gcc-6.2.0 openblas-0.2.19-gcc-6.2.0 python-2.7.12-gcc-6.2.0
cmake-3.5.2-gcc-6.2.0 libsigsegv-2.10-gcc-6.2.0 ncurses-6.0-gcc-6.2.0 openmpi-2.0.1-gcc-6.2.0 sqlite-3.8.5-gcc-6.2.0
curl-7.50.3-gcc-6.2.0 libtool-2.4.6-gcc-6.2.0 netlib-lapack-3.6.1-gcc-6.2.0 openssl-1.0.2j-gcc-6.2.0 util-macros-1.19.0-gcc-6.2.0
expat-2.2.0-gcc-6.2.0 libxml2-2.9.4-gcc-6.2.0 netlib-scalapack-2.0.2-gcc-6.2.0-netlib-mpich pkg-config-0.29.1-gcc-6.2.0 xz-5.2.2-gcc-6.2.0
gcc-6.2.0-gcc-4.8 lz4-131-gcc-6.2.0 netlib-scalapack-2.0.2-gcc-6.2.0-netlib-openmpi py-nose-1.3.7-gcc-6.2.0 zlib-1.2.8-gcc-6.2.0
gmp-6.1.1-gcc-6.2.0 lzma-4.32.7-gcc-6.2.0 netlib-scalapack-2.0.2-gcc-6.2.0-openblas-mpich py-numpy-1.11.1-gcc-6.2.0-openblas
hwloc-1.11.4-gcc-6.2.0 lzo-2.09-gcc-6.2.0 netlib-scalapack-2.0.2-gcc-6.2.0-openblas-openmpi py-scipy-0.18.1-gcc-6.2.0-openblas
libarchive-3.2.1-gcc-6.2.0 m4-1.4.17-gcc-6.2.0 nettle-3.2-gcc-6.2.0 py-setuptools-25.2.0-gcc-6.2.0
Finally we can set a ``naming_scheme`` and a ``conflict`` directive to prevent
users from loading different flavors of the same library/application at the
same time:
.. code-block:: yaml
:emphasize-lines: 4,10,11
modules:
tcl:
hash_length: 0
naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
all:
conflict:
- '${PACKAGE}'
suffixes:
'^openblas': openblas
'^netlib-lapack': netlib
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
netlib-scalapack:
suffixes:
'^openmpi': openmpi
'^mpich': mpich
The final result should look like:
.. code-block:: console
$ module avail
------------------------------------------------------------------------ ~/spack/share/spack/modules/linux-Ubuntu14-x86_64 ------------------------------------------------------------------------
bzip2/1.0.6-gcc-6.2.0 libpciaccess/0.13.4-gcc-6.2.0 mpich/3.2-gcc-6.2.0 openblas/0.2.19-gcc-6.2.0 python/2.7.12-gcc-6.2.0
cmake/3.5.2-gcc-6.2.0 libsigsegv/2.10-gcc-6.2.0 ncurses/6.0-gcc-6.2.0 openmpi/2.0.1-gcc-6.2.0 sqlite/3.8.5-gcc-6.2.0
curl/7.50.3-gcc-6.2.0 libtool/2.4.6-gcc-6.2.0 netlib-lapack/3.6.1-gcc-6.2.0 openssl/1.0.2j-gcc-6.2.0 util-macros/1.19.0-gcc-6.2.0
expat/2.2.0-gcc-6.2.0 libxml2/2.9.4-gcc-6.2.0 netlib-scalapack/2.0.2-gcc-6.2.0-netlib-mpich pkg-config/0.29.1-gcc-6.2.0 xz/5.2.2-gcc-6.2.0
gcc/6.2.0-gcc-4.8 lz4/131-gcc-6.2.0 netlib-scalapack/2.0.2-gcc-6.2.0-netlib-openmpi py-nose/1.3.7-gcc-6.2.0 zlib/1.2.8-gcc-6.2.0
gmp/6.1.1-gcc-6.2.0 lzma/4.32.7-gcc-6.2.0 netlib-scalapack/2.0.2-gcc-6.2.0-openblas-mpich py-numpy/1.11.1-gcc-6.2.0-openblas
hwloc/1.11.4-gcc-6.2.0 lzo/2.09-gcc-6.2.0 netlib-scalapack/2.0.2-gcc-6.2.0-openblas-openmpi (D) py-scipy/0.18.1-gcc-6.2.0-openblas
libarchive/3.2.1-gcc-6.2.0 m4/1.4.17-gcc-6.2.0 nettle/3.2-gcc-6.2.0 py-setuptools/25.2.0-gcc-6.2.0
.. note::
TCL-specific directives
The directives ``naming_scheme`` and ``conflict`` are TCL-specific and do not
apply to the ``dotkit`` or ``lmod`` sections in the configuration file.
------------------------------------
Add custom environment modifications
------------------------------------
At many sites it is customary to set an environment variable in a
package's module file that points to the folder in which the package
is installed. You can achieve this with Spack by adding an
``environment`` directive to the configuration file:
.. code-block:: yaml
:emphasize-lines: 17-19
modules:
tcl:
hash_length: 0
naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
all:
conflict:
- '${PACKAGE}'
suffixes:
'^openblas': openblas
'^netlib-lapack': netlib
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
environment:
set:
'${PACKAGE}_ROOT': '${PREFIX}'
netlib-scalapack:
suffixes:
'^openmpi': openmpi
'^mpich': mpich
There are many variable tokens available to use in the ``environment``
and ``naming_scheme`` directives, such as ``${PACKAGE}``,
``${VERSION}``, etc. (see the :meth:`~spack.spec.Spec.format` API
documentation for the complete list).
Regenerating the module files should result in something like:
.. code-block:: console
:emphasize-lines: 14
$ spack module refresh -y --module-type tcl
==> Regenerating tcl module files
$ module show gcc
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~/spack/share/spack/modules/linux-Ubuntu14-x86_64/gcc/6.2.0-gcc-4.8:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
whatis("gcc @6.2.0 ")
prepend_path("PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/bin")
prepend_path("CMAKE_PREFIX_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/")
prepend_path("MANPATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/share/man")
prepend_path("PKG_CONFIG_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64/pkgconfig")
prepend_path("LD_LIBRARY_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64")
setenv("GCC_ROOT","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u")
conflict("gcc")
help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C,
Fortran, and Java.
]])
As you can see, the ``gcc`` module now has the environment variable
``GCC_ROOT`` set.
Sometimes it's also useful to apply environment modifications selectively and
target only certain packages. You can, for instance, set the common variables
``CC``, ``CXX``, etc. in the ``gcc`` module file and apply other custom
modifications to the ``openmpi`` modules as follows:
.. code-block:: yaml
:emphasize-lines: 20-32
modules:
tcl:
hash_length: 0
naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
all:
conflict:
- '${PACKAGE}'
suffixes:
'^openblas': openblas
'^netlib-lapack': netlib
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
environment:
set:
'${PACKAGE}_ROOT': '${PREFIX}'
gcc:
environment:
set:
CC: gcc
CXX: g++
FC: gfortran
F90: gfortran
F77: gfortran
openmpi:
environment:
set:
SLURM_MPI_TYPE: pmi2
OMPI_MCA_btl_openib_warn_default_gid_prefix: '0'
netlib-scalapack:
suffixes:
'^openmpi': openmpi
'^mpich': mpich
This time we will be more selective and regenerate only the ``gcc`` and
``openmpi`` module files:
.. code-block:: console
$ spack module refresh -y --module-type tcl gcc
==> Regenerating tcl module files
$ spack module refresh -y --module-type tcl openmpi
==> Regenerating tcl module files
$ module show gcc
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~/spack/share/spack/modules/linux-Ubuntu14-x86_64/gcc/6.2.0-gcc-4.8:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
whatis("gcc @6.2.0 ")
prepend_path("PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/bin")
prepend_path("CMAKE_PREFIX_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/")
prepend_path("MANPATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/share/man")
prepend_path("PKG_CONFIG_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64/pkgconfig")
prepend_path("LD_LIBRARY_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u/lib64")
setenv("GCC_ROOT","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-4.8/gcc-6.2.0-twd5nqg33hrrssqclcfi5k42eccwxz5u")
setenv("CC","gcc")
setenv("CXX","g++")
setenv("F90","gfortran")
setenv("FC","gfortran")
setenv("F77","gfortran")
conflict("gcc")
help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C,
Fortran, and Java.
]])
$ module show openmpi
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~/spack/share/spack/modules/linux-Ubuntu14-x86_64/openmpi/2.0.1-gcc-6.2.0:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
whatis("openmpi @2.0.1 ")
prepend_path("PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/openmpi-2.0.1-s3qbtbyh3y5y4gkchmhcuak7th44l53w/bin")
prepend_path("CMAKE_PREFIX_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/openmpi-2.0.1-s3qbtbyh3y5y4gkchmhcuak7th44l53w/")
prepend_path("LD_LIBRARY_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/openmpi-2.0.1-s3qbtbyh3y5y4gkchmhcuak7th44l53w/lib")
prepend_path("PKG_CONFIG_PATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/openmpi-2.0.1-s3qbtbyh3y5y4gkchmhcuak7th44l53w/lib/pkgconfig")
prepend_path("MANPATH","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/openmpi-2.0.1-s3qbtbyh3y5y4gkchmhcuak7th44l53w/share/man")
setenv("SLURM_MPI_TYPE","pmi2")
setenv("OMPI_MCA_BTL_OPENIB_WARN_DEFAULT_GID_PREFIX","0")
setenv("OPENMPI_ROOT","~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/openmpi-2.0.1-s3qbtbyh3y5y4gkchmhcuak7th44l53w")
conflict("openmpi")
help([[The Open MPI Project is an open source Message Passing Interface
implementation that is developed and maintained by a consortium of
academic, research, and industry partners. Open MPI is therefore able to
combine the expertise, technologies, and resources from all across the
High Performance Computing community in order to build the best MPI
library available. Open MPI offers advantages for system and software
vendors, application developers and computer science researchers.
]])
---------------------
Autoload dependencies
---------------------
Spack can also generate module files that contain code to load their
dependencies automatically. You can, for instance, generate module files
for Python packages that load their dependencies by adding the ``autoload``
directive and assigning it the value ``direct``:
.. code-block:: yaml
:emphasize-lines: 37,38
modules:
tcl:
hash_length: 0
naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
all:
conflict:
- '${PACKAGE}'
suffixes:
'^openblas': openblas
'^netlib-lapack': netlib
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
environment:
set:
'${PACKAGE}_ROOT': '${PREFIX}'
gcc:
environment:
set:
CC: gcc
CXX: g++
FC: gfortran
F90: gfortran
F77: gfortran
openmpi:
environment:
set:
SLURM_MPI_TYPE: pmi2
OMPI_MCA_btl_openib_warn_default_gid_prefix: '0'
netlib-scalapack:
suffixes:
'^openmpi': openmpi
'^mpich': mpich
^python:
autoload: 'direct'
and regenerating the module files for every package that depends on ``python``:
.. code-block:: console
$ spack module refresh -y --module-type tcl ^python
==> Regenerating tcl module files
Now the ``py-scipy`` module will be:
.. code-block:: tcl
#%Module1.0
## Module file created by spack (https://github.com/LLNL/spack) on 2016-11-02 20:53:21.283547
##
## py-scipy@0.18.1%gcc@6.2.0 arch=linux-Ubuntu14-x86_64-e6uljfi
##
module-whatis "py-scipy @0.18.1"
proc ModulesHelp { } {
puts stderr "SciPy (pronounced "Sigh Pie") is a Scientific Library for Python. It"
puts stderr "provides many user-friendly and efficient numerical routines such as"
puts stderr "routines for numerical integration and optimization."
}
if ![ is-loaded python/2.7.12-gcc-6.2.0 ] {
puts stderr "Autoloading python/2.7.12-gcc-6.2.0"
module load python/2.7.12-gcc-6.2.0
}
if ![ is-loaded openblas/0.2.19-gcc-6.2.0 ] {
puts stderr "Autoloading openblas/0.2.19-gcc-6.2.0"
module load openblas/0.2.19-gcc-6.2.0
}
if ![ is-loaded py-numpy/1.11.1-gcc-6.2.0-openblas ] {
puts stderr "Autoloading py-numpy/1.11.1-gcc-6.2.0-openblas"
module load py-numpy/1.11.1-gcc-6.2.0-openblas
}
prepend-path CMAKE_PREFIX_PATH "~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/py-scipy-0.18.1-e6uljfiffgym4xvj6wveevqxfqnfb3gh/"
prepend-path LD_LIBRARY_PATH "~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/py-scipy-0.18.1-e6uljfiffgym4xvj6wveevqxfqnfb3gh/lib"
prepend-path PYTHONPATH "~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/py-scipy-0.18.1-e6uljfiffgym4xvj6wveevqxfqnfb3gh/lib/python2.7/site-packages"
setenv PY_SCIPY_ROOT "~/spack/opt/spack/linux-Ubuntu14-x86_64/gcc-6.2.0/py-scipy-0.18.1-e6uljfiffgym4xvj6wveevqxfqnfb3gh"
conflict py-scipy
and will contain code to autoload all the dependencies:
.. code-block:: console
$ module load py-scipy
Autoloading python/2.7.12-gcc-6.2.0
Autoloading openblas/0.2.19-gcc-6.2.0
Autoloading py-numpy/1.11.1-gcc-6.2.0-openblas
-----------------------------
Lua hierarchical module files
-----------------------------
In the final part of this tutorial you will modify ``modules.yaml`` to generate
Lua hierarchical module files. You will see that most of the directives used before
are also valid in the ``lmod`` context.
^^^^^^^^^^^^^^^^^
Core/Compiler/MPI
^^^^^^^^^^^^^^^^^
.. warning::
Only LMod supports Lua hierarchical module files
For this part of the tutorial you need to be using LMod to
manage your environment.
The most common hierarchy is the so-called ``Core/Compiler/MPI``. To get an
idea of how a hierarchy is organized you may refer to the
`Lmod guide <https://www.tacc.utexas.edu/research-development/tacc-projects/lmod/user-guide/module-hierarchy>`_.
Since ``lmod`` is not enabled by default, you need to add it to the list of
enabled module file generators. The other things you need to do are:
- change the ``tcl`` tag to ``lmod``
- remove ``tcl`` specific directives (``naming_scheme`` and ``conflict``)
- set which compilers are considered ``core``
- remove the ``mpi`` related suffixes (as they will be substituted by hierarchies)
After modifications the configuration file will be:
.. code-block:: yaml
:emphasize-lines: 2-6
modules:
enable::
- lmod
lmod:
core_compilers:
- 'gcc@4.8'
hash_length: 0
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
all:
suffixes:
'^openblas': openblas
'^netlib-lapack': netlib
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
environment:
set:
'${PACKAGE}_ROOT': '${PREFIX}'
gcc:
environment:
set:
CC: gcc
CXX: g++
FC: gfortran
F90: gfortran
F77: gfortran
openmpi:
environment:
set:
SLURM_MPI_TYPE: pmi2
OMPI_MCA_btl_openib_warn_default_gid_prefix: '0'
.. note::
The double colon
The double colon after ``enable`` is intentional: it overrides the
default list of enabled generators so that only ``lmod`` will be
active (see :ref:`the reference manual <config-overrides>` for a more
detailed explanation of config scopes).
The ``core_compilers`` directive accepts a list of compilers: everything built
with these compilers will create a module in the ``Core`` part of the hierarchy.
It is common practice to put the OS-provided compilers in the list and to build
only common utilities and other compilers in ``Core``.
If you regenerate the module files
.. code-block:: console
$ spack module refresh --module-type lmod --delete-tree -y
and update ``MODULEPATH`` to point to the ``Core`` folder and then
list the available modules, you'll see:
.. code-block:: console
$ module unuse ~/spack/share/spack/modules/linux-Ubuntu14-x86_64
$ module use ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/Core
$ module avail
----------------------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/Core -----------------------------------------------------------------------
gcc/6.2.0
The only module visible now is ``gcc``. Loading it makes visible the
``Compiler`` part of the software stack, i.e. everything built with ``gcc/6.2.0``:
.. code-block:: console
$ module load gcc
$ module avail
-------------------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/gcc/6.2.0 ---------------------------------------------------------------------
binutils/2.27 curl/7.50.3 hwloc/1.11.4 libtool/2.4.6 lzo/2.09 netlib-lapack/3.6.1 openssl/1.0.2j py-scipy/0.18.1-openblas util-macros/1.19.0
bison/3.0.4 expat/2.2.0 libarchive/3.2.1 libxml2/2.9.4 m4/1.4.17 nettle/3.2 pkg-config/0.29.1 py-setuptools/25.2.0 xz/5.2.2
bzip2/1.0.6 flex/2.6.0 libpciaccess/0.13.4 lz4/131 mpich/3.2 openblas/0.2.19 py-nose/1.3.7 python/2.7.12 zlib/1.2.8
cmake/3.6.1 gmp/6.1.1 libsigsegv/2.10 lzma/4.32.7 ncurses/6.0 openmpi/2.0.1 py-numpy/1.11.1-openblas sqlite/3.8.5
----------------------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/Core -----------------------------------------------------------------------
gcc/6.2.0 (L)
The same holds true for the ``MPI`` part of the stack, which you can enable by
loading either ``mpich`` or ``openmpi``. The nice features of LMod become
evident once you try switching among different stacks:
.. code-block:: console
$ module load mpich
$ module avail
----------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/mpich/3.2-5n5xoep/gcc/6.2.0 ------------------------------------------------------------
netlib-scalapack/2.0.2-netlib netlib-scalapack/2.0.2-openblas (D)
-------------------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/gcc/6.2.0 ---------------------------------------------------------------------
binutils/2.27 curl/7.50.3 hwloc/1.11.4 libtool/2.4.6 lzo/2.09 netlib-lapack/3.6.1 openssl/1.0.2j py-scipy/0.18.1-openblas util-macros/1.19.0
bison/3.0.4 expat/2.2.0 libarchive/3.2.1 libxml2/2.9.4 m4/1.4.17 nettle/3.2 pkg-config/0.29.1 py-setuptools/25.2.0 xz/5.2.2
bzip2/1.0.6 flex/2.6.0 libpciaccess/0.13.4 lz4/131 mpich/3.2 (L) openblas/0.2.19 py-nose/1.3.7 python/2.7.12 zlib/1.2.8
cmake/3.6.1 gmp/6.1.1 libsigsegv/2.10 lzma/4.32.7 ncurses/6.0 openmpi/2.0.1 py-numpy/1.11.1-openblas sqlite/3.8.5
----------------------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/Core -----------------------------------------------------------------------
gcc/6.2.0 (L)
$ module load openblas netlib-scalapack/2.0.2-openblas
$ module list
Currently Loaded Modules:
1) gcc/6.2.0 2) mpich/3.2 3) openblas/0.2.19 4) netlib-scalapack/2.0.2-openblas
$ module load openmpi
Lmod is automatically replacing "mpich/3.2" with "openmpi/2.0.1"
Due to MODULEPATH changes the following have been reloaded:
1) netlib-scalapack/2.0.2-openblas
This layout is already a great improvement over the usual non-hierarchical layout,
but it still has an asymmetry: ``LAPACK`` providers are semantically the same as ``MPI``
providers, but they are still not part of the hierarchy. We'll see a possible solution
next.
.. Activate lmod and turn the previous modifications into lmod:
Add core compilers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Extend the hierarchy to other virtual providers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. warning::
This is an experimental feature
Having a hierarchy deeper than ``Core``/``Compiler``/``MPI`` is an experimental
feature, still not fully supported by ``module spider``;
see `here <https://github.com/TACC/Lmod/issues/114>`_. Furthermore, its use
with hierarchies more complex than ``Core``/``Compiler``/``MPI``/``LAPACK``
has not been thoroughly tested in production environments.
Spack allows you to generate Lua hierarchical module files in which users
can add an arbitrary list of virtual providers to the triplet
``Core``/``Compiler``/``MPI``. A configuration file like:
.. code-block:: yaml
:emphasize-lines: 7,8
modules:
enable::
- lmod
lmod:
core_compilers:
- 'gcc@4.8'
hierarchical_scheme:
- lapack
hash_length: 0
whitelist:
- gcc
blacklist:
- '%gcc@4.8'
- readline
all:
filter:
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
environment:
set:
'${PACKAGE}_ROOT': '${PREFIX}'
gcc:
environment:
set:
CC: gcc
CXX: g++
FC: gfortran
F90: gfortran
F77: gfortran
openmpi:
environment:
set:
SLURM_MPI_TYPE: pmi2
OMPI_MCA_btl_openib_warn_default_gid_prefix: '0'
will add ``lapack`` providers to the mix. After the usual regeneration of module files:
.. code-block:: console
$ module purge
$ spack module refresh --module-type lmod --delete-tree -y
==> Regenerating lmod module files
you will have something like:
.. code-block:: console
$ module load gcc
$ module load openblas
$ module load openmpi
$ module avail
--------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/openblas/0.2.19-js33umc/openmpi/2.0.1-s3qbtby/gcc/6.2.0 ----------------------------------------------
netlib-scalapack/2.0.2
-------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/openblas/0.2.19-js33umc/gcc/6.2.0 ---------------------------------------------------------
py-numpy/1.11.1 py-scipy/0.18.1
-------------------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/gcc/6.2.0 ---------------------------------------------------------------------
binutils/2.27 curl/7.50.3 hwloc/1.11.4 libtool/2.4.6 lzo/2.09 netlib-lapack/3.6.1 openssl/1.0.2j python/2.7.12 zlib/1.2.8
bison/3.0.4 expat/2.2.0 libarchive/3.2.1 libxml2/2.9.4 m4/1.4.17 nettle/3.2 pkg-config/0.29.1 sqlite/3.8.5
bzip2/1.0.6 flex/2.6.0 libpciaccess/0.13.4 lz4/131 mpich/3.2 openblas/0.2.19 (L) py-nose/1.3.7 util-macros/1.19.0
cmake/3.6.1 gmp/6.1.1 libsigsegv/2.10 lzma/4.32.7 ncurses/6.0 openmpi/2.0.1 (L) py-setuptools/25.2.0 xz/5.2.2
----------------------------------------------------------------------- ~/spack/share/spack/lmod/linux-Ubuntu14-x86_64/Core -----------------------------------------------------------------------
gcc/6.2.0 (L)
Now both the ``MPI`` and the ``LAPACK`` providers are handled by LMod as hierarchies:
.. code-block:: console
$ module load py-numpy netlib-scalapack
$ module load mpich
Lmod is automatically replacing "openmpi/2.0.1" with "mpich/3.2"
Due to MODULEPATH changes the following have been reloaded:
1) netlib-scalapack/2.0.2
$ module load netlib-lapack
Lmod is automatically replacing "openblas/0.2.19" with "netlib-lapack/3.6.1"
Inactive Modules:
1) py-numpy
Due to MODULEPATH changes the following have been reloaded:
1) netlib-scalapack/2.0.2
making the use of tags to differentiate them unnecessary.
Note that, because we compiled ``py-numpy`` only with ``openblas``, its module
is made inactive when we switch the ``LAPACK`` provider. The user environment
will now be consistent by design!

View File

@@ -1,462 +0,0 @@
.. _packaging-tutorial:
=========================
Package Creation Tutorial
=========================
This tutorial will walk you through the steps behind building a simple
package installation script. We'll focus on building an mpileaks package,
which is an MPI debugging tool. By creating a package file we're
essentially giving Spack a recipe for how to build a particular piece of
software. We're describing some of the software's dependencies, where to
find the package, what commands and options are used to build the package
from source, and more. Once we've specified a package's recipe, we can
ask Spack to build that package in many different ways.
This tutorial assumes you have a basic familiarity with some of the Spack
commands, and that you have a working version of Spack installed. If
not, we suggest looking at Spack's *Getting Started* guide. This
tutorial also assumes you have at least a beginner's-level familiarity
with Python.
Also note that this document is a tutorial. It can help you get started
with packaging, but is not intended to be complete. See Spack's
:ref:`packaging-guide` for more complete documentation on this topic.
---------------
Getting Started
---------------
A few things before we get started (a shell snippet sketching this setup follows the list):
- We'll refer to the Spack installation location via the environment
variable ``SPACK_ROOT``. You should point ``SPACK_ROOT`` at wherever
you have Spack installed.
- Add ``$SPACK_ROOT/bin`` to your ``PATH`` before you start.
- Make sure your ``EDITOR`` environment variable is set to some text
editor you like.
- We'll be writing Python code as part of this tutorial. You can find
successive versions of the Python code in
``$SPACK_ROOT/lib/spack/docs/tutorial/examples``.
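For example, in a bash-like shell this setup might look like the following
sketch (the clone location and choice of editor are assumptions; adjust them
to your system):
.. code-block:: console

   $ export SPACK_ROOT=~/spack         # wherever you have Spack installed (assumption)
   $ export PATH=$SPACK_ROOT/bin:$PATH
   $ export EDITOR=vim                 # or any text editor you like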
-------------------------
Creating the Package File
-------------------------
Spack comes with a handy command to create a new package: ``spack create``.
This command is given the location of a package's source code, downloads
the code, and sets up some basic packaging infrastructure for you. The
mpileaks source code can be found on GitHub, and here's what happens when
we run ``spack create`` on it:
.. code-block:: console
$ spack create -f https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz
==> This looks like a URL for mpileaks version 1.0
==> Creating template for package mpileaks
==> Downloading...
==> Fetching https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz
###################################################################################### 100.0%
And Spack should spawn a text editor with this file:
.. literalinclude:: tutorial/examples/0.package.py
:language: python
Spack has created this file in
``$SPACK_ROOT/var/spack/repos/builtin/packages/mpileaks/package.py``. Take a
moment to look over the file. There are a few placeholders that Spack has
created, which we'll fill in as part of this tutorial:
- We'll document some information about this package in the comments.
- We'll fill in the dependency list for this package.
- We'll fill in some of the configuration arguments needed to build this
package.
For the moment, exit your editor and let's see what happens when we try
to build this package:
.. code-block:: console
$ spack install mpileaks
==> Installing mpileaks
==> Using cached archive: /usr/workspace/wsa/legendre/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz
==> Staging archive: /usr/workspace/wsa/legendre/spack/var/spack/stage/mpileaks-1.0-hufwhwpq5benv3sslie6ryflk5s6nm35/mpileaks-1.0.tar.gz
==> Created stage in /usr/workspace/wsa/legendre/spack/var/spack/stage/mpileaks-1.0-hufwhwpq5benv3sslie6ryflk5s6nm35
==> Ran patch() for mpileaks
==> Building mpileaks [AutotoolsPackage]
==> Executing phase : 'autoreconf'
==> Executing phase : 'configure'
==> Error: ProcessError: Command exited with status 1:
'./configure' '--prefix=/usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/mpileaks-1.0-hufwhwpq5benv3sslie6ryflk5s6nm35'
/usr/workspace/wsa/legendre/spack/lib/spack/spack/build_systems/autotools.py:150, in configure:
145 def configure(self, spec, prefix):
146 """Runs configure with the arguments specified in `configure_args`
147 and an appropriately set prefix
148 """
149 options = ['--prefix={0}'.format(prefix)] + self.configure_args()
>> 150 inspect.getmodule(self).configure(*options)
See build log for details:
/tmp/legendre/spack-stage/spack-stage-8HVzqu/mpileaks-1.0/spack-build.out
This obviously didn't work; we need to fill in the package-specific
information. Specifically, Spack didn't try to build any of mpileaks'
dependencies, nor did it use the proper configure arguments. Let's start
fixing things.
---------------------
Package Documentation
---------------------
We can bring the ``package.py`` file back into our ``EDITOR`` with the
``spack edit`` command:
.. code-block:: console
$ spack edit mpileaks
Let's remove some of the ``FIXME`` comments, and add links to the mpileaks
homepage and document what mpileaks does. I'm also going to cut out the
Copyright clause at this point to keep this tutorial document shorter,
but you shouldn't do that normally. The results of these changes can be
found in ``$SPACK_ROOT/lib/spack/docs/tutorial/examples/1.package.py``
and are below. Make these changes to your ``package.py``:
.. literalinclude:: tutorial/examples/1.package.py
:lines: 25-
:language: python
We've filled in the comment that describes what this package does and
added a link to the web site. That won't help us build yet, but it will
allow Spack to provide some documentation on this package to other users:
.. code-block:: console
$ spack info mpileaks
AutotoolsPackage: mpileaks
Homepage: https://github.com/hpc/mpileaks
Safe versions:
1.0 https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz
Variants:
None
Installation Phases:
autoreconf configure build install
Build Dependencies:
None
Link Dependencies:
None
Run Dependencies:
None
Virtual Packages:
None
Description:
Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes
As we fill in more information about this package the ``spack info`` command
will become more informative. Now let's start making this package build.
------------
Dependencies
------------
The mpileaks package depends on three other packages: ``MPI``,
``adept-utils``, and ``callpath``. Let's add those via the
``depends_on`` command in our ``package.py`` (this version is in
``$SPACK_ROOT/lib/spack/docs/tutorial/examples/2.package.py``):
.. literalinclude:: tutorial/examples/2.package.py
:lines: 25-
:language: python
Now when we go to build mpileaks, Spack will fetch and build these
dependencies before building mpileaks. Note that the mpi dependency is a
different kind of beast from the adept-utils and callpath dependencies;
there is no mpi package available in Spack. Instead, mpi is a virtual
dependency. Spack may satisfy that dependency by installing packages
such as ``openmpi`` or ``mvapich``. See the :ref:`packaging-guide` for more
information on virtual dependencies.
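Because ``mpi`` is virtual, you can also choose a provider explicitly by
adding it to the spec with ``^``; for instance (a sketch, assuming you want
``openmpi`` to satisfy ``mpi``):
.. code-block:: console

   $ spack install mpileaks ^openmpi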
Now when we try to install this package a lot more happens:
.. code-block:: console
$ spack install mpileaks
==> Installing mpileaks
==> openmpi is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz
==> callpath is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/callpath-1.0.2-zm4pf3gasgxeibyu2y262suktvaazube
==> adept-utils is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/adept-utils-1.0.1-7p7ezxwtajdglj6cmojy2vybjct4j4jz
==> Using cached archive: /usr/workspace/wsa/legendre/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz
==> Already staged mpileaks-1.0-eum4hmnlt6ovalwjnciaygfb3beja4gk in /usr/workspace/wsa/legendre/spack/var/spack/stage/mpileaks-1.0-eum4hmnlt6ovalwjnciaygfb3beja4gk
==> Already patched mpileaks
==> Building mpileaks [AutotoolsPackage]
==> Executing phase : 'autoreconf'
==> Executing phase : 'configure'
==> Error: ProcessError: Command exited with status 1:
'./configure' '--prefix=/usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/mpileaks-1.0-eum4hmnlt6ovalwjnciaygfb3beja4gk'
/usr/workspace/wsa/legendre/spack/lib/spack/spack/build_systems/autotools.py:150, in configure:
145 def configure(self, spec, prefix):
146 """Runs configure with the arguments specified in `configure_args`
147 and an appropriately set prefix
148 """
149 options = ['--prefix={0}'.format(prefix)] + self.configure_args()
>> 150 inspect.getmodule(self).configure(*options)
See build log for details:
/tmp/legendre/spack-stage/spack-stage-7V5yyk/mpileaks-1.0/spack-build.out
Note that this command may take a while to run and produce more output if
you don't have an MPI already installed or configured in Spack.
Now Spack has identified and made sure all of our dependencies have been
built. It found the ``openmpi`` package that will satisfy our ``mpi``
dependency, and the ``callpath`` and ``adept-utils`` package to satisfy our
concrete dependencies.
------------------------
Debugging Package Builds
------------------------
Our ``mpileaks`` package is still not building. It may be obvious to
many of you that we're still missing the configure options. But let's
pretend we're not all intelligent developers and use this opportunity to
spend some time debugging. We have a few options that can tell us
what's going wrong:
As per the error message, Spack has given us a ``spack-build.out`` debug log:
.. code-block:: console
==> './configure' '--prefix=/usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/mpileaks-1.0-eum4hmnlt6ovalwjnciaygfb3beja4gk'
checking metadata... no
checking installation directory variables... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for gcc... /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc accepts -g... yes
checking for /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc... gcc3
checking whether /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc and cc understand -c and -o together... yes
checking whether we are using the GNU C++ compiler... yes
checking whether /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/g++ accepts -g... yes
checking dependency style of /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/g++... gcc3
checking for /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz/bin/mpicc... /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz/bin/mpicc
Checking whether /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz/bin/mpicc responds to '-showme:compile'... yes
configure: error: unable to locate ``adept-utils`` installation
This gives us the output from the build, and it's fairly obvious that
mpileaks isn't finding its ``adept-utils`` package. Spack has
automatically added the include and library directories of
``adept-utils`` to the compiler's search path, but some packages, like
mpileaks, are picky and still want the paths spelled out explicitly on
their command line. But let's continue to pretend we're not brilliant
developers, and explore some other debugging paths:
We can also enter the build area and try to manually run the build:
.. code-block:: console
$ spack env mpileaks bash
$ spack cd mpileaks
The ``spack env`` command spawned a new shell that contains the same
environment that Spack used to build the mpileaks package (you can
substitute bash for your favorite shell). The ``spack cd`` command
changed our working directory to the last attempted build of mpileaks.
From here we can manually re-run the build:
.. code-block:: console
$ ./configure
checking metadata... no
checking installation directory variables... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for gcc... /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc accepts -g... yes
checking for /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc... gcc3
checking whether /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/gcc and cc understand -c and -o together... yes
checking whether we are using the GNU C++ compiler... yes
checking whether /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/g++ accepts -g... yes
checking dependency style of /usr/workspace/wsa/legendre/spack/lib/spack/env/gcc/g++... gcc3
checking for /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz/bin/mpicc... /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz/bin/mpicc
Checking whether /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz/bin/mpicc responds to '-showme:compile'... yes
configure: error: unable to locate adept-utils installation
We're seeing the same error, but now we're in a shell where we can run
the command ourselves and debug as needed. We could, for example, run
``./configure --help`` to see what options we can use to specify
dependencies.
We can use the ``exit`` command to leave the shell spawned by ``spack
env``.
------------------------------
Specifying Configure Arguments
------------------------------
Let's add the configure arguments to the mpileaks' ``package.py``. This
version can be found in
``$SPACK_ROOT/lib/spack/docs/tutorial/examples/3.package.py``:
.. literalinclude:: tutorial/examples/3.package.py
:lines: 25-
:language: python
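In case you can't open that file, the key addition is a ``configure_args``
method that points configure at the dependency installations recorded in
``self.spec`` (a minimal sketch; the tutorial's actual file may differ
slightly):

.. code-block:: python

   def configure_args(self):
       spec = self.spec
       # Tell configure where Spack installed our concrete dependencies.
       return ['--with-adept-utils={0}'.format(spec['adept-utils'].prefix),
               '--with-callpath={0}'.format(spec['callpath'].prefix)]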
This is all we need for a working mpileaks! If we install now we'll see:
.. code-block:: console
$ spack install mpileaks
==> Installing mpileaks
==> openmpi is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz
==> callpath is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/callpath-1.0.2-zm4pf3gasgxeibyu2y262suktvaazube
==> adept-utils is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/adept-utils-1.0.1-7p7ezxwtajdglj6cmojy2vybjct4j4jz
==> Using cached archive: /usr/workspace/wsa/legendre/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz
==> Already staged mpileaks-1.0-eum4hmnlt6ovalwjnciaygfb3beja4gk in /usr/workspace/wsa/legendre/spack/var/spack/stage/mpileaks-1.0-eum4hmnlt6ovalwjnciaygfb3beja4gk
==> Already patched mpileaks
==> Building mpileaks [AutotoolsPackage]
==> Executing phase : 'autoreconf'
==> Executing phase : 'configure'
==> Executing phase : 'build'
==> Executing phase : 'install'
==> Successfully installed mpileaks
Fetch: 0.00s. Build: 14.08s. Total: 14.08s.
[+] /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/mpileaks-1.0-eum4hmnlt6ovalwjnciaygfb3beja4gk
We took a few shortcuts for this package that are worth highlighting.
Spack automatically detected that mpileaks was an Autotools-based package
when we ran ``spack create``. If this had been a CMake-based package we
would have been filling in a ``cmake_args`` function instead of
``configure_args``. If Spack hadn't been able to detect the build
system, we'd be filling in a generic ``install`` method that manually
calls the build commands, such as is found in the ``zlib`` package:
.. code-block:: python
def install(self, spec, prefix):
configure('--prefix={0}'.format(prefix))
make()
make('install')
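For comparison, a CMake-based package would fill in ``cmake_args`` rather
than ``configure_args``. A hypothetical sketch (the ``-D`` option below is a
generic CMake flag chosen for illustration, not one mpileaks understands):

.. code-block:: python

   def cmake_args(self):
       # Return whatever -D definitions the project's CMakeLists.txt expects.
       return ['-DBUILD_SHARED_LIBS=ON']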
--------
Variants
--------
We have a successful mpileaks build, but let's take some time to improve
it. ``mpileaks`` has a build-time option to truncate parts of the stack
that it walks. Let's add a variant to allow users to set this when they
build in Spack.
To do this, we'll add a variant to our package, as per the following (see
``$SPACK_ROOT/lib/spack/docs/tutorial/examples/4.package.py``):
.. literalinclude:: tutorial/examples/4.package.py
:lines: 25-
:language: python
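In case you can't open that file, the change amounts to a ``variant``
declaration plus a few lines in ``configure_args`` that translate the
variant's value into configure options (a minimal sketch; the tutorial's
actual file may differ slightly):

.. code-block:: python

   variant('stackstart', default=0,
           description='Specify the first stack frame to truncate')

   def configure_args(self):
       spec = self.spec
       args = ['--with-adept-utils={0}'.format(spec['adept-utils'].prefix),
               '--with-callpath={0}'.format(spec['callpath'].prefix)]
       stackstart = int(spec.variants['stackstart'].value)
       if stackstart:
           args.extend(['--with-stack-start-c={0}'.format(stackstart),
                        '--with-stack-start-fortran={0}'.format(stackstart)])
       return args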
We've added the variant ``stackstart``, and given it a default value of
``0``. If we install now we can see the stackstart variant added to the
configure line (output truncated for length):
.. code-block:: console
$ spack install --verbose mpileaks stackstart=4
==> Installing mpileaks
==> openmpi is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/openmpi-2.0.1-5ee5j34c2y4kb5c3joygrgahidqnwhnz
==> callpath is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/callpath-1.0.2-zm4pf3gasgxeibyu2y262suktvaazube
==> adept-utils is already installed in /usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/adept-utils-1.0.1-7p7ezxwtajdglj6cmojy2vybjct4j4jz
==> Using cached archive: /usr/workspace/wsa/legendre/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz
==> Staging archive: /usr/workspace/wsa/legendre/spack/var/spack/stage/mpileaks-1.0-otqo2opkhan5ksujt6tpmdftydrieig7/mpileaks-1.0.tar.gz
==> Created stage in /usr/workspace/wsa/legendre/spack/var/spack/stage/mpileaks-1.0-otqo2opkhan5ksujt6tpmdftydrieig7
==> Ran patch() for mpileaks
==> Building mpileaks [AutotoolsPackage]
==> Executing phase : 'autoreconf'
==> Executing phase : 'configure'
==> './configure' '--prefix=/usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/mpileaks-1.0-otqo2opkhan5ksujt6tpmdftydrieig7' '--with-adept-utils=/usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/adept-utils-1.0.1-7p7ezxwtajdglj6cmojy2vybjct4j4jz' '--with-callpath=/usr/workspace/wsa/legendre/spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/callpath-1.0.2-zm4pf3gasgxeibyu2y262suktvaazube' '--with-stack-start-c=4' '--with-stack-start-fortran=4'
---------------
The Spec Object
---------------
This tutorial has glossed over a few important features, which weren't
too relevant for mpileaks but may be useful for other packages. There
were several places where we referenced the ``self.spec`` object. This is a
powerful class for querying information about what we're building. For
example, you could use the spec to query information about how a
package's dependencies were built, or what compiler was being used, or
what version of a package is being installed. Full documentation can be
found in the :ref:`packaging-guide`, but here are some quick snippets with
common queries (a combined sketch follows the list):
- Am I building ``mpileaks`` version ``1.1`` or greater?
.. code-block:: python
if self.spec.satisfies('@1.1:'):
# Do things needed for 1.1+
- Is ``openmpi`` the MPI I'm building with?
.. code-block:: python
if self.spec['mpi'].name == 'openmpi':
# Do openmpi things
- Am I building with ``gcc`` version ``5.0.0`` or earlier?
.. code-block:: python
if self.spec.satisfies('%gcc@:5.0.0'):
# Add arguments specific to gcc 5.0.0 and earlier
- Am I building with the ``debug`` variant?
.. code-block:: python
if self.spec.satisfies('+debug'):
# Add -g option to configure flags
- Is my ``dyninst`` dependency at version ``8.0`` or higher?
.. code-block:: python
if self.spec['dyninst'].satisfies('@8.0:'):
# Use newest dyninst options
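Putting a few of these together: such queries usually live inside methods
like ``configure_args``. A hypothetical sketch (the flags below are made up
for illustration):

.. code-block:: python

   def configure_args(self):
       args = []
       if self.spec['mpi'].name == 'openmpi':
           args.append('--enable-openmpi-hooks')  # hypothetical flag
       if self.spec.satisfies('+debug'):
           args.append('--enable-debug')          # hypothetical flag
       return args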
More examples can be found in the thousands of packages already added to
Spack in ``$SPACK_ROOT/var/spack/repos/builtin/packages``.
Good Luck!



lib/spack/env/cc (vendored)

@@ -174,18 +174,6 @@ if [[ -z $command ]]; then
die "ERROR: Compiler '$SPACK_COMPILER_SPEC' does not support compiling $language programs."
fi
#
# Set paths as defined in the 'environment' section of the compiler config
# names are stored in SPACK_ENV_TO_SET
# values are stored in SPACK_ENV_SET_<varname>
#
IFS=':' read -ra env_set_varnames <<< "$SPACK_ENV_TO_SET"
for varname in "${env_set_varnames[@]}"; do
spack_varname="SPACK_ENV_SET_$varname"
export $varname=${!spack_varname}
unset $spack_varname
done
#
# Filter '.' and Spack environment directories out of PATH so that
# this script doesn't just call itself
@@ -216,9 +204,9 @@ fi
# It doesn't work with -rpath.
# This variable controls whether they are added.
add_rpaths=true
if [[ ($mode == ld || $mode == ccld) && "$SPACK_SHORT_SPEC" =~ "darwin" ]]; then
if [[ $mode == ld && "$SPACK_SHORT_SPEC" =~ "darwin" ]]; then
for arg in "$@"; do
if [[ ($arg == -r && $mode == ld) || ($arg == -Wl,-r && $mode == ccld) ]]; then
if [[ $arg == -r ]]; then
add_rpaths=false
break
fi
@@ -278,38 +266,22 @@ for dep in "${deps[@]}"; do
# Prepend lib and RPATH directories
if [[ -d $dep/lib ]]; then
if [[ $mode == ccld ]]; then
if [[ $SPACK_RPATH_DEPS == *$dep* ]]; then
$add_rpaths && args=("$rpath$dep/lib" "${args[@]}")
fi
if [[ $SPACK_LINK_DEPS == *$dep* ]]; then
args=("-L$dep/lib" "${args[@]}")
fi
$add_rpaths && args=("$rpath$dep/lib" "${args[@]}")
args=("-L$dep/lib" "${args[@]}")
elif [[ $mode == ld ]]; then
if [[ $SPACK_RPATH_DEPS == *$dep* ]]; then
$add_rpaths && args=("-rpath" "$dep/lib" "${args[@]}")
fi
if [[ $SPACK_LINK_DEPS == *$dep* ]]; then
args=("-L$dep/lib" "${args[@]}")
fi
$add_rpaths && args=("-rpath" "$dep/lib" "${args[@]}")
args=("-L$dep/lib" "${args[@]}")
fi
fi
# Prepend lib64 and RPATH directories
if [[ -d $dep/lib64 ]]; then
if [[ $mode == ccld ]]; then
if [[ $SPACK_RPATH_DEPS == *$dep* ]]; then
$add_rpaths && args=("$rpath$dep/lib64" "${args[@]}")
fi
if [[ $SPACK_LINK_DEPS == *$dep* ]]; then
args=("-L$dep/lib64" "${args[@]}")
fi
$add_rpaths && args=("$rpath$dep/lib64" "${args[@]}")
args=("-L$dep/lib64" "${args[@]}")
elif [[ $mode == ld ]]; then
if [[ $SPACK_RPATH_DEPS == *$dep* ]]; then
$add_rpaths && args=("-rpath" "$dep/lib64" "${args[@]}")
fi
if [[ $SPACK_LINK_DEPS == *$dep* ]]; then
args=("-L$dep/lib64" "${args[@]}")
fi
$add_rpaths && args=("-rpath" "$dep/lib64" "${args[@]}")
args=("-L$dep/lib64" "${args[@]}")
fi
fi
done
@@ -323,22 +295,19 @@ elif [[ $mode == ld ]]; then
$add_rpaths && args=("-rpath" "$SPACK_PREFIX/lib" "${args[@]}")
fi
# Set extra RPATHs
IFS=':' read -ra extra_rpaths <<< "$SPACK_COMPILER_EXTRA_RPATHS"
for extra_rpath in "${extra_rpaths[@]}"; do
if [[ $mode == ccld ]]; then
$add_rpaths && args=("$rpath$extra_rpath" "${args[@]}")
elif [[ $mode == ld ]]; then
$add_rpaths && args=("-rpath" "$extra_rpath" "${args[@]}")
fi
done
# Add SPACK_LDLIBS to args
case "$mode" in
ld|ccld)
args=("${args[@]}" ${SPACK_LDLIBS[@]}) ;;
esac
#
# Unset pesky environment variables that could affect build sanity.
#
unset LD_LIBRARY_PATH
unset LD_RUN_PATH
unset DYLD_LIBRARY_PATH
full_command=("$command" "${args[@]}")
# In test command mode, write out full command for Spack tests.


@@ -1 +0,0 @@
../cc


@@ -1,26 +1,26 @@
##############################################################################
# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
# Copyright (c) 2013, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
# it under the terms of the GNU General Public License (as published by
# the Free Software Foundation) version 2.1 dated February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
# conditions of the GNU General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
"""
This module contains external, potentially separately licensed,
@@ -29,21 +29,10 @@
So far:
argparse: We include our own version to be Python 2.6 compatible.
distro: Provides a more stable linux distribution detection.
functools: Used for implementation of total_ordering.
jsonschema: An implementation of JSON Schema for Python.
ordereddict: We include our own version to be Python 2.6 compatible.
py: Needed by pytest. Library with cross-python path,
ini-parsing, io, code, and log facilities.
pyqver2: External script to query required python version of
python source code. Used for ensuring 2.6 compatibility.
pytest: Testing framework used by Spack.
functools: Used for implementation of total_ordering.
yaml: Used for config files.
"""


@@ -1,141 +0,0 @@
Holger Krekel, holger at merlinux eu
merlinux GmbH, Germany, office at merlinux eu
Contributors include::
Abdeali JK
Abhijeet Kasurde
Ahn Ki-Wook
Alexei Kozlenok
Anatoly Bubenkoff
Andreas Zeidler
Andrzej Ostrowski
Andy Freeland
Anthon van der Neut
Antony Lee
Armin Rigo
Aron Curzon
Aviv Palivoda
Ben Webb
Benjamin Peterson
Bernard Pratz
Bob Ippolito
Brian Dorsey
Brian Okken
Brianna Laugher
Bruno Oliveira
Cal Leeming
Carl Friedrich Bolz
Charles Cloud
Charnjit SiNGH (CCSJ)
Chris Lamb
Christian Boelsen
Christian Theunert
Christian Tismer
Christopher Gilling
Daniel Grana
Daniel Hahler
Daniel Nuri
Daniel Wandschneider
Danielle Jenkins
Dave Hunt
David Díaz-Barquero
David Mohr
David Vierra
Diego Russo
Dmitry Dygalo
Duncan Betts
Edison Gustavo Muenz
Edoardo Batini
Eduardo Schettino
Elizaveta Shashkova
Endre Galaczi
Eric Hunsberger
Eric Siegerman
Erik M. Bray
Feng Ma
Florian Bruhin
Floris Bruynooghe
Gabriel Reis
Georgy Dyuldin
Graham Horler
Greg Price
Grig Gheorghiu
Grigorii Eremeev (budulianin)
Guido Wesdorp
Harald Armin Massa
Ian Bicking
Jaap Broekhuizen
Jan Balster
Janne Vanhala
Jason R. Coombs
Javier Domingo Cansino
Javier Romero
John Towler
Jon Sonesen
Jordan Guymon
Joshua Bronson
Jurko Gospodnetić
Justyna Janczyszyn
Kale Kundert
Katarzyna Jachim
Kevin Cox
Lee Kamentsky
Lev Maximov
Lukas Bednar
Luke Murphy
Maciek Fijalkowski
Maho
Marc Schlaich
Marcin Bachry
Mark Abramowitz
Markus Unterwaditzer
Martijn Faassen
Martin K. Scherer
Martin Prusse
Mathieu Clabaut
Matt Bachmann
Matt Williams
Matthias Hafner
mbyt
Michael Aquilina
Michael Birtwell
Michael Droettboom
Michael Seifert
Mike Lundy
Ned Batchelder
Neven Mundar
Nicolas Delaby
Oleg Pidsadnyi
Oliver Bestwalter
Omar Kohl
Pieter Mulder
Piotr Banaszkiewicz
Punyashloka Biswal
Quentin Pradet
Ralf Schmitt
Raphael Pierzina
Raquel Alegre
Roberto Polli
Romain Dorgueil
Roman Bolshakov
Ronny Pfannschmidt
Ross Lawley
Russel Winder
Ryan Wooden
Samuele Pedroni
Simon Gomizelj
Stefan Farmbauer
Stefan Zimmermann
Stefano Taschini
Steffen Allner
Stephan Obermann
Tareq Alayan
Ted Xiao
Thomas Grainger
Tom Viner
Trevor Bekolay
Tyler Goodlet
Vasily Kuznetsov
Wouter van Ackooy
Xuecong Liao


@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2004-2016 Holger Krekel and others
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,102 +0,0 @@
.. image:: http://docs.pytest.org/en/latest/_static/pytest1.png
:target: http://docs.pytest.org
:align: center
:alt: pytest
------
.. image:: https://img.shields.io/pypi/v/pytest.svg
:target: https://pypi.python.org/pypi/pytest
.. image:: https://img.shields.io/pypi/pyversions/pytest.svg
:target: https://pypi.python.org/pypi/pytest
.. image:: https://img.shields.io/coveralls/pytest-dev/pytest/master.svg
:target: https://coveralls.io/r/pytest-dev/pytest
.. image:: https://travis-ci.org/pytest-dev/pytest.svg?branch=master
:target: https://travis-ci.org/pytest-dev/pytest
.. image:: https://ci.appveyor.com/api/projects/status/mrgbjaua7t33pg6b?svg=true
:target: https://ci.appveyor.com/project/pytestbot/pytest
The ``pytest`` framework makes it easy to write small tests, yet
scales to support complex functional testing for applications and libraries.
An example of a simple test:
.. code-block:: python
# content of test_sample.py
def inc(x):
return x + 1
def test_answer():
assert inc(3) == 5
To execute it::
$ pytest
============================= test session starts =============================
collected 1 items
test_sample.py F
================================== FAILURES ===================================
_________________________________ test_answer _________________________________
def test_answer():
> assert inc(3) == 5
E assert 4 == 5
E + where 4 = inc(3)
test_sample.py:5: AssertionError
========================== 1 failed in 0.04 seconds ===========================
Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <http://docs.pytest.org/en/latest/getting-started.html#our-first-test-run>`_ for more examples.
Features
--------
- Detailed info on failing `assert statements <http://docs.pytest.org/en/latest/assert.html>`_ (no need to remember ``self.assert*`` names);
- `Auto-discovery
<http://docs.pytest.org/en/latest/goodpractices.html#python-test-discovery>`_
of test modules and functions;
- `Modular fixtures <http://docs.pytest.org/en/latest/fixture.html>`_ for
managing small or parametrized long-lived test resources;
- Can run `unittest <http://docs.pytest.org/en/latest/unittest.html>`_ (or trial),
`nose <http://docs.pytest.org/en/latest/nose.html>`_ test suites out of the box;
- Python2.6+, Python3.3+, PyPy-2.3, Jython-2.5 (untested);
- Rich plugin architecture, with over 150+ `external plugins <http://docs.pytest.org/en/latest/plugins.html#installing-external-plugins-searching>`_ and thriving community;
Documentation
-------------
For full documentation, including installation, tutorials and PDF documents, please see http://docs.pytest.org.
Bugs/Requests
-------------
Please use the `GitHub issue tracker <https://github.com/pytest-dev/pytest/issues>`_ to submit bugs or request features.
Changelog
---------
Consult the `Changelog <http://docs.pytest.org/en/latest/changelog.html>`__ page for fixes and enhancements of each version.
License
-------
Copyright Holger Krekel and others, 2004-2016.
Distributed under the terms of the `MIT`_ license, pytest is free and open source software.
.. _`MIT`: https://github.com/pytest-dev/pytest/blob/master/LICENSE


@@ -1,2 +0,0 @@
#
__version__ = '3.0.5'


@@ -1,102 +0,0 @@
"""allow bash-completion for argparse with argcomplete if installed
needs argcomplete>=0.5.6 for python 3.2/3.3 (older versions fail
to find the magic string, so _ARGCOMPLETE env. var is never set, and
this does not need special code.
argcomplete does not support python 2.5 (although the changes for that
are minor).
Function try_argcomplete(parser) should be called directly before
the call to ArgumentParser.parse_args().
The filescompleter is what you normally would use on the positional
arguments specification, in order to get "dirname/" after "dirn<TAB>"
instead of the default "dirname ":
optparser.add_argument(Config._file_or_dir, nargs='*'
).completer=filescompleter
Other, application specific, completers should go in the file
doing the add_argument calls as they need to be specified as .completer
attributes as well. (If argcomplete is not installed, the function the
attribute points to will not be used).
SPEEDUP
=======
The generic argcomplete script for bash-completion
(/etc/bash_completion.d/python-argcomplete.sh )
uses a python program to determine startup script generated by pip.
You can speed up completion somewhat by changing this script to include
# PYTHON_ARGCOMPLETE_OK
so the the python-argcomplete-check-easy-install-script does not
need to be called to find the entry point of the code and see if that is
marked with PYTHON_ARGCOMPLETE_OK
INSTALL/DEBUGGING
=================
To include this support in another application that has setup.py generated
scripts:
- add the line:
# PYTHON_ARGCOMPLETE_OK
near the top of the main python entry point
- include in the file calling parse_args():
from _argcomplete import try_argcomplete, filescompleter
, call try_argcomplete just before parse_args(), and optionally add
filescompleter to the positional arguments' add_argument()
If things do not work right away:
- switch on argcomplete debugging with (also helpful when doing custom
completers):
export _ARC_DEBUG=1
- run:
python-argcomplete-check-easy-install-script $(which appname)
echo $?
will echo 0 if the magic line has been found, 1 if not
- sometimes it helps to find early on errors using:
_ARGCOMPLETE=1 _ARC_DEBUG=1 appname
which should throw a KeyError: 'COMPLINE' (which is properly set by the
global argcomplete script).
"""
import sys
import os
from glob import glob
class FastFilesCompleter:
'Fast file completer class'
def __init__(self, directories=True):
self.directories = directories
def __call__(self, prefix, **kwargs):
"""only called on non option completions"""
if os.path.sep in prefix[1:]: #
prefix_dir = len(os.path.dirname(prefix) + os.path.sep)
else:
prefix_dir = 0
completion = []
globbed = []
if '*' not in prefix and '?' not in prefix:
if prefix[-1] == os.path.sep: # we are on unix, otherwise no bash
globbed.extend(glob(prefix + '.*'))
prefix += '*'
globbed.extend(glob(prefix))
for x in sorted(globbed):
if os.path.isdir(x):
x += '/'
# append stripping the prefix (like bash, not like compgen)
completion.append(x[prefix_dir:])
return completion
if os.environ.get('_ARGCOMPLETE'):
try:
import argcomplete.completers
except ImportError:
sys.exit(-1)
filescompleter = FastFilesCompleter()
def try_argcomplete(parser):
argcomplete.autocomplete(parser)
else:
def try_argcomplete(parser): pass
filescompleter = None


@@ -1,9 +0,0 @@
""" python inspection/code generation API """
from .code import Code # noqa
from .code import ExceptionInfo # noqa
from .code import Frame # noqa
from .code import Traceback # noqa
from .code import getrawcode # noqa
from .source import Source # noqa
from .source import compile_ as compile # noqa
from .source import getfslineno # noqa


@@ -1,81 +0,0 @@
# copied from python-2.7.3's traceback.py
# CHANGES:
# - some_str is replaced, trying to create unicode strings
#
import types
def format_exception_only(etype, value):
"""Format the exception part of a traceback.
The arguments are the exception type and value such as given by
sys.last_type and sys.last_value. The return value is a list of
strings, each ending in a newline.
Normally, the list contains a single string; however, for
SyntaxError exceptions, it contains several lines that (when
printed) display detailed information about where the syntax
error occurred.
The message indicating which exception occurred is always the last
string in the list.
"""
# An instance should not have a meaningful value parameter, but
# sometimes does, particularly for string exceptions, such as
# >>> raise string1, string2 # deprecated
#
# Clear these out first because issubtype(string1, SyntaxError)
# would throw another exception and mask the original problem.
if (isinstance(etype, BaseException) or
isinstance(etype, types.InstanceType) or
etype is None or type(etype) is str):
return [_format_final_exc_line(etype, value)]
stype = etype.__name__
if not issubclass(etype, SyntaxError):
return [_format_final_exc_line(stype, value)]
# It was a syntax error; show exactly where the problem was found.
lines = []
try:
msg, (filename, lineno, offset, badline) = value.args
except Exception:
pass
else:
filename = filename or "<string>"
lines.append(' File "%s", line %d\n' % (filename, lineno))
if badline is not None:
if isinstance(badline, bytes): # python 2 only
badline = badline.decode('utf-8', 'replace')
lines.append(u' %s\n' % badline.strip())
if offset is not None:
caretspace = badline.rstrip('\n')[:offset].lstrip()
# non-space whitespace (likes tabs) must be kept for alignment
caretspace = ((c.isspace() and c or ' ') for c in caretspace)
# only three spaces to account for offset1 == pos 0
lines.append(' %s^\n' % ''.join(caretspace))
value = msg
lines.append(_format_final_exc_line(stype, value))
return lines
def _format_final_exc_line(etype, value):
"""Return a list of a single line -- normal case for format_exception_only"""
valuestr = _some_str(value)
if value is None or not valuestr:
line = "%s\n" % etype
else:
line = "%s: %s\n" % (etype, valuestr)
return line
def _some_str(value):
try:
return unicode(value)
except Exception:
try:
return str(value)
except Exception:
pass
return '<unprintable %s object>' % type(value).__name__


@@ -1,861 +0,0 @@
import sys
from inspect import CO_VARARGS, CO_VARKEYWORDS
import re
from weakref import ref
import py
builtin_repr = repr
reprlib = py.builtin._tryimport('repr', 'reprlib')
if sys.version_info[0] >= 3:
from traceback import format_exception_only
else:
from ._py2traceback import format_exception_only
class Code(object):
""" wrapper around Python code objects """
def __init__(self, rawcode):
if not hasattr(rawcode, "co_filename"):
rawcode = getrawcode(rawcode)
try:
self.filename = rawcode.co_filename
self.firstlineno = rawcode.co_firstlineno - 1
self.name = rawcode.co_name
except AttributeError:
raise TypeError("not a code object: %r" %(rawcode,))
self.raw = rawcode
def __eq__(self, other):
return self.raw == other.raw
__hash__ = None
def __ne__(self, other):
return not self == other
@property
def path(self):
""" return a path object pointing to source code (note that it
might not point to an actually existing file). """
try:
p = py.path.local(self.raw.co_filename)
# maybe don't try this checking
if not p.check():
raise OSError("py.path check failed.")
except OSError:
# XXX maybe try harder like the weird logic
# in the standard lib [linecache.updatecache] does?
p = self.raw.co_filename
return p
@property
def fullsource(self):
""" return a _pytest._code.Source object for the full source file of the code
"""
from _pytest._code import source
full, _ = source.findsource(self.raw)
return full
def source(self):
""" return a _pytest._code.Source object for the code object's source only
"""
# return source only for that part of code
import _pytest._code
return _pytest._code.Source(self.raw)
def getargs(self, var=False):
""" return a tuple with the argument names for the code object
if 'var' is set True also return the names of the variable and
keyword arguments when present
"""
# handfull shortcut for getting args
raw = self.raw
argcount = raw.co_argcount
if var:
argcount += raw.co_flags & CO_VARARGS
argcount += raw.co_flags & CO_VARKEYWORDS
return raw.co_varnames[:argcount]
class Frame(object):
"""Wrapper around a Python frame holding f_locals and f_globals
in which expressions can be evaluated."""
def __init__(self, frame):
self.lineno = frame.f_lineno - 1
self.f_globals = frame.f_globals
self.f_locals = frame.f_locals
self.raw = frame
self.code = Code(frame.f_code)
@property
def statement(self):
""" statement this frame is at """
import _pytest._code
if self.code.fullsource is None:
return _pytest._code.Source("")
return self.code.fullsource.getstatement(self.lineno)
def eval(self, code, **vars):
""" evaluate 'code' in the frame
'vars' are optional additional local variables
returns the result of the evaluation
"""
f_locals = self.f_locals.copy()
f_locals.update(vars)
return eval(code, self.f_globals, f_locals)
def exec_(self, code, **vars):
""" exec 'code' in the frame
'vars' are optiona; additional local variables
"""
f_locals = self.f_locals.copy()
f_locals.update(vars)
py.builtin.exec_(code, self.f_globals, f_locals )
def repr(self, object):
""" return a 'safe' (non-recursive, one-line) string repr for 'object'
"""
return py.io.saferepr(object)
def is_true(self, object):
return object
def getargs(self, var=False):
""" return a list of tuples (name, value) for all arguments
if 'var' is set True also include the variable and keyword
arguments when present
"""
retval = []
for arg in self.code.getargs(var):
try:
retval.append((arg, self.f_locals[arg]))
except KeyError:
pass # this can occur when using Psyco
return retval
class TracebackEntry(object):
""" a single entry in a traceback """
_repr_style = None
exprinfo = None
def __init__(self, rawentry, excinfo=None):
self._excinfo = excinfo
self._rawentry = rawentry
self.lineno = rawentry.tb_lineno - 1
def set_repr_style(self, mode):
assert mode in ("short", "long")
self._repr_style = mode
@property
def frame(self):
import _pytest._code
return _pytest._code.Frame(self._rawentry.tb_frame)
@property
def relline(self):
return self.lineno - self.frame.code.firstlineno
def __repr__(self):
return "<TracebackEntry %s:%d>" %(self.frame.code.path, self.lineno+1)
@property
def statement(self):
""" _pytest._code.Source object for the current statement """
source = self.frame.code.fullsource
return source.getstatement(self.lineno)
@property
def path(self):
""" path to the source code """
return self.frame.code.path
def getlocals(self):
return self.frame.f_locals
locals = property(getlocals, None, None, "locals of underlaying frame")
def getfirstlinesource(self):
# on Jython this firstlineno can be -1 apparently
return max(self.frame.code.firstlineno, 0)
def getsource(self, astcache=None):
""" return failing source code. """
# we use the passed in astcache to not reparse asttrees
# within exception info printing
from _pytest._code.source import getstatementrange_ast
source = self.frame.code.fullsource
if source is None:
return None
key = astnode = None
if astcache is not None:
key = self.frame.code.path
if key is not None:
astnode = astcache.get(key, None)
start = self.getfirstlinesource()
try:
astnode, _, end = getstatementrange_ast(self.lineno, source,
astnode=astnode)
except SyntaxError:
end = self.lineno + 1
else:
if key is not None:
astcache[key] = astnode
return source[start:end]
source = property(getsource)
def ishidden(self):
""" return True if the current frame has a var __tracebackhide__
resolving to True
If __tracebackhide__ is a callable, it gets called with the
ExceptionInfo instance and can decide whether to hide the traceback.
mostly for internal use
"""
try:
tbh = self.frame.f_locals['__tracebackhide__']
except KeyError:
try:
tbh = self.frame.f_globals['__tracebackhide__']
except KeyError:
return False
if py.builtin.callable(tbh):
return tbh(None if self._excinfo is None else self._excinfo())
else:
return tbh
def __str__(self):
try:
fn = str(self.path)
except py.error.Error:
fn = '???'
name = self.frame.code.name
try:
line = str(self.statement).lstrip()
except KeyboardInterrupt:
raise
except:
line = "???"
return " File %r:%d in %s\n %s\n" %(fn, self.lineno+1, name, line)
def name(self):
return self.frame.code.raw.co_name
name = property(name, None, None, "co_name of underlaying code")
class Traceback(list):
""" Traceback objects encapsulate and offer higher level
access to Traceback entries.
"""
Entry = TracebackEntry
def __init__(self, tb, excinfo=None):
""" initialize from given python traceback object and ExceptionInfo """
self._excinfo = excinfo
if hasattr(tb, 'tb_next'):
def f(cur):
while cur is not None:
yield self.Entry(cur, excinfo=excinfo)
cur = cur.tb_next
list.__init__(self, f(tb))
else:
list.__init__(self, tb)
def cut(self, path=None, lineno=None, firstlineno=None, excludepath=None):
""" return a Traceback instance wrapping part of this Traceback
by provding any combination of path, lineno and firstlineno, the
first frame to start the to-be-returned traceback is determined
this allows cutting the first part of a Traceback instance e.g.
for formatting reasons (removing some uninteresting bits that deal
with handling of the exception/traceback)
"""
for x in self:
code = x.frame.code
codepath = code.path
if ((path is None or codepath == path) and
(excludepath is None or not hasattr(codepath, 'relto') or
not codepath.relto(excludepath)) and
(lineno is None or x.lineno == lineno) and
(firstlineno is None or x.frame.code.firstlineno == firstlineno)):
return Traceback(x._rawentry, self._excinfo)
return self
def __getitem__(self, key):
val = super(Traceback, self).__getitem__(key)
if isinstance(key, type(slice(0))):
val = self.__class__(val)
return val
def filter(self, fn=lambda x: not x.ishidden()):
""" return a Traceback instance with certain items removed
fn is a function that gets a single argument, a TracebackEntry
instance, and should return True when the item should be added
to the Traceback, False when not
by default this removes all the TracebackEntries which are hidden
(see ishidden() above)
"""
return Traceback(filter(fn, self), self._excinfo)
def getcrashentry(self):
""" return last non-hidden traceback entry that lead
to the exception of a traceback.
"""
for i in range(-1, -len(self)-1, -1):
entry = self[i]
if not entry.ishidden():
return entry
return self[-1]
def recursionindex(self):
""" return the index of the frame/TracebackEntry where recursion
originates if appropriate, None if no recursion occurred
"""
cache = {}
for i, entry in enumerate(self):
# id for the code.raw is needed to work around
# the strange metaprogramming in the decorator lib from pypi
# which generates code objects that have hash/value equality
#XXX needs a test
key = entry.frame.code.path, id(entry.frame.code.raw), entry.lineno
#print "checking for recursion at", key
l = cache.setdefault(key, [])
if l:
f = entry.frame
loc = f.f_locals
for otherloc in l:
if f.is_true(f.eval(co_equal,
__recursioncache_locals_1=loc,
__recursioncache_locals_2=otherloc)):
return i
l.append(entry.frame.f_locals)
return None
co_equal = compile('__recursioncache_locals_1 == __recursioncache_locals_2',
'?', 'eval')
class ExceptionInfo(object):
""" wraps sys.exc_info() objects and offers
help for navigating the traceback.
"""
_striptext = ''
def __init__(self, tup=None, exprinfo=None):
import _pytest._code
if tup is None:
tup = sys.exc_info()
if exprinfo is None and isinstance(tup[1], AssertionError):
exprinfo = getattr(tup[1], 'msg', None)
if exprinfo is None:
exprinfo = py._builtin._totext(tup[1])
if exprinfo and exprinfo.startswith('assert '):
self._striptext = 'AssertionError: '
self._excinfo = tup
#: the exception class
self.type = tup[0]
#: the exception instance
self.value = tup[1]
#: the exception raw traceback
self.tb = tup[2]
#: the exception type name
self.typename = self.type.__name__
#: the exception traceback (_pytest._code.Traceback instance)
self.traceback = _pytest._code.Traceback(self.tb, excinfo=ref(self))
def __repr__(self):
return "<ExceptionInfo %s tblen=%d>" % (self.typename, len(self.traceback))
def exconly(self, tryshort=False):
""" return the exception as a string
when 'tryshort' resolves to True, and the exception is a
_pytest._code._AssertionError, only the actual exception part of
the exception representation is returned (so 'AssertionError: ' is
removed from the beginning)
"""
lines = format_exception_only(self.type, self.value)
text = ''.join(lines)
text = text.rstrip()
if tryshort:
if text.startswith(self._striptext):
text = text[len(self._striptext):]
return text
def errisinstance(self, exc):
""" return True if the exception is an instance of exc """
return isinstance(self.value, exc)
def _getreprcrash(self):
exconly = self.exconly(tryshort=True)
entry = self.traceback.getcrashentry()
path, lineno = entry.frame.code.raw.co_filename, entry.lineno
return ReprFileLocation(path, lineno+1, exconly)
def getrepr(self, showlocals=False, style="long",
abspath=False, tbfilter=True, funcargs=False):
""" return str()able representation of this exception info.
showlocals: show locals per traceback entry
style: long|short|no|native traceback style
tbfilter: hide entries (where __tracebackhide__ is true)
in case of style==native, tbfilter and showlocals is ignored.
"""
if style == 'native':
return ReprExceptionInfo(ReprTracebackNative(
py.std.traceback.format_exception(
self.type,
self.value,
self.traceback[0]._rawentry,
)), self._getreprcrash())
fmt = FormattedExcinfo(showlocals=showlocals, style=style,
abspath=abspath, tbfilter=tbfilter, funcargs=funcargs)
return fmt.repr_excinfo(self)
def __str__(self):
entry = self.traceback[-1]
loc = ReprFileLocation(entry.path, entry.lineno + 1, self.exconly())
return str(loc)
def __unicode__(self):
entry = self.traceback[-1]
loc = ReprFileLocation(entry.path, entry.lineno + 1, self.exconly())
return unicode(loc)
def match(self, regexp):
"""
Match the regular expression 'regexp' on the string representation of
the exception. If it matches then True is returned (so that it is
possible to write 'assert excinfo.match()'). If it doesn't match an
AssertionError is raised.
"""
__tracebackhide__ = True
if not re.search(regexp, str(self.value)):
assert 0, "Pattern '{0!s}' not found in '{1!s}'".format(
regexp, self.value)
return True
class FormattedExcinfo(object):
""" presenting information about failing Functions and Generators. """
# for traceback entries
flow_marker = ">"
fail_marker = "E"
def __init__(self, showlocals=False, style="long", abspath=True, tbfilter=True, funcargs=False):
self.showlocals = showlocals
self.style = style
self.tbfilter = tbfilter
self.funcargs = funcargs
self.abspath = abspath
self.astcache = {}
def _getindent(self, source):
# figure out indent for given source
try:
s = str(source.getstatement(len(source)-1))
except KeyboardInterrupt:
raise
except:
try:
s = str(source[-1])
except KeyboardInterrupt:
raise
except:
return 0
return 4 + (len(s) - len(s.lstrip()))
def _getentrysource(self, entry):
source = entry.getsource(self.astcache)
if source is not None:
source = source.deindent()
return source
def _saferepr(self, obj):
return py.io.saferepr(obj)
def repr_args(self, entry):
if self.funcargs:
args = []
for argname, argvalue in entry.frame.getargs(var=True):
args.append((argname, self._saferepr(argvalue)))
return ReprFuncArgs(args)
def get_source(self, source, line_index=-1, excinfo=None, short=False):
""" return formatted and marked up source lines. """
import _pytest._code
lines = []
if source is None or line_index >= len(source.lines):
source = _pytest._code.Source("???")
line_index = 0
if line_index < 0:
line_index += len(source)
space_prefix = " "
if short:
lines.append(space_prefix + source.lines[line_index].strip())
else:
for line in source.lines[:line_index]:
lines.append(space_prefix + line)
lines.append(self.flow_marker + " " + source.lines[line_index])
for line in source.lines[line_index+1:]:
lines.append(space_prefix + line)
if excinfo is not None:
indent = 4 if short else self._getindent(source)
lines.extend(self.get_exconly(excinfo, indent=indent, markall=True))
return lines
def get_exconly(self, excinfo, indent=4, markall=False):
lines = []
indent = " " * indent
# get the real exception information out
exlines = excinfo.exconly(tryshort=True).split('\n')
failindent = self.fail_marker + indent[1:]
for line in exlines:
lines.append(failindent + line)
if not markall:
failindent = indent
return lines
def repr_locals(self, locals):
if self.showlocals:
lines = []
keys = [loc for loc in locals if loc[0] != "@"]
keys.sort()
for name in keys:
value = locals[name]
if name == '__builtins__':
lines.append("__builtins__ = <builtins>")
else:
# This formatting could all be handled by the
# _repr() function, which is only reprlib.Repr in
# disguise, so is very configurable.
str_repr = self._saferepr(value)
#if len(str_repr) < 70 or not isinstance(value,
# (list, tuple, dict)):
lines.append("%-10s = %s" %(name, str_repr))
#else:
# self._line("%-10s =\\" % (name,))
# # XXX
# py.std.pprint.pprint(value, stream=self.excinfowriter)
return ReprLocals(lines)
def repr_traceback_entry(self, entry, excinfo=None):
import _pytest._code
source = self._getentrysource(entry)
if source is None:
source = _pytest._code.Source("???")
line_index = 0
else:
# entry.getfirstlinesource() can be -1, should be 0 on jython
line_index = entry.lineno - max(entry.getfirstlinesource(), 0)
lines = []
style = entry._repr_style
if style is None:
style = self.style
if style in ("short", "long"):
short = style == "short"
reprargs = self.repr_args(entry) if not short else None
s = self.get_source(source, line_index, excinfo, short=short)
lines.extend(s)
if short:
message = "in %s" %(entry.name)
else:
message = excinfo and excinfo.typename or ""
path = self._makepath(entry.path)
filelocrepr = ReprFileLocation(path, entry.lineno+1, message)
localsrepr = None
if not short:
localsrepr = self.repr_locals(entry.locals)
return ReprEntry(lines, reprargs, localsrepr, filelocrepr, style)
if excinfo:
lines.extend(self.get_exconly(excinfo, indent=4))
return ReprEntry(lines, None, None, None, style)
def _makepath(self, path):
if not self.abspath:
try:
np = py.path.local().bestrelpath(path)
except OSError:
return path
if len(np) < len(str(path)):
path = np
return path
def repr_traceback(self, excinfo):
traceback = excinfo.traceback
if self.tbfilter:
traceback = traceback.filter()
recursionindex = None
if is_recursion_error(excinfo):
recursionindex = traceback.recursionindex()
last = traceback[-1]
entries = []
extraline = None
for index, entry in enumerate(traceback):
einfo = (last == entry) and excinfo or None
reprentry = self.repr_traceback_entry(entry, einfo)
entries.append(reprentry)
if index == recursionindex:
extraline = "!!! Recursion detected (same locals & position)"
break
return ReprTraceback(entries, extraline, style=self.style)
def repr_excinfo(self, excinfo):
if sys.version_info[0] < 3:
reprtraceback = self.repr_traceback(excinfo)
reprcrash = excinfo._getreprcrash()
return ReprExceptionInfo(reprtraceback, reprcrash)
else:
repr_chain = []
e = excinfo.value
descr = None
while e is not None:
if excinfo:
reprtraceback = self.repr_traceback(excinfo)
reprcrash = excinfo._getreprcrash()
else:
# fallback to native repr if the exception doesn't have a traceback:
# ExceptionInfo objects require a full traceback to work
reprtraceback = ReprTracebackNative(py.std.traceback.format_exception(type(e), e, None))
reprcrash = None
repr_chain += [(reprtraceback, reprcrash, descr)]
if e.__cause__ is not None:
e = e.__cause__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'The above exception was the direct cause of the following exception:'
elif e.__context__ is not None:
e = e.__context__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'During handling of the above exception, another exception occurred:'
else:
e = None
repr_chain.reverse()
return ExceptionChainRepr(repr_chain)
class TerminalRepr(object):
def __str__(self):
s = self.__unicode__()
if sys.version_info[0] < 3:
s = s.encode('utf-8')
return s
def __unicode__(self):
# FYI this is called from pytest-xdist's serialization of exception
# information.
io = py.io.TextIO()
tw = py.io.TerminalWriter(file=io)
self.toterminal(tw)
return io.getvalue().strip()
def __repr__(self):
return "<%s instance at %0x>" %(self.__class__, id(self))
class ExceptionRepr(TerminalRepr):
def __init__(self):
self.sections = []
def addsection(self, name, content, sep="-"):
self.sections.append((name, content, sep))
def toterminal(self, tw):
for name, content, sep in self.sections:
tw.sep(sep, name)
tw.line(content)
class ExceptionChainRepr(ExceptionRepr):
def __init__(self, chain):
super(ExceptionChainRepr, self).__init__()
self.chain = chain
# reprcrash and reprtraceback of the outermost (the newest) exception
# in the chain
self.reprtraceback = chain[-1][0]
self.reprcrash = chain[-1][1]
def toterminal(self, tw):
for element in self.chain:
element[0].toterminal(tw)
if element[2] is not None:
tw.line("")
tw.line(element[2], yellow=True)
super(ExceptionChainRepr, self).toterminal(tw)
class ReprExceptionInfo(ExceptionRepr):
def __init__(self, reprtraceback, reprcrash):
super(ReprExceptionInfo, self).__init__()
self.reprtraceback = reprtraceback
self.reprcrash = reprcrash
def toterminal(self, tw):
self.reprtraceback.toterminal(tw)
super(ReprExceptionInfo, self).toterminal(tw)
class ReprTraceback(TerminalRepr):
entrysep = "_ "
def __init__(self, reprentries, extraline, style):
self.reprentries = reprentries
self.extraline = extraline
self.style = style
def toterminal(self, tw):
# the entries might have different styles
for i, entry in enumerate(self.reprentries):
if entry.style == "long":
tw.line("")
entry.toterminal(tw)
if i < len(self.reprentries) - 1:
next_entry = self.reprentries[i+1]
if entry.style == "long" or \
entry.style == "short" and next_entry.style == "long":
tw.sep(self.entrysep)
if self.extraline:
tw.line(self.extraline)
class ReprTracebackNative(ReprTraceback):
def __init__(self, tblines):
self.style = "native"
self.reprentries = [ReprEntryNative(tblines)]
self.extraline = None
class ReprEntryNative(TerminalRepr):
style = "native"
def __init__(self, tblines):
self.lines = tblines
def toterminal(self, tw):
tw.write("".join(self.lines))
class ReprEntry(TerminalRepr):
localssep = "_ "
def __init__(self, lines, reprfuncargs, reprlocals, filelocrepr, style):
self.lines = lines
self.reprfuncargs = reprfuncargs
self.reprlocals = reprlocals
self.reprfileloc = filelocrepr
self.style = style
def toterminal(self, tw):
if self.style == "short":
self.reprfileloc.toterminal(tw)
for line in self.lines:
red = line.startswith("E ")
tw.line(line, bold=True, red=red)
#tw.line("")
return
if self.reprfuncargs:
self.reprfuncargs.toterminal(tw)
for line in self.lines:
red = line.startswith("E ")
tw.line(line, bold=True, red=red)
if self.reprlocals:
#tw.sep(self.localssep, "Locals")
tw.line("")
self.reprlocals.toterminal(tw)
if self.reprfileloc:
if self.lines:
tw.line("")
self.reprfileloc.toterminal(tw)
def __str__(self):
return "%s\n%s\n%s" % ("\n".join(self.lines),
self.reprlocals,
self.reprfileloc)
class ReprFileLocation(TerminalRepr):
def __init__(self, path, lineno, message):
self.path = str(path)
self.lineno = lineno
self.message = message
def toterminal(self, tw):
# filename and lineno output for each entry,
# using an output format that most editors unterstand
msg = self.message
i = msg.find("\n")
if i != -1:
msg = msg[:i]
tw.write(self.path, bold=True, red=True)
tw.line(":%s: %s" % (self.lineno, msg))
class ReprLocals(TerminalRepr):
def __init__(self, lines):
self.lines = lines
def toterminal(self, tw):
for line in self.lines:
tw.line(line)
class ReprFuncArgs(TerminalRepr):
def __init__(self, args):
self.args = args
def toterminal(self, tw):
if self.args:
linesofar = ""
for name, value in self.args:
ns = "%s = %s" %(name, value)
if len(ns) + len(linesofar) + 2 > tw.fullwidth:
if linesofar:
tw.line(linesofar)
linesofar = ns
else:
if linesofar:
linesofar += ", " + ns
else:
linesofar = ns
if linesofar:
tw.line(linesofar)
tw.line("")
def getrawcode(obj, trycall=True):
""" return code object for given function. """
try:
return obj.__code__
except AttributeError:
obj = getattr(obj, 'im_func', obj)
obj = getattr(obj, 'func_code', obj)
obj = getattr(obj, 'f_code', obj)
obj = getattr(obj, '__code__', obj)
if trycall and not hasattr(obj, 'co_firstlineno'):
if hasattr(obj, '__call__') and not py.std.inspect.isclass(obj):
x = getrawcode(obj.__call__, trycall=False)
if hasattr(x, 'co_firstlineno'):
return x
return obj
if sys.version_info[:2] >= (3, 5): # RecursionError introduced in 3.5
def is_recursion_error(excinfo):
return excinfo.errisinstance(RecursionError) # noqa
else:
def is_recursion_error(excinfo):
if not excinfo.errisinstance(RuntimeError):
return False
try:
return "maximum recursion depth exceeded" in str(excinfo.value)
except UnicodeError:
return False


@@ -1,414 +0,0 @@
from __future__ import generators
from bisect import bisect_right
import sys
import inspect, tokenize
import py
cpy_compile = compile
try:
import _ast
from _ast import PyCF_ONLY_AST as _AST_FLAG
except ImportError:
_AST_FLAG = 0
_ast = None
class Source(object):
""" a immutable object holding a source code fragment,
possibly deindenting it.
"""
_compilecounter = 0
def __init__(self, *parts, **kwargs):
self.lines = lines = []
de = kwargs.get('deindent', True)
rstrip = kwargs.get('rstrip', True)
for part in parts:
if not part:
partlines = []
if isinstance(part, Source):
partlines = part.lines
elif isinstance(part, (tuple, list)):
partlines = [x.rstrip("\n") for x in part]
elif isinstance(part, py.builtin._basestring):
partlines = part.split('\n')
if rstrip:
while partlines:
if partlines[-1].strip():
break
partlines.pop()
else:
partlines = getsource(part, deindent=de).lines
if de:
partlines = deindent(partlines)
lines.extend(partlines)
def __eq__(self, other):
try:
return self.lines == other.lines
except AttributeError:
if isinstance(other, str):
return str(self) == other
return False
__hash__ = None
def __getitem__(self, key):
if isinstance(key, int):
return self.lines[key]
else:
if key.step not in (None, 1):
raise IndexError("cannot slice a Source with a step")
newsource = Source()
newsource.lines = self.lines[key.start:key.stop]
return newsource
def __len__(self):
return len(self.lines)
def strip(self):
""" return new source object with trailing
and leading blank lines removed.
"""
start, end = 0, len(self)
while start < end and not self.lines[start].strip():
start += 1
while end > start and not self.lines[end-1].strip():
end -= 1
source = Source()
source.lines[:] = self.lines[start:end]
return source
def putaround(self, before='', after='', indent=' ' * 4):
""" return a copy of the source object with
'before' and 'after' wrapped around it.
"""
before = Source(before)
after = Source(after)
newsource = Source()
lines = [ (indent + line) for line in self.lines]
newsource.lines = before.lines + lines + after.lines
return newsource
def indent(self, indent=' ' * 4):
""" return a copy of the source object with
all lines indented by the given indent-string.
"""
newsource = Source()
newsource.lines = [(indent+line) for line in self.lines]
return newsource
def getstatement(self, lineno, assertion=False):
""" return Source statement which contains the
given linenumber (counted from 0).
"""
start, end = self.getstatementrange(lineno, assertion)
return self[start:end]
def getstatementrange(self, lineno, assertion=False):
""" return (start, end) tuple which spans the minimal
statement region which containing the given lineno.
"""
if not (0 <= lineno < len(self)):
raise IndexError("lineno out of range")
ast, start, end = getstatementrange_ast(lineno, self)
return start, end
def deindent(self, offset=None):
""" return a new source object deindented by offset.
If offset is None then guess an indentation offset from
the first non-blank line. Subsequent lines which have a
lower indentation offset will be copied verbatim as
they are assumed to be part of multiline constructs.
"""
# XXX maybe use the tokenizer to properly handle multiline
# strings etc.pp?
newsource = Source()
newsource.lines[:] = deindent(self.lines, offset)
return newsource
def isparseable(self, deindent=True):
""" return True if source is parseable, heuristically
deindenting it by default.
"""
try:
import parser
except ImportError:
syntax_checker = lambda x: compile(x, 'asd', 'exec')
else:
syntax_checker = parser.suite
if deindent:
source = str(self.deindent())
else:
source = str(self)
try:
#compile(source+'\n', "x", "exec")
syntax_checker(source+'\n')
except KeyboardInterrupt:
raise
except Exception:
return False
else:
return True
def __str__(self):
return "\n".join(self.lines)
def compile(self, filename=None, mode='exec',
flag=generators.compiler_flag,
dont_inherit=0, _genframe=None):
""" return compiled code object. if filename is None
invent an artificial filename which displays
the source/line position of the caller frame.
"""
if not filename or py.path.local(filename).check(file=0):
if _genframe is None:
_genframe = sys._getframe(1) # the caller
fn,lineno = _genframe.f_code.co_filename, _genframe.f_lineno
base = "<%d-codegen " % self._compilecounter
self.__class__._compilecounter += 1
if not filename:
filename = base + '%s:%d>' % (fn, lineno)
else:
filename = base + '%r %s:%d>' % (filename, fn, lineno)
source = "\n".join(self.lines) + '\n'
try:
co = cpy_compile(source, filename, mode, flag)
except SyntaxError:
ex = sys.exc_info()[1]
# re-represent syntax errors from parsing python strings
msglines = self.lines[:ex.lineno]
if ex.offset:
msglines.append(" "*ex.offset + '^')
msglines.append("(code was compiled probably from here: %s)" % filename)
newex = SyntaxError('\n'.join(msglines))
newex.offset = ex.offset
newex.lineno = ex.lineno
newex.text = ex.text
raise newex
else:
if flag & _AST_FLAG:
return co
lines = [(x + "\n") for x in self.lines]
py.std.linecache.cache[filename] = (1, None, lines, filename)
return co
#
# public API shortcut functions
#
def compile_(source, filename=None, mode='exec', flags=
generators.compiler_flag, dont_inherit=0):
""" compile the given source to a raw code object,
and maintain an internal cache which allows later
retrieval of the source code for the code object
and any recursively created code objects.
"""
if _ast is not None and isinstance(source, _ast.AST):
# XXX should Source support having AST?
return cpy_compile(source, filename, mode, flags, dont_inherit)
_genframe = sys._getframe(1) # the caller
s = Source(source)
co = s.compile(filename, mode, flags, _genframe=_genframe)
return co
def getfslineno(obj):
""" Return source location (path, lineno) for the given object.
If the source cannot be determined return ("", -1)
"""
import _pytest._code
try:
code = _pytest._code.Code(obj)
except TypeError:
try:
fn = (py.std.inspect.getsourcefile(obj) or
py.std.inspect.getfile(obj))
except TypeError:
return "", -1
fspath = fn and py.path.local(fn) or None
lineno = -1
if fspath:
try:
_, lineno = findsource(obj)
except IOError:
pass
else:
fspath = code.path
lineno = code.firstlineno
assert isinstance(lineno, int)
return fspath, lineno
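# Hypothetical usage sketch: resolve where an object's source lives; the
# fallback path returns (fspath, -1) when only the file, not the line,
# can be determined.
#
#     fspath, lineno = getfslineno(getfslineno)
#     assert lineno >= 0 and str(fspath).endswith(".py")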
#
# helper functions
#
def findsource(obj):
try:
sourcelines, lineno = py.std.inspect.findsource(obj)
except py.builtin._sysex:
raise
except:
return None, -1
source = Source()
source.lines = [line.rstrip() for line in sourcelines]
return source, lineno
def getsource(obj, **kwargs):
import _pytest._code
obj = _pytest._code.getrawcode(obj)
try:
strsrc = inspect.getsource(obj)
except IndentationError:
strsrc = "\"Buggy python version consider upgrading, cannot get source\""
assert isinstance(strsrc, str)
return Source(strsrc, **kwargs)
def deindent(lines, offset=None):
if offset is None:
for line in lines:
line = line.expandtabs()
s = line.lstrip()
if s:
offset = len(line)-len(s)
break
else:
offset = 0
if offset == 0:
return list(lines)
newlines = []
def readline_generator(lines):
for line in lines:
yield line + '\n'
while True:
yield ''
it = readline_generator(lines)
try:
for _, _, (sline, _), (eline, _), _ in tokenize.generate_tokens(lambda: next(it)):
if sline > len(lines):
break # End of input reached
if sline > len(newlines):
line = lines[sline - 1].expandtabs()
if line.lstrip() and line[:offset].isspace():
line = line[offset:] # Deindent
newlines.append(line)
for i in range(sline, eline):
# Don't deindent continuing lines of
# multiline tokens (i.e. multiline strings)
newlines.append(lines[i])
except (IndentationError, tokenize.TokenError):
pass
# Add any lines we didn't see. E.g. if an exception was raised.
newlines.extend(lines[len(newlines):])
return newlines
def get_statement_startend2(lineno, node):
import ast
# flatten all statements and except handlers into one lineno-list
# AST's line numbers start indexing at 1
l = []
for x in ast.walk(node):
if isinstance(x, _ast.stmt) or isinstance(x, _ast.ExceptHandler):
l.append(x.lineno - 1)
for name in "finalbody", "orelse":
val = getattr(x, name, None)
if val:
# treat the finally/orelse part as its own statement
l.append(val[0].lineno - 1 - 1)
l.sort()
insert_index = bisect_right(l, lineno)
start = l[insert_index - 1]
if insert_index >= len(l):
end = None
else:
end = l[insert_index]
return start, end
def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
if astnode is None:
content = str(source)
if sys.version_info < (2,7):
content += "\n"
try:
astnode = compile(content, "source", "exec", 1024) # 1024 for AST
except ValueError:
start, end = getstatementrange_old(lineno, source, assertion)
return None, start, end
start, end = get_statement_startend2(lineno, astnode)
# we need to correct the end:
# - ast-parsing strips comments
# - there might be empty lines
# - we might have lesser indented code blocks at the end
if end is None:
end = len(source.lines)
if end > start + 1:
# make sure we don't span differently indented code blocks
# by using the BlockFinder helper that inspect.getsource() itself uses
block_finder = inspect.BlockFinder()
# if we start with an indented line, put blockfinder to "started" mode
block_finder.started = source.lines[start][0].isspace()
it = ((x + "\n") for x in source.lines[start:end])
try:
for tok in tokenize.generate_tokens(lambda: next(it)):
block_finder.tokeneater(*tok)
except (inspect.EndOfBlock, IndentationError):
end = block_finder.last + start
except Exception:
pass
# the end might still point to a comment or empty line, correct it
while end:
line = source.lines[end - 1].lstrip()
if line.startswith("#") or not line:
end -= 1
else:
break
return astnode, start, end
def getstatementrange_old(lineno, source, assertion=False):
""" return (start, end) tuple which spans the minimal
statement region containing the given lineno.
raise an IndexError if no such statement range can be found.
"""
# XXX this logic is only used on python2.4 and below
# 1. find the start of the statement
from codeop import compile_command
for start in range(lineno, -1, -1):
if assertion:
line = source.lines[start]
# the following lines are not fully tested, change with care
if 'super' in line and 'self' in line and '__init__' in line:
raise IndexError("likely a subclass")
if "assert" not in line and "raise" not in line:
continue
trylines = source.lines[start:lineno+1]
# quick hack to prepare parsing an indented line with
# compile_command() (which errors on "return" outside defs)
trylines.insert(0, 'def xxx():')
trysource = '\n '.join(trylines)
# ^ space here
try:
compile_command(trysource)
except (SyntaxError, OverflowError, ValueError):
continue
# 2. find the end of the statement
for end in range(lineno+1, len(source)+1):
trysource = source[start:end]
if trysource.isparseable():
return start, end
raise SyntaxError("no valid source range around line %d " % (lineno,))
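
A brief usage sketch of the Source class above, assuming this vendored module is importable as _pytest._code.source; deindent() and getstatement() are the operations pytest leans on when it reports failing lines:

from _pytest._code.source import Source

src = Source("""
    def f():
        x = 1
        return x
""").strip()                  # constructor deindents; strip() drops blank edges
stmt = src.getstatement(1)    # minimal statement spanning line 1 (0-based)
print(str(stmt))              # -> "    x = 1"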

View File

@@ -1,11 +0,0 @@
"""
imports symbols from vendored "pluggy" if available, otherwise
falls back to importing "pluggy" from the default namespace.
"""
try:
from _pytest.vendored_packages.pluggy import * # noqa
from _pytest.vendored_packages.pluggy import __version__ # noqa
except ImportError:
from pluggy import * # noqa
from pluggy import __version__ # noqa

View File

@@ -1,164 +0,0 @@
"""
support for presenting detailed information in failing assertions.
"""
import py
import os
import sys
from _pytest.assertion import util
from _pytest.assertion import rewrite
def pytest_addoption(parser):
group = parser.getgroup("debugconfig")
group.addoption('--assert',
action="store",
dest="assertmode",
choices=("rewrite", "plain",),
default="rewrite",
metavar="MODE",
help="""Control assertion debugging tools. 'plain'
performs no assertion debugging. 'rewrite'
(the default) rewrites assert statements in
test modules on import to provide assert
expression information.""")
def pytest_namespace():
return {'register_assert_rewrite': register_assert_rewrite}
def register_assert_rewrite(*names):
"""Register one or more module names to be rewritten on import.
This function will make sure that this module or all modules inside
the package will get their assert statements rewritten.
Thus you should make sure to call this before the module is
actually imported, usually in your __init__.py if you are a plugin
using a package.
:raise TypeError: if the given module names are not strings.
"""
for name in names:
if not isinstance(name, str):
msg = 'expected module names as *args, got {0} instead'
raise TypeError(msg.format(repr(names)))
for hook in sys.meta_path:
if isinstance(hook, rewrite.AssertionRewritingHook):
importhook = hook
break
else:
importhook = DummyRewriteHook()
importhook.mark_rewrite(*names)
class DummyRewriteHook(object):
"""A no-op import hook for when rewriting is disabled."""
def mark_rewrite(self, *names):
pass
class AssertionState:
"""State for the assertion plugin."""
def __init__(self, config, mode):
self.mode = mode
self.trace = config.trace.root.get("assertion")
self.hook = None
def install_importhook(config):
"""Try to install the rewrite hook, raise SystemError if it fails."""
# Both Jython and CPython 2.6.0 have AST bugs that make the
# assertion rewriting hook malfunction.
if (sys.platform.startswith('java') or
sys.version_info[:3] == (2, 6, 0)):
raise SystemError('rewrite not supported')
config._assertstate = AssertionState(config, 'rewrite')
config._assertstate.hook = hook = rewrite.AssertionRewritingHook(config)
sys.meta_path.insert(0, hook)
config._assertstate.trace('installed rewrite import hook')
def undo():
hook = config._assertstate.hook
if hook is not None and hook in sys.meta_path:
sys.meta_path.remove(hook)
config.add_cleanup(undo)
return hook
def pytest_collection(session):
# this hook is only called when test modules are collected
# so for example not in the master process of pytest-xdist
# (which does not collect test modules)
assertstate = getattr(session.config, '_assertstate', None)
if assertstate:
if assertstate.hook is not None:
assertstate.hook.set_session(session)
def _running_on_ci():
"""Check if we're currently running on a CI system."""
env_vars = ['CI', 'BUILD_NUMBER']
return any(var in os.environ for var in env_vars)
def pytest_runtest_setup(item):
"""Setup the pytest_assertrepr_compare hook
The newinterpret and rewrite modules will use util._reprcompare if
it exists to use custom reporting via the
pytest_assertrepr_compare hook. This sets up this custom
comparison for the test.
"""
def callbinrepr(op, left, right):
"""Call the pytest_assertrepr_compare hook and prepare the result
This uses the first result from the hook and then ensures the
following:
* Overly verbose explanations are dropped unless -vv was used or
running on a CI.
* Embedded newlines are escaped to help util.format_explanation()
later.
* If the rewrite mode is used embedded %-characters are replaced
to protect later % formatting.
The result can be formatted by util.format_explanation() for
pretty printing.
"""
hook_result = item.ihook.pytest_assertrepr_compare(
config=item.config, op=op, left=left, right=right)
for new_expl in hook_result:
if new_expl:
if (sum(len(p) for p in new_expl[1:]) > 80*8 and
item.config.option.verbose < 2 and
not _running_on_ci()):
show_max = 10
truncated_lines = len(new_expl) - show_max
new_expl[show_max:] = [py.builtin._totext(
'Detailed information truncated (%d more lines)'
', use "-vv" to show' % truncated_lines)]
new_expl = [line.replace("\n", "\\n") for line in new_expl]
res = py.builtin._totext("\n~").join(new_expl)
if item.config.getvalue("assertmode") == "rewrite":
res = res.replace("%", "%%")
return res
util._reprcompare = callbinrepr
def pytest_runtest_teardown(item):
util._reprcompare = None
def pytest_sessionfinish(session):
assertstate = getattr(session.config, '_assertstate', None)
if assertstate:
if assertstate.hook is not None:
assertstate.hook.set_session(None)
# Expose this plugin's implementation for the pytest_assertrepr_compare hook
pytest_assertrepr_compare = util.assertrepr_compare
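
For plugin authors, the machinery above is normally reached through pytest's public helper; a minimal sketch with a hypothetical package name:

# myplugin/__init__.py -- hypothetical plugin package
import pytest

# Request assert rewriting for helper modules before they are imported, so
# plain `assert` statements there also produce detailed failure messages.
pytest.register_assert_rewrite("myplugin.helpers")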

View File

@@ -1,945 +0,0 @@
"""Rewrite assertion AST to produce nice error messages"""
import ast
import _ast
import errno
import itertools
import imp
import marshal
import os
import re
import struct
import sys
import types
from fnmatch import fnmatch
import py
from _pytest.assertion import util
# pytest caches rewritten pycs in __pycache__.
if hasattr(imp, "get_tag"):
PYTEST_TAG = imp.get_tag() + "-PYTEST"
else:
if hasattr(sys, "pypy_version_info"):
impl = "pypy"
elif sys.platform == "java":
impl = "jython"
else:
impl = "cpython"
ver = sys.version_info
PYTEST_TAG = "%s-%s%s-PYTEST" % (impl, ver[0], ver[1])
del ver, impl
PYC_EXT = ".py" + (__debug__ and "c" or "o")
PYC_TAIL = "." + PYTEST_TAG + PYC_EXT
REWRITE_NEWLINES = sys.version_info[:2] != (2, 7) and sys.version_info < (3, 2)
ASCII_IS_DEFAULT_ENCODING = sys.version_info[0] < 3
if sys.version_info >= (3,5):
ast_Call = ast.Call
else:
ast_Call = lambda a,b,c: ast.Call(a, b, c, None, None)
class AssertionRewritingHook(object):
"""PEP302 Import hook which rewrites asserts."""
def __init__(self, config):
self.config = config
self.fnpats = config.getini("python_files")
self.session = None
self.modules = {}
self._rewritten_names = set()
self._register_with_pkg_resources()
self._must_rewrite = set()
def set_session(self, session):
self.session = session
def find_module(self, name, path=None):
state = self.config._assertstate
state.trace("find_module called for: %s" % name)
names = name.rsplit(".", 1)
lastname = names[-1]
pth = None
if path is not None:
# Starting with Python 3.3, path is a _NamespacePath(), which
# causes problems if not converted to list.
path = list(path)
if len(path) == 1:
pth = path[0]
if pth is None:
try:
fd, fn, desc = imp.find_module(lastname, path)
except ImportError:
return None
if fd is not None:
fd.close()
tp = desc[2]
if tp == imp.PY_COMPILED:
if hasattr(imp, "source_from_cache"):
try:
fn = imp.source_from_cache(fn)
except ValueError:
# Python 3 doesn't like orphaned but still-importable
# .pyc files.
fn = fn[:-1]
else:
fn = fn[:-1]
elif tp != imp.PY_SOURCE:
# Don't know what this is.
return None
else:
fn = os.path.join(pth, name.rpartition(".")[2] + ".py")
fn_pypath = py.path.local(fn)
if not self._should_rewrite(name, fn_pypath, state):
return None
self._rewritten_names.add(name)
# The requested module looks like a test file, so rewrite it. This is
# the most magical part of the process: load the source, rewrite the
# asserts, and load the rewritten source. We also cache the rewritten
# module code in a special pyc. We must be aware of the possibility of
# concurrent pytest processes rewriting and loading pycs. To avoid
# tricky race conditions, we maintain the following invariant: The
# cached pyc is always a complete, valid pyc. Operations on it must be
# atomic. POSIX's atomic rename comes in handy.
write = not sys.dont_write_bytecode
cache_dir = os.path.join(fn_pypath.dirname, "__pycache__")
if write:
try:
os.mkdir(cache_dir)
except OSError:
e = sys.exc_info()[1].errno
if e == errno.EEXIST:
# Either the __pycache__ directory already exists (the
# common case) or it's blocked by a non-dir node. In the
# latter case, we'll ignore it in _write_pyc.
pass
elif e in [errno.ENOENT, errno.ENOTDIR]:
# One of the path components was not a directory, likely
# because we're in a zip file.
write = False
elif e in [errno.EACCES, errno.EROFS, errno.EPERM]:
state.trace("read only directory: %r" % fn_pypath.dirname)
write = False
else:
raise
cache_name = fn_pypath.basename[:-3] + PYC_TAIL
pyc = os.path.join(cache_dir, cache_name)
# Notice that even if we're in a read-only directory, I'm going
# to check for a cached pyc. This may not be optimal...
co = _read_pyc(fn_pypath, pyc, state.trace)
if co is None:
state.trace("rewriting %r" % (fn,))
source_stat, co = _rewrite_test(self.config, fn_pypath)
if co is None:
# Probably a SyntaxError in the test.
return None
if write:
_make_rewritten_pyc(state, source_stat, pyc, co)
else:
state.trace("found cached rewritten pyc for %r" % (fn,))
self.modules[name] = co, pyc
return self
def _should_rewrite(self, name, fn_pypath, state):
# always rewrite conftest files
fn = str(fn_pypath)
if fn_pypath.basename == 'conftest.py':
state.trace("rewriting conftest file: %r" % (fn,))
return True
if self.session is not None:
if self.session.isinitpath(fn):
state.trace("matched test file (was specified on cmdline): %r" %
(fn,))
return True
# modules not passed explicitly on the command line are only
# rewritten if they match the naming convention for test files
for pat in self.fnpats:
# use fnmatch instead of fn_pypath.fnmatch because the
# latter might trigger an import to fnmatch.fnmatch
# internally, which would cause this method to be
# called recursively
if fnmatch(fn_pypath.basename, pat):
state.trace("matched test file %r" % (fn,))
return True
for marked in self._must_rewrite:
if name.startswith(marked):
state.trace("matched marked file %r (from %r)" % (name, marked))
return True
return False
def mark_rewrite(self, *names):
"""Mark import names as needing to be re-written.
The named module or package as well as any nested modules will
be re-written on import.
"""
already_imported = set(names).intersection(set(sys.modules))
if already_imported:
for name in already_imported:
if name not in self._rewritten_names:
self._warn_already_imported(name)
self._must_rewrite.update(names)
def _warn_already_imported(self, name):
self.config.warn(
'P1',
'Module already imported so can not be re-written: %s' % name)
def load_module(self, name):
# If there is an existing module object named 'fullname' in
# sys.modules, the loader must use that existing module. (Otherwise,
# the reload() builtin will not work correctly.)
if name in sys.modules:
return sys.modules[name]
co, pyc = self.modules.pop(name)
# I wish I could just call imp.load_compiled here, but __file__ has to
# be set properly. In Python 3.2+, this all would be handled correctly
# by load_compiled.
mod = sys.modules[name] = imp.new_module(name)
try:
mod.__file__ = co.co_filename
# Normally, this attribute is 3.2+.
mod.__cached__ = pyc
mod.__loader__ = self
py.builtin.exec_(co, mod.__dict__)
except:
del sys.modules[name]
raise
return sys.modules[name]
def is_package(self, name):
try:
fd, fn, desc = imp.find_module(name)
except ImportError:
return False
if fd is not None:
fd.close()
tp = desc[2]
return tp == imp.PKG_DIRECTORY
@classmethod
def _register_with_pkg_resources(cls):
"""
Ensure package resources can be loaded from this loader. May be called
multiple times, as the operation is idempotent.
"""
try:
import pkg_resources
# access an attribute in case a deferred importer is present
pkg_resources.__name__
except ImportError:
return
# Since pytest tests are always located in the file system, the
# DefaultProvider is appropriate.
pkg_resources.register_loader_type(cls, pkg_resources.DefaultProvider)
def get_data(self, pathname):
"""Optional PEP302 get_data API.
"""
with open(pathname, 'rb') as f:
return f.read()
def _write_pyc(state, co, source_stat, pyc):
# Technically, we don't have to have the same pyc format as
# (C)Python, since these "pycs" should never be seen by builtin
# import. However, there's little reason to deviate, and I hope
# sometime to be able to use imp.load_compiled to load them. (See
# the comment in load_module above.)
try:
fp = open(pyc, "wb")
except IOError:
err = sys.exc_info()[1].errno
state.trace("error writing pyc file at %s: errno=%s" % (pyc, err))
# we ignore any failure to write the cache file
# there are many reasons, permission-denied, __pycache__ being a
# file etc.
return False
try:
fp.write(imp.get_magic())
mtime = int(source_stat.mtime)
size = source_stat.size & 0xFFFFFFFF
fp.write(struct.pack("<ll", mtime, size))
marshal.dump(co, fp)
finally:
fp.close()
return True
RN = "\r\n".encode("utf-8")
N = "\n".encode("utf-8")
cookie_re = re.compile(r"^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+")
BOM_UTF8 = '\xef\xbb\xbf'
def _rewrite_test(config, fn):
"""Try to read and rewrite *fn* and return the code object."""
state = config._assertstate
try:
stat = fn.stat()
source = fn.read("rb")
except EnvironmentError:
return None, None
if ASCII_IS_DEFAULT_ENCODING:
# ASCII is the default encoding in Python 2. Without a coding
# declaration, Python 2 will complain about any bytes in the file
# outside the ASCII range. Sadly, this behavior does not extend to
# compile() or ast.parse(), which prefer to interpret the bytes as
# latin-1. (At least they properly handle explicit coding cookies.) To
# preserve this error behavior, we could force ast.parse() to use ASCII
# as the encoding by inserting a coding cookie. Unfortunately, that
# messes up line numbers. Thus, we have to check ourselves if anything
# is outside the ASCII range in the case no encoding is explicitly
# declared. For more context, see issue #269. Yay for Python 3 which
# gets this right.
end1 = source.find("\n")
end2 = source.find("\n", end1 + 1)
if (not source.startswith(BOM_UTF8) and
cookie_re.match(source[0:end1]) is None and
cookie_re.match(source[end1 + 1:end2]) is None):
if hasattr(state, "_indecode"):
# encodings imported us again, so don't rewrite.
return None, None
state._indecode = True
try:
try:
source.decode("ascii")
except UnicodeDecodeError:
# Let it fail in real import.
return None, None
finally:
del state._indecode
# On Python versions other than 2.7 that are older than 3.2, the
# parser expects *nix newlines.
if REWRITE_NEWLINES:
source = source.replace(RN, N) + N
try:
tree = ast.parse(source)
except SyntaxError:
# Let this pop up again in the real import.
state.trace("failed to parse: %r" % (fn,))
return None, None
rewrite_asserts(tree, fn, config)
try:
co = compile(tree, fn.strpath, "exec")
except SyntaxError:
# It's possible that this error is from some bug in the
# assertion rewriting, but I don't know of a fast way to tell.
state.trace("failed to compile: %r" % (fn,))
return None, None
return stat, co
def _make_rewritten_pyc(state, source_stat, pyc, co):
"""Try to dump rewritten code to *pyc*."""
if sys.platform.startswith("win"):
# Windows grants exclusive access to open files and doesn't have atomic
# rename, so just write into the final file.
_write_pyc(state, co, source_stat, pyc)
else:
# When not on windows, assume rename is atomic. Dump the code object
# into a file specific to this process and atomically replace it.
proc_pyc = pyc + "." + str(os.getpid())
if _write_pyc(state, co, source_stat, proc_pyc):
os.rename(proc_pyc, pyc)
def _read_pyc(source, pyc, trace=lambda x: None):
"""Possibly read a pytest pyc containing rewritten code.
Return rewritten code if successful or None if not.
"""
try:
fp = open(pyc, "rb")
except IOError:
return None
with fp:
try:
mtime = int(source.mtime())
size = source.size()
data = fp.read(12)
except EnvironmentError as e:
trace('_read_pyc(%s): EnvironmentError %s' % (source, e))
return None
# Check for invalid or out of date pyc file.
if (len(data) != 12 or data[:4] != imp.get_magic() or
struct.unpack("<ll", data[4:]) != (mtime, size)):
trace('_read_pyc(%s): invalid or out of date pyc' % source)
return None
try:
co = marshal.load(fp)
except Exception as e:
trace('_read_pyc(%s): marshal.load error %s' % (source, e))
return None
if not isinstance(co, types.CodeType):
trace('_read_pyc(%s): not a code object' % source)
return None
return co
def rewrite_asserts(mod, module_path=None, config=None):
"""Rewrite the assert statements in mod."""
AssertionRewriter(module_path, config).run(mod)
def _saferepr(obj):
"""Get a safe repr of an object for assertion error messages.
The assertion formatting (util.format_explanation()) requires
newlines to be escaped since they are a special character for it.
Normally assertion.util.format_explanation() does this but for a
custom repr it is possible for it to contain one of the special escape
sequences; in particular '\n{' and '\n}' are likely to be present in
JSON reprs.
"""
repr = py.io.saferepr(obj)
if py.builtin._istext(repr):
t = py.builtin.text
else:
t = py.builtin.bytes
return repr.replace(t("\n"), t("\\n"))
from _pytest.assertion.util import format_explanation as _format_explanation # noqa
def _format_assertmsg(obj):
"""Format the custom assertion message given.
For strings this simply replaces newlines with '\n~' so that
util.format_explanation() will preserve them instead of escaping
newlines. For other objects py.io.saferepr() is used first.
"""
# reprlib appears to have a bug which means that if a string
# contains a newline it gets escaped; however, if an object has a
# .__repr__() which contains newlines it does not get escaped.
# In either case we want to preserve the newline.
if py.builtin._istext(obj) or py.builtin._isbytes(obj):
s = obj
is_repr = False
else:
s = py.io.saferepr(obj)
is_repr = True
if py.builtin._istext(s):
t = py.builtin.text
else:
t = py.builtin.bytes
s = s.replace(t("\n"), t("\n~")).replace(t("%"), t("%%"))
if is_repr:
s = s.replace(t("\\n"), t("\n~"))
return s
def _should_repr_global_name(obj):
return not hasattr(obj, "__name__") and not py.builtin.callable(obj)
def _format_boolop(explanations, is_or):
explanation = "(" + (is_or and " or " or " and ").join(explanations) + ")"
if py.builtin._istext(explanation):
t = py.builtin.text
else:
t = py.builtin.bytes
return explanation.replace(t('%'), t('%%'))
def _call_reprcompare(ops, results, expls, each_obj):
for i, res, expl in zip(range(len(ops)), results, expls):
try:
done = not res
except Exception:
done = True
if done:
break
if util._reprcompare is not None:
custom = util._reprcompare(ops[i], each_obj[i], each_obj[i + 1])
if custom is not None:
return custom
return expl
unary_map = {
ast.Not: "not %s",
ast.Invert: "~%s",
ast.USub: "-%s",
ast.UAdd: "+%s"
}
binop_map = {
ast.BitOr: "|",
ast.BitXor: "^",
ast.BitAnd: "&",
ast.LShift: "<<",
ast.RShift: ">>",
ast.Add: "+",
ast.Sub: "-",
ast.Mult: "*",
ast.Div: "/",
ast.FloorDiv: "//",
ast.Mod: "%%", # escaped for string formatting
ast.Eq: "==",
ast.NotEq: "!=",
ast.Lt: "<",
ast.LtE: "<=",
ast.Gt: ">",
ast.GtE: ">=",
ast.Pow: "**",
ast.Is: "is",
ast.IsNot: "is not",
ast.In: "in",
ast.NotIn: "not in"
}
# Python 3.5+ compatibility
try:
binop_map[ast.MatMult] = "@"
except AttributeError:
pass
# Python 3.4+ compatibility
if hasattr(ast, "NameConstant"):
_NameConstant = ast.NameConstant
else:
def _NameConstant(c):
return ast.Name(str(c), ast.Load())
def set_location(node, lineno, col_offset):
"""Set node location information recursively."""
def _fix(node, lineno, col_offset):
if "lineno" in node._attributes:
node.lineno = lineno
if "col_offset" in node._attributes:
node.col_offset = col_offset
for child in ast.iter_child_nodes(node):
_fix(child, lineno, col_offset)
_fix(node, lineno, col_offset)
return node
class AssertionRewriter(ast.NodeVisitor):
"""Assertion rewriting implementation.
The main entrypoint is to call .run() with an ast.Module instance,
this will then find all the assert statements and re-write them to
provide intermediate values and a detailed assertion error. See
http://pybites.blogspot.be/2011/07/behind-scenes-of-pytests-new-assertion.html
for an overview of how this works.
The entry point here is .run() which will iterate over all the
statements in an ast.Module and for each ast.Assert statement it
finds call .visit() with it. Then .visit_Assert() takes over and
is responsible for creating new ast statements to replace the
original assert statement: it re-writes the test of an assertion
to provide intermediate values and replace it with an if statement
which raises an assertion error with a detailed explanation in
case the expression is false.
For this .visit_Assert() uses the visitor pattern to visit all the
AST nodes of the ast.Assert.test field, each visit call returning
an AST node and the corresponding explanation string. During this
state is kept in several instance attributes:
:statements: All the AST statements which will replace the assert
statement.
:variables: This is populated by .variable() with each variable
used by the statements so that they can all be set to None at
the end of the statements.
:variable_counter: Counter to create new unique variables needed
by statements. Variables are created using .variable() and
have the form of "@py_assert0".
:on_failure: The AST statements which will be executed if the
assertion test fails. This is the code which will construct
the failure message and raises the AssertionError.
:explanation_specifiers: A dict filled by .explanation_param()
with %-formatting placeholders and their corresponding
expressions to use in the building of an assertion message.
This is used by .pop_format_context() to build a message.
:stack: A stack of the explanation_specifiers dicts maintained by
.push_format_context() and .pop_format_context() which allows
building another %-formatted string while one is already being built.
This state is reset on every new assert statement visited and used
by the other visitors.
"""
def __init__(self, module_path, config):
super(AssertionRewriter, self).__init__()
self.module_path = module_path
self.config = config
def run(self, mod):
"""Find all assert statements in *mod* and rewrite them."""
if not mod.body:
# Nothing to do.
return
# Insert some special imports at the top of the module but after any
# docstrings and __future__ imports.
aliases = [ast.alias(py.builtin.builtins.__name__, "@py_builtins"),
ast.alias("_pytest.assertion.rewrite", "@pytest_ar")]
expect_docstring = True
pos = 0
lineno = 0
for item in mod.body:
if (expect_docstring and isinstance(item, ast.Expr) and
isinstance(item.value, ast.Str)):
doc = item.value.s
if "PYTEST_DONT_REWRITE" in doc:
# The module has disabled assertion rewriting.
return
lineno += len(doc) - 1
expect_docstring = False
elif (not isinstance(item, ast.ImportFrom) or item.level > 0 or
item.module != "__future__"):
lineno = item.lineno
break
pos += 1
imports = [ast.Import([alias], lineno=lineno, col_offset=0)
for alias in aliases]
mod.body[pos:pos] = imports
# Collect asserts.
nodes = [mod]
while nodes:
node = nodes.pop()
for name, field in ast.iter_fields(node):
if isinstance(field, list):
new = []
for i, child in enumerate(field):
if isinstance(child, ast.Assert):
# Transform assert.
new.extend(self.visit(child))
else:
new.append(child)
if isinstance(child, ast.AST):
nodes.append(child)
setattr(node, name, new)
elif (isinstance(field, ast.AST) and
# Don't recurse into expressions as they can't contain
# asserts.
not isinstance(field, ast.expr)):
nodes.append(field)
def variable(self):
"""Get a new variable."""
# Use a character invalid in python identifiers to avoid clashing.
name = "@py_assert" + str(next(self.variable_counter))
self.variables.append(name)
return name
def assign(self, expr):
"""Give *expr* a name."""
name = self.variable()
self.statements.append(ast.Assign([ast.Name(name, ast.Store())], expr))
return ast.Name(name, ast.Load())
def display(self, expr):
"""Call py.io.saferepr on the expression."""
return self.helper("saferepr", expr)
def helper(self, name, *args):
"""Call a helper in this module."""
py_name = ast.Name("@pytest_ar", ast.Load())
attr = ast.Attribute(py_name, "_" + name, ast.Load())
return ast_Call(attr, list(args), [])
def builtin(self, name):
"""Return the builtin called *name*."""
builtin_name = ast.Name("@py_builtins", ast.Load())
return ast.Attribute(builtin_name, name, ast.Load())
def explanation_param(self, expr):
"""Return a new named %-formatting placeholder for expr.
This creates a %-formatting placeholder for expr in the
current formatting context, e.g. ``%(py0)s``. The placeholder
and expr are placed in the current format context so that it
can be used on the next call to .pop_format_context().
"""
specifier = "py" + str(next(self.variable_counter))
self.explanation_specifiers[specifier] = expr
return "%(" + specifier + ")s"
def push_format_context(self):
"""Create a new formatting context.
The format context is used for when an explanation wants to
have a variable value formatted in the assertion message. In
this case the value required can be added using
.explanation_param(). Finally .pop_format_context() is used
to format a string of %-formatted values as added by
.explanation_param().
"""
self.explanation_specifiers = {}
self.stack.append(self.explanation_specifiers)
def pop_format_context(self, expl_expr):
"""Format the %-formatted string with current format context.
The expl_expr should be an ast.Str instance constructed from
the %-placeholders created by .explanation_param(). This will
add the required code to format said string to .on_failure and
return the ast.Name instance of the formatted string.
"""
current = self.stack.pop()
if self.stack:
self.explanation_specifiers = self.stack[-1]
keys = [ast.Str(key) for key in current.keys()]
format_dict = ast.Dict(keys, list(current.values()))
form = ast.BinOp(expl_expr, ast.Mod(), format_dict)
name = "@py_format" + str(next(self.variable_counter))
self.on_failure.append(ast.Assign([ast.Name(name, ast.Store())], form))
return ast.Name(name, ast.Load())
def generic_visit(self, node):
"""Handle expressions we don't have custom code for."""
assert isinstance(node, ast.expr)
res = self.assign(node)
return res, self.explanation_param(self.display(res))
def visit_Assert(self, assert_):
"""Return the AST statements to replace the ast.Assert instance.
This re-writes the test of an assertion to provide
intermediate values and replace it with an if statement which
raises an assertion error with a detailed explanation in case
the expression is false.
"""
if isinstance(assert_.test, ast.Tuple) and self.config is not None:
fslocation = (self.module_path, assert_.lineno)
self.config.warn('R1', 'assertion is always true, perhaps '
'remove parentheses?', fslocation=fslocation)
self.statements = []
self.variables = []
self.variable_counter = itertools.count()
self.stack = []
self.on_failure = []
self.push_format_context()
# Rewrite assert into a bunch of statements.
top_condition, explanation = self.visit(assert_.test)
# Create failure message.
body = self.on_failure
negation = ast.UnaryOp(ast.Not(), top_condition)
self.statements.append(ast.If(negation, body, []))
if assert_.msg:
assertmsg = self.helper('format_assertmsg', assert_.msg)
explanation = "\n>assert " + explanation
else:
assertmsg = ast.Str("")
explanation = "assert " + explanation
template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation))
msg = self.pop_format_context(template)
fmt = self.helper("format_explanation", msg)
err_name = ast.Name("AssertionError", ast.Load())
exc = ast_Call(err_name, [fmt], [])
if sys.version_info[0] >= 3:
raise_ = ast.Raise(exc, None)
else:
raise_ = ast.Raise(exc, None, None)
body.append(raise_)
# Clear temporary variables by setting them to None.
if self.variables:
variables = [ast.Name(name, ast.Store())
for name in self.variables]
clear = ast.Assign(variables, _NameConstant(None))
self.statements.append(clear)
# Fix line numbers.
for stmt in self.statements:
set_location(stmt, assert_.lineno, assert_.col_offset)
return self.statements
def visit_Name(self, name):
# Display the repr of the name if it's a local variable or
# _should_repr_global_name() thinks it's acceptable.
locs = ast_Call(self.builtin("locals"), [], [])
inlocs = ast.Compare(ast.Str(name.id), [ast.In()], [locs])
dorepr = self.helper("should_repr_global_name", name)
test = ast.BoolOp(ast.Or(), [inlocs, dorepr])
expr = ast.IfExp(test, self.display(name), ast.Str(name.id))
return name, self.explanation_param(expr)
def visit_BoolOp(self, boolop):
res_var = self.variable()
expl_list = self.assign(ast.List([], ast.Load()))
app = ast.Attribute(expl_list, "append", ast.Load())
is_or = int(isinstance(boolop.op, ast.Or))
body = save = self.statements
fail_save = self.on_failure
levels = len(boolop.values) - 1
self.push_format_context()
# Process each operand, short-circuiting if needed.
for i, v in enumerate(boolop.values):
if i:
fail_inner = []
# cond is set in a prior loop iteration below
self.on_failure.append(ast.If(cond, fail_inner, [])) # noqa
self.on_failure = fail_inner
self.push_format_context()
res, expl = self.visit(v)
body.append(ast.Assign([ast.Name(res_var, ast.Store())], res))
expl_format = self.pop_format_context(ast.Str(expl))
call = ast_Call(app, [expl_format], [])
self.on_failure.append(ast.Expr(call))
if i < levels:
cond = res
if is_or:
cond = ast.UnaryOp(ast.Not(), cond)
inner = []
self.statements.append(ast.If(cond, inner, []))
self.statements = body = inner
self.statements = save
self.on_failure = fail_save
expl_template = self.helper("format_boolop", expl_list, ast.Num(is_or))
expl = self.pop_format_context(expl_template)
return ast.Name(res_var, ast.Load()), self.explanation_param(expl)
def visit_UnaryOp(self, unary):
pattern = unary_map[unary.op.__class__]
operand_res, operand_expl = self.visit(unary.operand)
res = self.assign(ast.UnaryOp(unary.op, operand_res))
return res, pattern % (operand_expl,)
def visit_BinOp(self, binop):
symbol = binop_map[binop.op.__class__]
left_expr, left_expl = self.visit(binop.left)
right_expr, right_expl = self.visit(binop.right)
explanation = "(%s %s %s)" % (left_expl, symbol, right_expl)
res = self.assign(ast.BinOp(left_expr, binop.op, right_expr))
return res, explanation
def visit_Call_35(self, call):
"""
visit `ast.Call` nodes on Python 3.5 and after
"""
new_func, func_expl = self.visit(call.func)
arg_expls = []
new_args = []
new_kwargs = []
for arg in call.args:
res, expl = self.visit(arg)
arg_expls.append(expl)
new_args.append(res)
for keyword in call.keywords:
res, expl = self.visit(keyword.value)
new_kwargs.append(ast.keyword(keyword.arg, res))
if keyword.arg:
arg_expls.append(keyword.arg + "=" + expl)
else:  # **kwargs-style arguments produce keyword nodes whose .arg is None
arg_expls.append("**" + expl)
expl = "%s(%s)" % (func_expl, ', '.join(arg_expls))
new_call = ast.Call(new_func, new_args, new_kwargs)
res = self.assign(new_call)
res_expl = self.explanation_param(self.display(res))
outer_expl = "%s\n{%s = %s\n}" % (res_expl, res_expl, expl)
return res, outer_expl
def visit_Starred(self, starred):
# From Python 3.5, a Starred node can appear in a function call
res, expl = self.visit(starred.value)
return starred, '*' + expl
def visit_Call_legacy(self, call):
"""
visit `ast.Call` nodes on Python 3.4 and below
"""
new_func, func_expl = self.visit(call.func)
arg_expls = []
new_args = []
new_kwargs = []
new_star = new_kwarg = None
for arg in call.args:
res, expl = self.visit(arg)
new_args.append(res)
arg_expls.append(expl)
for keyword in call.keywords:
res, expl = self.visit(keyword.value)
new_kwargs.append(ast.keyword(keyword.arg, res))
arg_expls.append(keyword.arg + "=" + expl)
if call.starargs:
new_star, expl = self.visit(call.starargs)
arg_expls.append("*" + expl)
if call.kwargs:
new_kwarg, expl = self.visit(call.kwargs)
arg_expls.append("**" + expl)
expl = "%s(%s)" % (func_expl, ', '.join(arg_expls))
new_call = ast.Call(new_func, new_args, new_kwargs,
new_star, new_kwarg)
res = self.assign(new_call)
res_expl = self.explanation_param(self.display(res))
outer_expl = "%s\n{%s = %s\n}" % (res_expl, res_expl, expl)
return res, outer_expl
# ast.Call signature changed on 3.5,
# conditionally choose which method is named
# visit_Call depending on Python version
if sys.version_info >= (3, 5):
visit_Call = visit_Call_35
else:
visit_Call = visit_Call_legacy
def visit_Attribute(self, attr):
if not isinstance(attr.ctx, ast.Load):
return self.generic_visit(attr)
value, value_expl = self.visit(attr.value)
res = self.assign(ast.Attribute(value, attr.attr, ast.Load()))
res_expl = self.explanation_param(self.display(res))
pat = "%s\n{%s = %s.%s\n}"
expl = pat % (res_expl, res_expl, value_expl, attr.attr)
return res, expl
def visit_Compare(self, comp):
self.push_format_context()
left_res, left_expl = self.visit(comp.left)
if isinstance(comp.left, (_ast.Compare, _ast.BoolOp)):
left_expl = "({0})".format(left_expl)
res_variables = [self.variable() for i in range(len(comp.ops))]
load_names = [ast.Name(v, ast.Load()) for v in res_variables]
store_names = [ast.Name(v, ast.Store()) for v in res_variables]
it = zip(range(len(comp.ops)), comp.ops, comp.comparators)
expls = []
syms = []
results = [left_res]
for i, op, next_operand in it:
next_res, next_expl = self.visit(next_operand)
if isinstance(next_operand, (_ast.Compare, _ast.BoolOp)):
next_expl = "({0})".format(next_expl)
results.append(next_res)
sym = binop_map[op.__class__]
syms.append(ast.Str(sym))
expl = "%s %s %s" % (left_expl, sym, next_expl)
expls.append(ast.Str(expl))
res_expr = ast.Compare(left_res, [op], [next_res])
self.statements.append(ast.Assign([store_names[i]], res_expr))
left_res, left_expl = next_res, next_expl
# Use pytest.assertion.util._reprcompare if that's available.
expl_call = self.helper("call_reprcompare",
ast.Tuple(syms, ast.Load()),
ast.Tuple(load_names, ast.Load()),
ast.Tuple(expls, ast.Load()),
ast.Tuple(results, ast.Load()))
if len(comp.ops) > 1:
res = ast.BoolOp(ast.And(), load_names)
else:
res = load_names[0]
return res, self.explanation_param(self.pop_format_context(expl_call))
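
To see the rewriter end to end, a hedged sketch (assumes this vendored _pytest package is importable; the exact message text may vary by version):

import ast
from _pytest.assertion.rewrite import rewrite_asserts

tree = ast.parse("assert 1 + 1 == 3")
rewrite_asserts(tree)                 # mutates the module AST in place
code = compile(tree, "<demo>", "exec")
try:
    exec(code)
except AssertionError as exc:
    print(exc)                        # e.g. "assert (1 + 1) == 3"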

View File

@@ -1,300 +0,0 @@
"""Utilities for assertion debugging"""
import pprint
import _pytest._code
import py
try:
from collections import Sequence
except ImportError:
Sequence = list
BuiltinAssertionError = py.builtin.builtins.AssertionError
u = py.builtin._totext
# The _reprcompare attribute on the util module is used by the new assertion
# interpretation code and assertion rewriter to detect this plugin was
# loaded and in turn call the hooks defined here as part of the
# DebugInterpreter.
_reprcompare = None
# the re-encoding is needed for python2 repr
# with non-ascii characters (see issue 877 and 1379)
def ecu(s):
try:
return u(s, 'utf-8', 'replace')
except TypeError:
return s
def format_explanation(explanation):
"""This formats an explanation
Normally all embedded newlines are escaped, however there are
three exceptions: \n{, \n} and \n~. The first two are intended to
cover nested explanations, see function and attribute explanations
for examples (.visit_Call(), .visit_Attribute()). The last one is
for when one explanation needs to span multiple lines, e.g. when
displaying diffs.
"""
explanation = ecu(explanation)
lines = _split_explanation(explanation)
result = _format_lines(lines)
return u('\n').join(result)
def _split_explanation(explanation):
"""Return a list of individual lines in the explanation
This will return a list of lines split on '\n{', '\n}' and '\n~'.
Any other newlines will be escaped and appear in the line as the
literal '\n' characters.
"""
raw_lines = (explanation or u('')).split('\n')
lines = [raw_lines[0]]
for l in raw_lines[1:]:
if l and l[0] in ['{', '}', '~', '>']:
lines.append(l)
else:
lines[-1] += '\\n' + l
return lines
def _format_lines(lines):
"""Format the individual lines
This will replace the '{', '}' and '~' characters of our mini
formatting language with the proper 'where ...', 'and ...' and ' +
...' text, taking care of indentation along the way.
Return a list of formatted lines.
"""
result = lines[:1]
stack = [0]
stackcnt = [0]
for line in lines[1:]:
if line.startswith('{'):
if stackcnt[-1]:
s = u('and ')
else:
s = u('where ')
stack.append(len(result))
stackcnt[-1] += 1
stackcnt.append(0)
result.append(u(' +') + u(' ')*(len(stack)-1) + s + line[1:])
elif line.startswith('}'):
stack.pop()
stackcnt.pop()
result[stack[-1]] += line[1:]
else:
assert line[0] in ['~', '>']
stack[-1] += 1
indent = len(stack) if line.startswith('~') else len(stack) - 1
result.append(u(' ')*indent + line[1:])
assert len(stack) == 1
return result
# Provide basestring in python3
try:
basestring = basestring
except NameError:
basestring = str
def assertrepr_compare(config, op, left, right):
"""Return specialised explanations for some operators/operands"""
width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
left_repr = py.io.saferepr(left, maxsize=int(width//2))
right_repr = py.io.saferepr(right, maxsize=width-len(left_repr))
summary = u('%s %s %s') % (ecu(left_repr), op, ecu(right_repr))
issequence = lambda x: (isinstance(x, (list, tuple, Sequence)) and
not isinstance(x, basestring))
istext = lambda x: isinstance(x, basestring)
isdict = lambda x: isinstance(x, dict)
isset = lambda x: isinstance(x, (set, frozenset))
def isiterable(obj):
try:
iter(obj)
return not istext(obj)
except TypeError:
return False
verbose = config.getoption('verbose')
explanation = None
try:
if op == '==':
if istext(left) and istext(right):
explanation = _diff_text(left, right, verbose)
else:
if issequence(left) and issequence(right):
explanation = _compare_eq_sequence(left, right, verbose)
elif isset(left) and isset(right):
explanation = _compare_eq_set(left, right, verbose)
elif isdict(left) and isdict(right):
explanation = _compare_eq_dict(left, right, verbose)
if isiterable(left) and isiterable(right):
expl = _compare_eq_iterable(left, right, verbose)
if explanation is not None:
explanation.extend(expl)
else:
explanation = expl
elif op == 'not in':
if istext(left) and istext(right):
explanation = _notin_text(left, right, verbose)
except Exception:
explanation = [
u('(pytest_assertion plugin: representation of details failed. '
'Probably an object has a faulty __repr__.)'),
u(_pytest._code.ExceptionInfo())]
if not explanation:
return None
return [summary] + explanation
def _diff_text(left, right, verbose=False):
"""Return the explanation for the diff between text or bytes
Unless --verbose is used this will skip leading and trailing
characters which are identical to keep the diff minimal.
If the inputs are bytes they will be safely converted to text.
"""
from difflib import ndiff
explanation = []
if isinstance(left, py.builtin.bytes):
left = u(repr(left)[1:-1]).replace(r'\n', '\n')
if isinstance(right, py.builtin.bytes):
right = u(repr(right)[1:-1]).replace(r'\n', '\n')
if not verbose:
i = 0 # just in case left or right has zero length
for i in range(min(len(left), len(right))):
if left[i] != right[i]:
break
if i > 42:
i -= 10 # Provide some context
explanation = [u('Skipping %s identical leading '
'characters in diff, use -v to show') % i]
left = left[i:]
right = right[i:]
if len(left) == len(right):
for i in range(len(left)):
if left[-i] != right[-i]:
break
if i > 42:
i -= 10 # Provide some context
explanation += [u('Skipping %s identical trailing '
'characters in diff, use -v to show') % i]
left = left[:-i]
right = right[:-i]
keepends = True
explanation += [line.strip('\n')
for line in ndiff(left.splitlines(keepends),
right.splitlines(keepends))]
return explanation
def _compare_eq_iterable(left, right, verbose=False):
if not verbose:
return [u('Use -v to get the full diff')]
# dynamic import to speedup pytest
import difflib
try:
left_formatting = pprint.pformat(left).splitlines()
right_formatting = pprint.pformat(right).splitlines()
explanation = [u('Full diff:')]
except Exception:
# hack: PrettyPrinter.pformat() in python 2 fails when formatting items that can't be sorted(), i.e. calling
# sorted() on a list would raise. See issue #718.
# As a workaround, the full diff is generated by using the repr() string of each item of each container.
left_formatting = sorted(repr(x) for x in left)
right_formatting = sorted(repr(x) for x in right)
explanation = [u('Full diff (fallback to calling repr on each item):')]
explanation.extend(line.strip() for line in difflib.ndiff(left_formatting, right_formatting))
return explanation
def _compare_eq_sequence(left, right, verbose=False):
explanation = []
for i in range(min(len(left), len(right))):
if left[i] != right[i]:
explanation += [u('At index %s diff: %r != %r')
% (i, left[i], right[i])]
break
if len(left) > len(right):
explanation += [u('Left contains more items, first extra item: %s')
% py.io.saferepr(left[len(right)],)]
elif len(left) < len(right):
explanation += [
u('Right contains more items, first extra item: %s') %
py.io.saferepr(right[len(left)],)]
return explanation
def _compare_eq_set(left, right, verbose=False):
explanation = []
diff_left = left - right
diff_right = right - left
if diff_left:
explanation.append(u('Extra items in the left set:'))
for item in diff_left:
explanation.append(py.io.saferepr(item))
if diff_right:
explanation.append(u('Extra items in the right set:'))
for item in diff_right:
explanation.append(py.io.saferepr(item))
return explanation
def _compare_eq_dict(left, right, verbose=False):
explanation = []
common = set(left).intersection(set(right))
same = dict((k, left[k]) for k in common if left[k] == right[k])
if same and not verbose:
explanation += [u('Omitting %s identical items, use -v to show') %
len(same)]
elif same:
explanation += [u('Common items:')]
explanation += pprint.pformat(same).splitlines()
diff = set(k for k in common if left[k] != right[k])
if diff:
explanation += [u('Differing items:')]
for k in diff:
explanation += [py.io.saferepr({k: left[k]}) + ' != ' +
py.io.saferepr({k: right[k]})]
extra_left = set(left) - set(right)
if extra_left:
explanation.append(u('Left contains more items:'))
explanation.extend(pprint.pformat(
dict((k, left[k]) for k in extra_left)).splitlines())
extra_right = set(right) - set(left)
if extra_right:
explanation.append(u('Right contains more items:'))
explanation.extend(pprint.pformat(
dict((k, right[k]) for k in extra_right)).splitlines())
return explanation
def _notin_text(term, text, verbose=False):
index = text.find(term)
head = text[:index]
tail = text[index+len(term):]
correct_text = head + tail
diff = _diff_text(correct_text, text, verbose)
newdiff = [u('%s is contained here:') % py.io.saferepr(term, maxsize=42)]
for line in diff:
if line.startswith(u('Skipping')):
continue
if line.startswith(u('- ')):
continue
if line.startswith(u('+ ')):
newdiff.append(u(' ') + line[2:])
else:
newdiff.append(line)
return newdiff
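
A quick illustration of the mini formatting language handled above, assuming the vendored module is importable; '\n{' opens a nested "where ..." explanation and '\n}' closes it:

from _pytest.assertion.util import format_explanation

print(format_explanation("assert res\n{res = f(2)\n}"))
# assert res
#  + where res = f(2)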

View File

@@ -1,245 +0,0 @@
"""
merged implementation of the cache provider
the name "cache" was deliberately not chosen, so that pluggy automatically
ignores the external pytest-cache plugin
"""
import py
import pytest
import json
from os.path import sep as _sep, altsep as _altsep
class Cache(object):
def __init__(self, config):
self.config = config
self._cachedir = config.rootdir.join(".cache")
self.trace = config.trace.root.get("cache")
if config.getvalue("cacheclear"):
self.trace("clearing cachedir")
if self._cachedir.check():
self._cachedir.remove()
self._cachedir.mkdir()
def makedir(self, name):
""" return a directory path object with the given name. If the
directory does not yet exist, it will be created. You can use it
to manage files, e.g. to store/retrieve database
dumps across test sessions.
:param name: must be a string not containing a ``/`` separator.
Make sure the name contains your plugin or application
identifiers to prevent clashes with other cache users.
"""
if _sep in name or _altsep is not None and _altsep in name:
raise ValueError("name is not allowed to contain path separators")
return self._cachedir.ensure_dir("d", name)
def _getvaluepath(self, key):
return self._cachedir.join('v', *key.split('/'))
def get(self, key, default):
""" return cached value for the given key. If no value
was yet cached or the value cannot be read, the specified
default is returned.
:param key: must be a ``/`` separated value. Usually the first
name is the name of your plugin or your application.
:param default: must be provided in case of a cache-miss or
invalid cache values.
"""
path = self._getvaluepath(key)
if path.check():
try:
with path.open("r") as f:
return json.load(f)
except ValueError:
self.trace("cache-invalid at %s" % (path,))
return default
def set(self, key, value):
""" save value for the given key.
:param key: must be a ``/`` separated value. Usually the first
name is the name of your plugin or your application.
:param value: must be of any combination of basic
python types, including nested types
like e. g. lists of dictionaries.
"""
path = self._getvaluepath(key)
try:
path.dirpath().ensure_dir()
except (py.error.EEXIST, py.error.EACCES):
self.config.warn(
code='I9', message='could not create cache path %s' % (path,)
)
return
try:
f = path.open('w')
except py.error.ENOTDIR:
self.config.warn(
code='I9', message='cache could not write path %s' % (path,))
else:
with f:
self.trace("cache-write %s: %r" % (key, value,))
json.dump(value, f, indent=2, sort_keys=True)
class LFPlugin:
""" Plugin which implements the --lf (run last-failing) option """
def __init__(self, config):
self.config = config
active_keys = 'lf', 'failedfirst'
self.active = any(config.getvalue(key) for key in active_keys)
if self.active:
self.lastfailed = config.cache.get("cache/lastfailed", {})
else:
self.lastfailed = {}
def pytest_report_header(self):
if self.active:
if not self.lastfailed:
mode = "run all (no recorded failures)"
else:
mode = "rerun last %d failures%s" % (
len(self.lastfailed),
" first" if self.config.getvalue("failedfirst") else "")
return "run-last-failure: %s" % mode
def pytest_runtest_logreport(self, report):
if report.failed and "xfail" not in report.keywords:
self.lastfailed[report.nodeid] = True
elif not report.failed:
if report.when == "call":
self.lastfailed.pop(report.nodeid, None)
def pytest_collectreport(self, report):
passed = report.outcome in ('passed', 'skipped')
if passed:
if report.nodeid in self.lastfailed:
self.lastfailed.pop(report.nodeid)
self.lastfailed.update(
(item.nodeid, True)
for item in report.result)
else:
self.lastfailed[report.nodeid] = True
def pytest_collection_modifyitems(self, session, config, items):
if self.active and self.lastfailed:
previously_failed = []
previously_passed = []
for item in items:
if item.nodeid in self.lastfailed:
previously_failed.append(item)
else:
previously_passed.append(item)
if not previously_failed and previously_passed:
# running a subset of all tests with recorded failures outside
# of the set of tests currently executing
pass
elif self.config.getvalue("failedfirst"):
items[:] = previously_failed + previously_passed
else:
items[:] = previously_failed
config.hook.pytest_deselected(items=previously_passed)
def pytest_sessionfinish(self, session):
config = self.config
if config.getvalue("cacheshow") or hasattr(config, "slaveinput"):
return
prev_failed = config.cache.get("cache/lastfailed", None) is not None
if (session.testscollected and prev_failed) or self.lastfailed:
config.cache.set("cache/lastfailed", self.lastfailed)
def pytest_addoption(parser):
group = parser.getgroup("general")
group.addoption(
'--lf', '--last-failed', action='store_true', dest="lf",
help="rerun only the tests that failed "
"at the last run (or all if none failed)")
group.addoption(
'--ff', '--failed-first', action='store_true', dest="failedfirst",
help="run all tests but run the last failures first. "
"This may re-order tests and thus lead to "
"repeated fixture setup/teardown")
group.addoption(
'--cache-show', action='store_true', dest="cacheshow",
help="show cache contents, don't perform collection or tests")
group.addoption(
'--cache-clear', action='store_true', dest="cacheclear",
help="remove all cache contents at start of test run.")
def pytest_cmdline_main(config):
if config.option.cacheshow:
from _pytest.main import wrap_session
return wrap_session(config, cacheshow)
@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
config.cache = Cache(config)
config.pluginmanager.register(LFPlugin(config), "lfplugin")
@pytest.fixture
def cache(request):
"""
Return a cache object that can persist state between testing sessions.
cache.get(key, default)
cache.set(key, value)
Keys must be a ``/`` separated value, where the first part is usually the
name of your plugin or application to avoid clashes with other cache users.
Values can be any object handled by the json stdlib module.
"""
return request.config.cache
def pytest_report_header(config):
if config.option.verbose:
relpath = py.path.local().bestrelpath(config.cache._cachedir)
return "cachedir: %s" % relpath
def cacheshow(config, session):
from pprint import pprint
tw = py.io.TerminalWriter()
tw.line("cachedir: " + str(config.cache._cachedir))
if not config.cache._cachedir.check():
tw.line("cache is empty")
return 0
dummy = object()
basedir = config.cache._cachedir
vdir = basedir.join("v")
tw.sep("-", "cache values")
for valpath in vdir.visit(lambda x: x.isfile()):
key = valpath.relto(vdir).replace(valpath.sep, "/")
val = config.cache.get(key, dummy)
if val is dummy:
tw.line("%s contains unreadable content, "
"will be ignored" % key)
else:
tw.line("%s contains:" % key)
stream = py.io.TextIO()
pprint(val, stream=stream)
for line in stream.getvalue().splitlines():
tw.line(" " + line)
ddir = basedir.join("d")
if ddir.isdir() and ddir.listdir():
tw.sep("-", "cache directories")
for p in basedir.join("d").visit():
#if p.check(dir=1):
# print("%s/" % p.relto(basedir))
if p.isfile():
key = p.relto(basedir)
tw.line("%s is a file of length %d" % (
key, p.size()))
return 0


@@ -1,491 +0,0 @@
"""
per-test stdout/stderr capturing mechanism.
"""
from __future__ import with_statement
import contextlib
import sys
import os
from tempfile import TemporaryFile
import py
import pytest
from py.io import TextIO
unicode = py.builtin.text
patchsysdict = {0: 'stdin', 1: 'stdout', 2: 'stderr'}
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption(
'--capture', action="store",
default="fd" if hasattr(os, "dup") else "sys",
metavar="method", choices=['fd', 'sys', 'no'],
help="per-test capturing method: one of fd|sys|no.")
group._addoption(
'-s', action="store_const", const="no", dest="capture",
help="shortcut for --capture=no.")
@pytest.hookimpl(hookwrapper=True)
def pytest_load_initial_conftests(early_config, parser, args):
_readline_workaround()
ns = early_config.known_args_namespace
pluginmanager = early_config.pluginmanager
capman = CaptureManager(ns.capture)
pluginmanager.register(capman, "capturemanager")
# make sure that capturemanager is properly reset at final shutdown
early_config.add_cleanup(capman.reset_capturings)
# make sure logging does not raise exceptions at the end
def silence_logging_at_shutdown():
if "logging" in sys.modules:
sys.modules["logging"].raiseExceptions = False
early_config.add_cleanup(silence_logging_at_shutdown)
# finally trigger conftest loading but while capturing (issue93)
capman.init_capturings()
outcome = yield
out, err = capman.suspendcapture()
if outcome.excinfo is not None:
sys.stdout.write(out)
sys.stderr.write(err)
class CaptureManager:
def __init__(self, method):
self._method = method
def _getcapture(self, method):
if method == "fd":
return MultiCapture(out=True, err=True, Capture=FDCapture)
elif method == "sys":
return MultiCapture(out=True, err=True, Capture=SysCapture)
elif method == "no":
return MultiCapture(out=False, err=False, in_=False)
else:
raise ValueError("unknown capturing method: %r" % method)
def init_capturings(self):
assert not hasattr(self, "_capturing")
self._capturing = self._getcapture(self._method)
self._capturing.start_capturing()
def reset_capturings(self):
cap = self.__dict__.pop("_capturing", None)
if cap is not None:
cap.pop_outerr_to_orig()
cap.stop_capturing()
def resumecapture(self):
self._capturing.resume_capturing()
def suspendcapture(self, in_=False):
self.deactivate_funcargs()
cap = getattr(self, "_capturing", None)
if cap is not None:
try:
outerr = cap.readouterr()
finally:
cap.suspend_capturing(in_=in_)
return outerr
def activate_funcargs(self, pyfuncitem):
capfuncarg = pyfuncitem.__dict__.pop("_capfuncarg", None)
if capfuncarg is not None:
capfuncarg._start()
self._capfuncarg = capfuncarg
def deactivate_funcargs(self):
capfuncarg = self.__dict__.pop("_capfuncarg", None)
if capfuncarg is not None:
capfuncarg.close()
@pytest.hookimpl(hookwrapper=True)
def pytest_make_collect_report(self, collector):
if isinstance(collector, pytest.File):
self.resumecapture()
outcome = yield
out, err = self.suspendcapture()
rep = outcome.get_result()
if out:
rep.sections.append(("Captured stdout", out))
if err:
rep.sections.append(("Captured stderr", err))
else:
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_setup(self, item):
self.resumecapture()
yield
self.suspendcapture_item(item, "setup")
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(self, item):
self.resumecapture()
self.activate_funcargs(item)
yield
#self.deactivate_funcargs() called from suspendcapture()
self.suspendcapture_item(item, "call")
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_teardown(self, item):
self.resumecapture()
yield
self.suspendcapture_item(item, "teardown")
@pytest.hookimpl(tryfirst=True)
def pytest_keyboard_interrupt(self, excinfo):
self.reset_capturings()
@pytest.hookimpl(tryfirst=True)
def pytest_internalerror(self, excinfo):
self.reset_capturings()
def suspendcapture_item(self, item, when, in_=False):
out, err = self.suspendcapture(in_=in_)
item.add_report_section(when, "stdout", out)
item.add_report_section(when, "stderr", err)
error_capsysfderror = "cannot use capsys and capfd at the same time"
@pytest.fixture
def capsys(request):
"""Enable capturing of writes to sys.stdout/sys.stderr and make
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple.
"""
if "capfd" in request.fixturenames:
raise request.raiseerror(error_capsysfderror)
request.node._capfuncarg = c = CaptureFixture(SysCapture, request)
return c
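# Usage sketch (illustrative): readouterr() snapshots what was written since
# the last snapshot; capfd below behaves the same way at the fd level.
#
#     def test_print(capsys):
#         print("hello")
#         out, err = capsys.readouterr()
#         assert out == "hello\n"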
@pytest.fixture
def capfd(request):
"""Enable capturing of writes to file descriptors 1 and 2 and make
captured output available via ``capfd.readouterr()`` method calls
which return a ``(out, err)`` tuple.
"""
if "capsys" in request.fixturenames:
request.raiseerror(error_capsysfderror)
if not hasattr(os, 'dup'):
pytest.skip("capfd funcarg needs os.dup")
request.node._capfuncarg = c = CaptureFixture(FDCapture, request)
return c
class CaptureFixture:
def __init__(self, captureclass, request):
self.captureclass = captureclass
self.request = request
def _start(self):
self._capture = MultiCapture(out=True, err=True, in_=False,
Capture=self.captureclass)
self._capture.start_capturing()
def close(self):
cap = self.__dict__.pop("_capture", None)
if cap is not None:
self._outerr = cap.pop_outerr_to_orig()
cap.stop_capturing()
def readouterr(self):
try:
return self._capture.readouterr()
except AttributeError:
return self._outerr
@contextlib.contextmanager
def disabled(self):
capmanager = self.request.config.pluginmanager.getplugin('capturemanager')
capmanager.suspendcapture_item(self.request.node, "call", in_=True)
try:
yield
finally:
capmanager.resumecapture()
def safe_text_dupfile(f, mode, default_encoding="UTF8"):
""" return a open text file object that's a duplicate of f on the
FD-level if possible.
"""
encoding = getattr(f, "encoding", None)
try:
fd = f.fileno()
except Exception:
if "b" not in getattr(f, "mode", "") and hasattr(f, "encoding"):
# we seem to have a text stream, let's just use it
return f
else:
newfd = os.dup(fd)
if "b" not in mode:
mode += "b"
f = os.fdopen(newfd, mode, 0) # no buffering
return EncodedFile(f, encoding or default_encoding)
class EncodedFile(object):
errors = "strict" # possibly needed by py3 code (issue555)
def __init__(self, buffer, encoding):
self.buffer = buffer
self.encoding = encoding
def write(self, obj):
if isinstance(obj, unicode):
obj = obj.encode(self.encoding, "replace")
self.buffer.write(obj)
def writelines(self, linelist):
data = ''.join(linelist)
self.write(data)
def __getattr__(self, name):
return getattr(object.__getattribute__(self, "buffer"), name)
class MultiCapture(object):
out = err = in_ = None
def __init__(self, out=True, err=True, in_=True, Capture=None):
if in_:
self.in_ = Capture(0)
if out:
self.out = Capture(1)
if err:
self.err = Capture(2)
def start_capturing(self):
if self.in_:
self.in_.start()
if self.out:
self.out.start()
if self.err:
self.err.start()
def pop_outerr_to_orig(self):
""" pop current snapshot out/err capture and flush to orig streams. """
out, err = self.readouterr()
if out:
self.out.writeorg(out)
if err:
self.err.writeorg(err)
return out, err
def suspend_capturing(self, in_=False):
if self.out:
self.out.suspend()
if self.err:
self.err.suspend()
if in_ and self.in_:
self.in_.suspend()
self._in_suspended = True
def resume_capturing(self):
if self.out:
self.out.resume()
if self.err:
self.err.resume()
if hasattr(self, "_in_suspended"):
self.in_.resume()
del self._in_suspended
def stop_capturing(self):
""" stop capturing and reset capturing streams """
if hasattr(self, '_reset'):
raise ValueError("was already stopped")
self._reset = True
if self.out:
self.out.done()
if self.err:
self.err.done()
if self.in_:
self.in_.done()
def readouterr(self):
""" return snapshot unicode value of stdout/stderr capturings. """
return (self.out.snap() if self.out is not None else "",
self.err.snap() if self.err is not None else "")
class NoCapture:
__init__ = start = done = suspend = resume = lambda *args: None
class FDCapture:
""" Capture IO to/from a given os-level filedescriptor. """
def __init__(self, targetfd, tmpfile=None):
self.targetfd = targetfd
try:
self.targetfd_save = os.dup(self.targetfd)
except OSError:
self.start = lambda: None
self.done = lambda: None
else:
if targetfd == 0:
assert not tmpfile, "cannot set tmpfile with stdin"
tmpfile = open(os.devnull, "r")
self.syscapture = SysCapture(targetfd)
else:
if tmpfile is None:
f = TemporaryFile()
with f:
tmpfile = safe_text_dupfile(f, mode="wb+")
if targetfd in patchsysdict:
self.syscapture = SysCapture(targetfd, tmpfile)
else:
self.syscapture = NoCapture()
self.tmpfile = tmpfile
self.tmpfile_fd = tmpfile.fileno()
def __repr__(self):
return "<FDCapture %s oldfd=%s>" % (self.targetfd, self.targetfd_save)
def start(self):
""" Start capturing on targetfd using memorized tmpfile. """
try:
os.fstat(self.targetfd_save)
except (AttributeError, OSError):
raise ValueError("saved filedescriptor not valid anymore")
os.dup2(self.tmpfile_fd, self.targetfd)
self.syscapture.start()
def snap(self):
f = self.tmpfile
f.seek(0)
res = f.read()
if res:
enc = getattr(f, "encoding", None)
if enc and isinstance(res, bytes):
res = py.builtin._totext(res, enc, "replace")
f.truncate(0)
f.seek(0)
return res
return ''
def done(self):
""" stop capturing, restore streams, return original capture file,
seeked to position zero. """
targetfd_save = self.__dict__.pop("targetfd_save")
os.dup2(targetfd_save, self.targetfd)
os.close(targetfd_save)
self.syscapture.done()
self.tmpfile.close()
def suspend(self):
self.syscapture.suspend()
os.dup2(self.targetfd_save, self.targetfd)
def resume(self):
self.syscapture.resume()
os.dup2(self.tmpfile_fd, self.targetfd)
def writeorg(self, data):
""" write to original file descriptor. """
if py.builtin._istext(data):
data = data.encode("utf8") # XXX use encoding of original stream
os.write(self.targetfd_save, data)
class SysCapture:
def __init__(self, fd, tmpfile=None):
name = patchsysdict[fd]
self._old = getattr(sys, name)
self.name = name
if tmpfile is None:
if name == "stdin":
tmpfile = DontReadFromInput()
else:
tmpfile = TextIO()
self.tmpfile = tmpfile
def start(self):
setattr(sys, self.name, self.tmpfile)
def snap(self):
f = self.tmpfile
res = f.getvalue()
f.truncate(0)
f.seek(0)
return res
def done(self):
setattr(sys, self.name, self._old)
del self._old
self.tmpfile.close()
def suspend(self):
setattr(sys, self.name, self._old)
def resume(self):
setattr(sys, self.name, self.tmpfile)
def writeorg(self, data):
self._old.write(data)
self._old.flush()
class DontReadFromInput:
"""Temporary stub class. Ideally when stdin is accessed, the
capturing should be turned off, with possibly all data captured
so far sent to the screen. This should be configurable, though,
because in automated test runs it is better to crash than
hang indefinitely.
"""
encoding = None
def read(self, *args):
raise IOError("reading from stdin while output is captured")
readline = read
readlines = read
__iter__ = read
def fileno(self):
raise ValueError("redirected Stdin is pseudofile, has no fileno()")
def isatty(self):
return False
def close(self):
pass
@property
def buffer(self):
if sys.version_info >= (3,0):
return self
else:
raise AttributeError('redirected stdin has no attribute buffer')
def _readline_workaround():
"""
Ensure readline is imported so that it attaches to the correct stdio
handles on Windows.
Pdb uses readline support where available--when not running from the Python
prompt, the readline module is not imported until running the pdb REPL. If
running pytest with the --pdb option this means the readline module is not
imported until after I/O capture has been started.
This is a problem for pyreadline, which is often used to implement readline
support on Windows, as it does not attach to the correct handles for stdout
and/or stdin if they have been redirected by the FDCapture mechanism. This
workaround ensures that readline is imported before I/O capture is setup so
that it can attach to the actual stdin/out for the console.
See https://github.com/pytest-dev/pytest/pull/1281
"""
if not sys.platform.startswith('win32'):
return
try:
import readline # noqa
except ImportError:
pass


@@ -1,230 +0,0 @@
"""
python version compatibility code
"""
import sys
import inspect
import types
import re
import functools
import py
import _pytest
try:
import enum
except ImportError: # pragma: no cover
# Only available in Python 3.4+ or as a backport
enum = None
_PY3 = sys.version_info > (3, 0)
_PY2 = not _PY3
NoneType = type(None)
NOTSET = object()
if hasattr(inspect, 'signature'):
def _format_args(func):
return str(inspect.signature(func))
else:
def _format_args(func):
return inspect.formatargspec(*inspect.getargspec(func))
isfunction = inspect.isfunction
isclass = inspect.isclass
# used to work around a python2 exception info leak
exc_clear = getattr(sys, 'exc_clear', lambda: None)
# The type of re.compile objects is not exposed in Python.
REGEX_TYPE = type(re.compile(''))
def is_generator(func):
try:
return _pytest._code.getrawcode(func).co_flags & 32 # generator function
except AttributeError: # builtin functions have no bytecode
# assume them to not be generators
return False
def getlocation(function, curdir):
import inspect
fn = py.path.local(inspect.getfile(function))
lineno = py.builtin._getcode(function).co_firstlineno
if fn.relto(curdir):
fn = fn.relto(curdir)
return "%s:%d" %(fn, lineno+1)
def num_mock_patch_args(function):
""" return number of arguments used up by mock arguments (if any) """
patchings = getattr(function, "patchings", None)
if not patchings:
return 0
mock = sys.modules.get("mock", sys.modules.get("unittest.mock", None))
if mock is not None:
return len([p for p in patchings
if not p.attribute_name and p.new is mock.DEFAULT])
return len(patchings)
def getfuncargnames(function, startindex=None):
# XXX merge with main.py's varnames
#assert not isclass(function)
realfunction = function
while hasattr(realfunction, "__wrapped__"):
realfunction = realfunction.__wrapped__
if startindex is None:
startindex = inspect.ismethod(function) and 1 or 0
if realfunction != function:
startindex += num_mock_patch_args(function)
function = realfunction
if isinstance(function, functools.partial):
argnames = inspect.getargs(_pytest._code.getrawcode(function.func))[0]
partial = function
argnames = argnames[len(partial.args):]
if partial.keywords:
for kw in partial.keywords:
argnames.remove(kw)
else:
argnames = inspect.getargs(_pytest._code.getrawcode(function))[0]
defaults = getattr(function, 'func_defaults',
getattr(function, '__defaults__', None)) or ()
numdefaults = len(defaults)
if numdefaults:
return tuple(argnames[startindex:-numdefaults])
return tuple(argnames[startindex:])
if sys.version_info[:2] == (2, 6):
def isclass(object):
""" Return true if the object is a class. Overrides inspect.isclass for
python 2.6 because it will return True for objects which always return
something on __getattr__ calls (see #1035).
Backport of https://hg.python.org/cpython/rev/35bf8f7a8edc
"""
return isinstance(object, (type, types.ClassType))
if _PY3:
import codecs
STRING_TYPES = bytes, str
def _escape_strings(val):
"""If val is pure ascii, returns it as a str(). Otherwise, escapes
bytes objects into a sequence of escaped bytes:
b'\xc3\xb4\xc5\xd6' -> u'\\xc3\\xb4\\xc5\\xd6'
and escapes unicode objects into a sequence of escaped unicode
ids, e.g.:
'4\\nV\\U00043efa\\x0eMXWB\\x1e\\u3028\\u15fd\\xcd\\U0007d944'
note:
the obvious "v.decode('unicode-escape')" will return
valid utf-8 unicode if it finds them in bytes, but we
want to return escaped bytes for any byte, even if they match
a utf-8 string.
"""
if isinstance(val, bytes):
if val:
# source: http://goo.gl/bGsnwC
encoded_bytes, _ = codecs.escape_encode(val)
return encoded_bytes.decode('ascii')
else:
# empty bytes crashes codecs.escape_encode (#1087)
return ''
else:
return val.encode('unicode_escape').decode('ascii')
else:
STRING_TYPES = bytes, str, unicode
def _escape_strings(val):
"""In py2 bytes and str are the same type, so return if it's a bytes
object, return it unchanged if it is a full ascii string,
otherwise escape it into its binary form.
If it's a unicode string, change the unicode characters into
unicode escapes.
"""
if isinstance(val, bytes):
try:
return val.encode('ascii')
except UnicodeDecodeError:
return val.encode('string-escape')
else:
return val.encode('unicode-escape')
def get_real_func(obj):
""" gets the real function object of the (possibly) wrapped object by
functools.wraps or functools.partial.
"""
while hasattr(obj, "__wrapped__"):
obj = obj.__wrapped__
if isinstance(obj, functools.partial):
obj = obj.func
return obj
def getfslineno(obj):
# xxx let decorators etc specify a sane ordering
obj = get_real_func(obj)
if hasattr(obj, 'place_as'):
obj = obj.place_as
fslineno = _pytest._code.getfslineno(obj)
assert isinstance(fslineno[1], int), obj
return fslineno
def getimfunc(func):
try:
return func.__func__
except AttributeError:
try:
return func.im_func
except AttributeError:
return func
def safe_getattr(object, name, default):
""" Like getattr but return default upon any Exception.
Attribute access can potentially fail for 'evil' Python objects.
See issue214
"""
try:
return getattr(object, name, default)
except Exception:
return default
def _is_unittest_unexpected_success_a_failure():
"""Return if the test suite should fail if a @expectedFailure unittest test PASSES.
From https://docs.python.org/3/library/unittest.html?highlight=unittest#unittest.TestResult.wasSuccessful:
Changed in version 3.4: Returns False if there were any
unexpectedSuccesses from tests marked with the expectedFailure() decorator.
"""
return sys.version_info >= (3, 4)
if _PY3:
def safe_str(v):
"""returns v as string"""
return str(v)
else:
def safe_str(v):
"""returns v as string, converting to ascii if necessary"""
try:
return str(v)
except UnicodeError:
errors = 'replace'
return v.encode('ascii', errors)

File diff suppressed because it is too large


@@ -1,124 +0,0 @@
""" interactive debugging with PDB, the Python Debugger. """
from __future__ import absolute_import
import pdb
import sys
import pytest
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption(
'--pdb', dest="usepdb", action="store_true",
help="start the interactive Python debugger on errors.")
group._addoption(
'--pdbcls', dest="usepdb_cls", metavar="modulename:classname",
help="start a custom interactive Python debugger on errors. "
"For example: --pdbcls=IPython.terminal.debugger:TerminalPdb")
def pytest_namespace():
return {'set_trace': pytestPDB().set_trace}
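# The namespace hook above exposes pytest.set_trace; usage sketch
# (illustrative):
#
#     import pytest
#
#     def test_something():
#         pytest.set_trace()  # drop into PDB, IO capturing suspended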
def pytest_configure(config):
if config.getvalue("usepdb") or config.getvalue("usepdb_cls"):
config.pluginmanager.register(PdbInvoke(), 'pdbinvoke')
if config.getvalue("usepdb_cls"):
modname, classname = config.getvalue("usepdb_cls").split(":")
__import__(modname)
pdb_cls = getattr(sys.modules[modname], classname)
else:
pdb_cls = pdb.Pdb
pytestPDB._pdb_cls = pdb_cls
old = (pdb.set_trace, pytestPDB._pluginmanager)
def fin():
pdb.set_trace, pytestPDB._pluginmanager = old
pytestPDB._config = None
pytestPDB._pdb_cls = pdb.Pdb
pdb.set_trace = pytest.set_trace
pytestPDB._pluginmanager = config.pluginmanager
pytestPDB._config = config
config._cleanup.append(fin)
class pytestPDB:
""" Pseudo PDB that defers to the real pdb. """
_pluginmanager = None
_config = None
_pdb_cls = pdb.Pdb
def set_trace(self):
""" invoke PDB set_trace debugging, dropping any IO capturing. """
import _pytest.config
frame = sys._getframe().f_back
if self._pluginmanager is not None:
capman = self._pluginmanager.getplugin("capturemanager")
if capman:
capman.suspendcapture(in_=True)
tw = _pytest.config.create_terminal_writer(self._config)
tw.line()
tw.sep(">", "PDB set_trace (IO-capturing turned off)")
self._pluginmanager.hook.pytest_enter_pdb(config=self._config)
self._pdb_cls().set_trace(frame)
class PdbInvoke:
def pytest_exception_interact(self, node, call, report):
capman = node.config.pluginmanager.getplugin("capturemanager")
if capman:
out, err = capman.suspendcapture(in_=True)
sys.stdout.write(out)
sys.stdout.write(err)
_enter_pdb(node, call.excinfo, report)
def pytest_internalerror(self, excrepr, excinfo):
for line in str(excrepr).split("\n"):
sys.stderr.write("INTERNALERROR> %s\n" %line)
sys.stderr.flush()
tb = _postmortem_traceback(excinfo)
post_mortem(tb)
def _enter_pdb(node, excinfo, rep):
# XXX we re-use the TerminalReporter's terminalwriter
# because this seems to avoid some encoding related troubles
# for not completely clear reasons.
tw = node.config.pluginmanager.getplugin("terminalreporter")._tw
tw.line()
tw.sep(">", "traceback")
rep.toterminal(tw)
tw.sep(">", "entering PDB")
tb = _postmortem_traceback(excinfo)
post_mortem(tb)
rep._pdbshown = True
return rep
def _postmortem_traceback(excinfo):
# A doctest.UnexpectedException is not useful for post_mortem.
# Use the underlying exception instead:
from doctest import UnexpectedException
if isinstance(excinfo.value, UnexpectedException):
return excinfo.value.exc_info[2]
else:
return excinfo._excinfo[2]
def _find_last_non_hidden_frame(stack):
i = max(0, len(stack) - 1)
while i and stack[i][0].f_locals.get("__tracebackhide__", False):
i -= 1
return i
def post_mortem(t):
class Pdb(pytestPDB._pdb_cls):
def get_stack(self, f, t):
stack, i = pdb.Pdb.get_stack(self, f, t)
if f is None:
i = _find_last_non_hidden_frame(stack)
return stack, i
p = Pdb()
p.reset()
p.interaction(None, t)


@@ -1,24 +0,0 @@
"""
This module contains deprecation messages and bits of code used elsewhere in the codebase
that are planned to be removed in the next pytest release.
Keeping it in a central location makes it easy to track what is deprecated and should
be removed when the time comes.
"""
MAIN_STR_ARGS = 'passing a string to pytest.main() is deprecated, ' \
'pass a list of arguments instead.'
YIELD_TESTS = 'yield tests are deprecated, and scheduled to be removed in pytest 4.0'
FUNCARG_PREFIX = (
'{name}: declaring fixtures using "pytest_funcarg__" prefix is deprecated '
'and scheduled to be removed in pytest 4.0. '
'Please remove the prefix and use the @pytest.fixture decorator instead.')
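# Migration sketch for the funcarg deprecation above (illustrative;
# make_data() is a made-up helper):
#
#     def pytest_funcarg__data(request):   # deprecated spelling
#         return make_data()
#
#     @pytest.fixture                      # preferred spelling
#     def data():
#         return make_data()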
SETUP_CFG_PYTEST = '[pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.'
GETFUNCARGVALUE = "use of getfuncargvalue is deprecated, use getfixturevalue"
RESULT_LOG = '--result-log is deprecated and scheduled for removal in pytest 4.0'


@@ -1,331 +0,0 @@
""" discover and run doctests in modules and test files."""
from __future__ import absolute_import
import traceback
import pytest
from _pytest._code.code import ExceptionInfo, ReprFileLocation, TerminalRepr
from _pytest.fixtures import FixtureRequest
DOCTEST_REPORT_CHOICE_NONE = 'none'
DOCTEST_REPORT_CHOICE_CDIFF = 'cdiff'
DOCTEST_REPORT_CHOICE_NDIFF = 'ndiff'
DOCTEST_REPORT_CHOICE_UDIFF = 'udiff'
DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE = 'only_first_failure'
DOCTEST_REPORT_CHOICES = (
DOCTEST_REPORT_CHOICE_NONE,
DOCTEST_REPORT_CHOICE_CDIFF,
DOCTEST_REPORT_CHOICE_NDIFF,
DOCTEST_REPORT_CHOICE_UDIFF,
DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE,
)
def pytest_addoption(parser):
parser.addini('doctest_optionflags', 'option flags for doctests',
type="args", default=["ELLIPSIS"])
group = parser.getgroup("collect")
group.addoption("--doctest-modules",
action="store_true", default=False,
help="run doctests in all .py modules",
dest="doctestmodules")
group.addoption("--doctest-report",
type=str.lower, default="udiff",
help="choose another output format for diffs on doctest failure",
choices=DOCTEST_REPORT_CHOICES,
dest="doctestreport")
group.addoption("--doctest-glob",
action="append", default=[], metavar="pat",
help="doctests file matching pattern, default: test*.txt",
dest="doctestglob")
group.addoption("--doctest-ignore-import-errors",
action="store_true", default=False,
help="ignore doctest ImportErrors",
dest="doctest_ignore_import_errors")
def pytest_collect_file(path, parent):
config = parent.config
if path.ext == ".py":
if config.option.doctestmodules:
return DoctestModule(path, parent)
elif _is_doctest(config, path, parent):
return DoctestTextfile(path, parent)
def _is_doctest(config, path, parent):
if path.ext in ('.txt', '.rst') and parent.session.isinitpath(path):
return True
globs = config.getoption("doctestglob") or ['test*.txt']
for glob in globs:
if path.check(fnmatch=glob):
return True
return False
class ReprFailDoctest(TerminalRepr):
def __init__(self, reprlocation, lines):
self.reprlocation = reprlocation
self.lines = lines
def toterminal(self, tw):
for line in self.lines:
tw.line(line)
self.reprlocation.toterminal(tw)
class DoctestItem(pytest.Item):
def __init__(self, name, parent, runner=None, dtest=None):
super(DoctestItem, self).__init__(name, parent)
self.runner = runner
self.dtest = dtest
self.obj = None
self.fixture_request = None
def setup(self):
if self.dtest is not None:
self.fixture_request = _setup_fixtures(self)
globs = dict(getfixture=self.fixture_request.getfixturevalue)
for name, value in self.fixture_request.getfixturevalue('doctest_namespace').items():
globs[name] = value
self.dtest.globs.update(globs)
def runtest(self):
_check_all_skipped(self.dtest)
self.runner.run(self.dtest)
def repr_failure(self, excinfo):
import doctest
if excinfo.errisinstance((doctest.DocTestFailure,
doctest.UnexpectedException)):
doctestfailure = excinfo.value
example = doctestfailure.example
test = doctestfailure.test
filename = test.filename
if test.lineno is None:
lineno = None
else:
lineno = test.lineno + example.lineno + 1
message = excinfo.type.__name__
reprlocation = ReprFileLocation(filename, lineno, message)
checker = _get_checker()
report_choice = _get_report_choice(self.config.getoption("doctestreport"))
if lineno is not None:
lines = doctestfailure.test.docstring.splitlines(False)
# add line numbers to the left of the error message
lines = ["%03d %s" % (i + test.lineno + 1, x)
for (i, x) in enumerate(lines)]
# trim docstring error lines to 10
lines = lines[example.lineno - 9:example.lineno + 1]
else:
lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example']
indent = '>>>'
for line in example.source.splitlines():
lines.append('??? %s %s' % (indent, line))
indent = '...'
if excinfo.errisinstance(doctest.DocTestFailure):
lines += checker.output_difference(example,
doctestfailure.got, report_choice).split("\n")
else:
inner_excinfo = ExceptionInfo(excinfo.value.exc_info)
lines += ["UNEXPECTED EXCEPTION: %s" %
repr(inner_excinfo.value)]
lines += traceback.format_exception(*excinfo.value.exc_info)
return ReprFailDoctest(reprlocation, lines)
else:
return super(DoctestItem, self).repr_failure(excinfo)
def reportinfo(self):
return self.fspath, None, "[doctest] %s" % self.name
def _get_flag_lookup():
import doctest
return dict(DONT_ACCEPT_TRUE_FOR_1=doctest.DONT_ACCEPT_TRUE_FOR_1,
DONT_ACCEPT_BLANKLINE=doctest.DONT_ACCEPT_BLANKLINE,
NORMALIZE_WHITESPACE=doctest.NORMALIZE_WHITESPACE,
ELLIPSIS=doctest.ELLIPSIS,
IGNORE_EXCEPTION_DETAIL=doctest.IGNORE_EXCEPTION_DETAIL,
COMPARISON_FLAGS=doctest.COMPARISON_FLAGS,
ALLOW_UNICODE=_get_allow_unicode_flag(),
ALLOW_BYTES=_get_allow_bytes_flag(),
)
def get_optionflags(parent):
optionflags_str = parent.config.getini("doctest_optionflags")
flag_lookup_table = _get_flag_lookup()
flag_acc = 0
for flag in optionflags_str:
flag_acc |= flag_lookup_table[flag]
return flag_acc
class DoctestTextfile(pytest.Module):
obj = None
def collect(self):
import doctest
# inspired by doctest.testfile; ideally we would use it directly,
# but it doesn't support passing a custom checker
text = self.fspath.read()
filename = str(self.fspath)
name = self.fspath.basename
globs = {'__name__': '__main__'}
optionflags = get_optionflags(self)
runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
checker=_get_checker())
parser = doctest.DocTestParser()
test = parser.get_doctest(text, globs, name, filename, 0)
if test.examples:
yield DoctestItem(test.name, self, runner, test)
def _check_all_skipped(test):
"""raises pytest.skip() if all examples in the given DocTest have the SKIP
option set.
"""
import doctest
all_skipped = all(x.options.get(doctest.SKIP, False) for x in test.examples)
if all_skipped:
pytest.skip('all tests skipped by +SKIP option')
class DoctestModule(pytest.Module):
def collect(self):
import doctest
if self.fspath.basename == "conftest.py":
module = self.config.pluginmanager._importconftest(self.fspath)
else:
try:
module = self.fspath.pyimport()
except ImportError:
if self.config.getvalue('doctest_ignore_import_errors'):
pytest.skip('unable to import module %r' % self.fspath)
else:
raise
# uses internal doctest module parsing mechanism
finder = doctest.DocTestFinder()
optionflags = get_optionflags(self)
runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
checker=_get_checker())
for test in finder.find(module, module.__name__):
if test.examples: # skip empty doctests
yield DoctestItem(test.name, self, runner, test)
def _setup_fixtures(doctest_item):
"""
Used by DoctestTextfile and DoctestItem to set up fixture information.
"""
def func():
pass
doctest_item.funcargs = {}
fm = doctest_item.session._fixturemanager
doctest_item._fixtureinfo = fm.getfixtureinfo(node=doctest_item, func=func,
cls=None, funcargs=False)
fixture_request = FixtureRequest(doctest_item)
fixture_request._fillfixtures()
return fixture_request
def _get_checker():
"""
Returns a doctest.OutputChecker subclass that takes into account the
ALLOW_UNICODE option to ignore u'' prefixes in strings and ALLOW_BYTES
to strip b'' prefixes.
Useful when the same doctest should run in Python 2 and Python 3.
An inner class is used to avoid importing "doctest" at the module
level.
"""
if hasattr(_get_checker, 'LiteralsOutputChecker'):
return _get_checker.LiteralsOutputChecker()
import doctest
import re
class LiteralsOutputChecker(doctest.OutputChecker):
"""
Copied from doctest_nose_plugin.py from the nltk project:
https://github.com/nltk/nltk
Further extended to also support byte literals.
"""
_unicode_literal_re = re.compile(r"(\W|^)[uU]([rR]?[\'\"])", re.UNICODE)
_bytes_literal_re = re.compile(r"(\W|^)[bB]([rR]?[\'\"])", re.UNICODE)
def check_output(self, want, got, optionflags):
res = doctest.OutputChecker.check_output(self, want, got,
optionflags)
if res:
return True
allow_unicode = optionflags & _get_allow_unicode_flag()
allow_bytes = optionflags & _get_allow_bytes_flag()
if not allow_unicode and not allow_bytes:
return False
else: # pragma: no cover
def remove_prefixes(regex, txt):
return re.sub(regex, r'\1\2', txt)
if allow_unicode:
want = remove_prefixes(self._unicode_literal_re, want)
got = remove_prefixes(self._unicode_literal_re, got)
if allow_bytes:
want = remove_prefixes(self._bytes_literal_re, want)
got = remove_prefixes(self._bytes_literal_re, got)
res = doctest.OutputChecker.check_output(self, want, got,
optionflags)
return res
_get_checker.LiteralsOutputChecker = LiteralsOutputChecker
return _get_checker.LiteralsOutputChecker()
def _get_allow_unicode_flag():
"""
Registers and returns the ALLOW_UNICODE flag.
"""
import doctest
return doctest.register_optionflag('ALLOW_UNICODE')
def _get_allow_bytes_flag():
"""
Registers and returns the ALLOW_BYTES flag.
"""
import doctest
return doctest.register_optionflag('ALLOW_BYTES')
def _get_report_choice(key):
"""
This function returns the actual `doctest` module flag value; we want to do it as late as possible to avoid
importing `doctest` and all its dependencies when parsing options, as it adds overhead and breaks tests.
"""
import doctest
return {
DOCTEST_REPORT_CHOICE_UDIFF: doctest.REPORT_UDIFF,
DOCTEST_REPORT_CHOICE_CDIFF: doctest.REPORT_CDIFF,
DOCTEST_REPORT_CHOICE_NDIFF: doctest.REPORT_NDIFF,
DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE: doctest.REPORT_ONLY_FIRST_FAILURE,
DOCTEST_REPORT_CHOICE_NONE: 0,
}[key]
@pytest.fixture(scope='session')
def doctest_namespace():
"""
Inject names into the doctest namespace.
"""
return dict()
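# Usage sketch (illustrative, assumed conftest.py): inject a name so doctests
# can refer to it without importing it themselves:
#
#     import numpy
#     import pytest
#
#     @pytest.fixture(autouse=True)
#     def add_np(doctest_namespace):
#         doctest_namespace['np'] = numpy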

File diff suppressed because it is too large


@@ -1,45 +0,0 @@
"""
Provides a function to report all internal modules for using freezing tools
pytest
"""
def pytest_namespace():
return {'freeze_includes': freeze_includes}
def freeze_includes():
"""
Returns a list of module names used by py.test that should be
included by cx_freeze.
"""
import py
import _pytest
result = list(_iter_all_modules(py))
result += list(_iter_all_modules(_pytest))
return result
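# Usage sketch (illustrative, assumed cx_Freeze setup.py):
#
#     from cx_Freeze import setup, Executable
#     import pytest
#
#     setup(
#         name="app_main",
#         executables=[Executable("app_main.py")],
#         options={"build_exe": {"includes": pytest.freeze_includes()}},
#     )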
def _iter_all_modules(package, prefix=''):
"""
Iterates over the names of all modules that can be found in the given
package, recursively.
Example:
_iter_all_modules(_pytest) ->
['_pytest.assertion.newinterpret',
'_pytest.capture',
'_pytest.core',
...
]
"""
import os
import pkgutil
if type(package) is not str:
path, prefix = package.__path__[0], package.__name__ + '.'
else:
path = package
for _, name, is_package in pkgutil.iter_modules([path]):
if is_package:
for m in _iter_all_modules(os.path.join(path, name), prefix=name + '.'):
yield prefix + m
else:
yield prefix + name


@@ -1,144 +0,0 @@
""" version info, help messages, tracing configuration. """
import py
import pytest
import os, sys
def pytest_addoption(parser):
group = parser.getgroup('debugconfig')
group.addoption('--version', action="store_true",
help="display pytest lib version and import information.")
group._addoption("-h", "--help", action="store_true", dest="help",
help="show help message and configuration info")
group._addoption('-p', action="append", dest="plugins", default = [],
metavar="name",
help="early-load given plugin (multi-allowed). "
"To avoid loading of plugins, use the `no:` prefix, e.g. "
"`no:doctest`.")
group.addoption('--traceconfig', '--trace-config',
action="store_true", default=False,
help="trace considerations of conftest.py files."),
group.addoption('--debug',
action="store_true", dest="debug", default=False,
help="store internal tracing debug information in 'pytestdebug.log'.")
group._addoption(
'-o', '--override-ini', nargs='*', dest="override_ini",
action="append",
help="override config option with option=value style, e.g. `-o xfail_strict=True`.")
@pytest.hookimpl(hookwrapper=True)
def pytest_cmdline_parse():
outcome = yield
config = outcome.get_result()
if config.option.debug:
path = os.path.abspath("pytestdebug.log")
debugfile = open(path, 'w')
debugfile.write("versions pytest-%s, py-%s, "
"python-%s\ncwd=%s\nargs=%s\n\n" %(
pytest.__version__, py.__version__,
".".join(map(str, sys.version_info)),
os.getcwd(), config._origargs))
config.trace.root.setwriter(debugfile.write)
undo_tracing = config.pluginmanager.enable_tracing()
sys.stderr.write("writing pytestdebug information to %s\n" % path)
def unset_tracing():
debugfile.close()
sys.stderr.write("wrote pytestdebug information to %s\n" %
debugfile.name)
config.trace.root.setwriter(None)
undo_tracing()
config.add_cleanup(unset_tracing)
def pytest_cmdline_main(config):
if config.option.version:
p = py.path.local(pytest.__file__)
sys.stderr.write("This is pytest version %s, imported from %s\n" %
(pytest.__version__, p))
plugininfo = getpluginversioninfo(config)
if plugininfo:
for line in plugininfo:
sys.stderr.write(line + "\n")
return 0
elif config.option.help:
config._do_configure()
showhelp(config)
config._ensure_unconfigure()
return 0
def showhelp(config):
reporter = config.pluginmanager.get_plugin('terminalreporter')
tw = reporter._tw
tw.write(config._parser.optparser.format_help())
tw.line()
tw.line()
tw.line("[pytest] ini-options in the first "
"pytest.ini|tox.ini|setup.cfg file found:")
tw.line()
for name in config._parser._ininames:
help, type, default = config._parser._inidict[name]
if type is None:
type = "string"
spec = "%s (%s)" % (name, type)
line = " %-24s %s" %(spec, help)
tw.line(line[:tw.fullwidth])
tw.line()
tw.line("environment variables:")
vars = [
("PYTEST_ADDOPTS", "extra command line options"),
("PYTEST_PLUGINS", "comma-separated plugins to load during startup"),
("PYTEST_DEBUG", "set to enable debug tracing of pytest's internals")
]
for name, help in vars:
tw.line(" %-24s %s" % (name, help))
tw.line()
tw.line()
tw.line("to see available markers type: pytest --markers")
tw.line("to see available fixtures type: pytest --fixtures")
tw.line("(shown according to specified file_or_dir or current dir "
"if not specified)")
for warningreport in reporter.stats.get('warnings', []):
tw.line("warning : " + warningreport.message, red=True)
return
conftest_options = [
('pytest_plugins', 'list of plugin names to load'),
]
def getpluginversioninfo(config):
lines = []
plugininfo = config.pluginmanager.list_plugin_distinfo()
if plugininfo:
lines.append("setuptools registered plugins:")
for plugin, dist in plugininfo:
loc = getattr(plugin, '__file__', repr(plugin))
content = "%s-%s at %s" % (dist.project_name, dist.version, loc)
lines.append(" " + content)
return lines
def pytest_report_header(config):
lines = []
if config.option.debug or config.option.traceconfig:
lines.append("using: pytest-%s pylib-%s" %
(pytest.__version__,py.__version__))
verinfo = getpluginversioninfo(config)
if verinfo:
lines.extend(verinfo)
if config.option.traceconfig:
lines.append("active plugins:")
items = config.pluginmanager.list_name_plugin()
for name, plugin in items:
if hasattr(plugin, '__file__'):
r = plugin.__file__
else:
r = repr(plugin)
lines.append(" %-20s: %s" %(name, r))
return lines


@@ -1,314 +0,0 @@
""" hook specifications for pytest plugins, invoked from main.py and builtin plugins. """
from _pytest._pluggy import HookspecMarker
hookspec = HookspecMarker("pytest")
# -------------------------------------------------------------------------
# Initialization hooks called for every plugin
# -------------------------------------------------------------------------
@hookspec(historic=True)
def pytest_addhooks(pluginmanager):
"""called at plugin registration time to allow adding new hooks via a call to
pluginmanager.add_hookspecs(module_or_class, prefix)."""
@hookspec(historic=True)
def pytest_namespace():
"""return dict of name->object to be made globally available in
the pytest namespace. This hook is called at plugin registration
time.
"""
@hookspec(historic=True)
def pytest_plugin_registered(plugin, manager):
""" a new pytest plugin got registered. """
@hookspec(historic=True)
def pytest_addoption(parser):
"""register argparse-style options and ini-style config values,
called once at the beginning of a test run.
.. note::
This function should be implemented only in plugins or ``conftest.py``
files situated at the tests root directory due to how pytest
:ref:`discovers plugins during startup <pluginorder>`.
:arg parser: To add command line options, call
:py:func:`parser.addoption(...) <_pytest.config.Parser.addoption>`.
To add ini-file values call :py:func:`parser.addini(...)
<_pytest.config.Parser.addini>`.
Options can later be accessed through the
:py:class:`config <_pytest.config.Config>` object, respectively:
- :py:func:`config.getoption(name) <_pytest.config.Config.getoption>` to
retrieve the value of a command line option.
- :py:func:`config.getini(name) <_pytest.config.Config.getini>` to retrieve
a value read from an ini-style file.
The config object is passed around on many internal objects via the ``.config``
attribute or can be retrieved as the ``pytestconfig`` fixture or accessed
via (deprecated) ``pytest.config``.
"""
@hookspec(historic=True)
def pytest_configure(config):
""" called after command line options have been parsed
and all plugins and initial conftest files been loaded.
This hook is called for every plugin.
"""
# -------------------------------------------------------------------------
# Bootstrapping hooks called for plugins registered early enough:
# internal and 3rd party plugins as well as directly
# discoverable conftest.py local plugins.
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_cmdline_parse(pluginmanager, args):
"""return initialized config object, parsing the specified args. """
def pytest_cmdline_preparse(config, args):
"""(deprecated) modify command line arguments before option parsing. """
@hookspec(firstresult=True)
def pytest_cmdline_main(config):
""" called for performing the main command line action. The default
implementation will invoke the configure hooks and runtest_mainloop. """
def pytest_load_initial_conftests(early_config, parser, args):
""" implements the loading of initial conftest files ahead
of command line option parsing. """
# -------------------------------------------------------------------------
# collection hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_collection(session):
""" perform the collection protocol for the given session. """
def pytest_collection_modifyitems(session, config, items):
""" called after collection has been performed, may filter or re-order
the items in-place."""
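# An illustrative implementation (not part of this spec module):
#
#     def pytest_collection_modifyitems(session, config, items):
#         items.sort(key=lambda item: item.nodeid)  # deterministic order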
def pytest_collection_finish(session):
""" called after collection has been performed and modified. """
@hookspec(firstresult=True)
def pytest_ignore_collect(path, config):
""" return True to prevent considering this path for collection.
This hook is consulted for all files and directories prior to calling
more specific hooks.
"""
@hookspec(firstresult=True)
def pytest_collect_directory(path, parent):
""" called before traversing a directory for collection files. """
def pytest_collect_file(path, parent):
""" return collection Node or None for the given path. Any new node
needs to have the specified ``parent`` as a parent."""
# logging hooks for collection
def pytest_collectstart(collector):
""" collector starts collecting. """
def pytest_itemcollected(item):
""" we just collected a test item. """
def pytest_collectreport(report):
""" collector finished collecting. """
def pytest_deselected(items):
""" called for test items deselected by keyword. """
@hookspec(firstresult=True)
def pytest_make_collect_report(collector):
""" perform ``collector.collect()`` and return a CollectReport. """
# -------------------------------------------------------------------------
# Python test function related hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_pycollect_makemodule(path, parent):
""" return a Module collector or None for the given path.
This hook will be called for each matching test module path.
The pytest_collect_file hook needs to be used if you want to
create test modules for files that do not match as a test module.
"""
@hookspec(firstresult=True)
def pytest_pycollect_makeitem(collector, name, obj):
""" return custom item/collector for a python object in a module, or None. """
@hookspec(firstresult=True)
def pytest_pyfunc_call(pyfuncitem):
""" call underlying test function. """
def pytest_generate_tests(metafunc):
""" generate (multiple) parametrized calls to a test function."""
@hookspec(firstresult=True)
def pytest_make_parametrize_id(config, val):
"""Return a user-friendly string representation of the given ``val`` that will be used
by @pytest.mark.parametrize calls. Return None if the hook doesn't know about ``val``.
"""
# -------------------------------------------------------------------------
# generic runtest related hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_runtestloop(session):
""" called for performing the main runtest loop
(after collection finished). """
def pytest_itemstart(item, node):
""" (deprecated, use pytest_runtest_logstart). """
@hookspec(firstresult=True)
def pytest_runtest_protocol(item, nextitem):
""" implements the runtest_setup/call/teardown protocol for
the given test item, including capturing exceptions and calling
reporting hooks.
:arg item: test item for which the runtest protocol is performed.
:arg nextitem: the scheduled-to-be-next test item (or None if this
is the end my friend). This argument is passed on to
:py:func:`pytest_runtest_teardown`.
:return boolean: True if no further hook implementations should be invoked.
"""
def pytest_runtest_logstart(nodeid, location):
""" signal the start of running a single test item. """
def pytest_runtest_setup(item):
""" called before ``pytest_runtest_call(item)``. """
def pytest_runtest_call(item):
""" called to execute the test ``item``. """
def pytest_runtest_teardown(item, nextitem):
""" called after ``pytest_runtest_call``.
:arg nextitem: the scheduled-to-be-next test item (None if no further
test item is scheduled). This argument can be used to
perform exact teardowns, i.e. calling just enough finalizers
so that nextitem only needs to call setup-functions.
"""
@hookspec(firstresult=True)
def pytest_runtest_makereport(item, call):
""" return a :py:class:`_pytest.runner.TestReport` object
for the given :py:class:`pytest.Item` and
:py:class:`_pytest.runner.CallInfo`.
"""
def pytest_runtest_logreport(report):
""" process a test setup/call/teardown report relating to
the respective phase of executing a test. """
# -------------------------------------------------------------------------
# Fixture related hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_fixture_setup(fixturedef, request):
""" performs fixture setup execution. """
def pytest_fixture_post_finalizer(fixturedef):
""" called after fixture teardown, but before the cache is cleared so
the fixture result cache ``fixturedef.cached_result`` can
still be accessed."""
# -------------------------------------------------------------------------
# test session related hooks
# -------------------------------------------------------------------------
def pytest_sessionstart(session):
""" before session.main() is called. """
def pytest_sessionfinish(session, exitstatus):
""" whole test run finishes. """
def pytest_unconfigure(config):
""" called before test process is exited. """
# -------------------------------------------------------------------------
# hooks for customising the assert methods
# -------------------------------------------------------------------------
def pytest_assertrepr_compare(config, op, left, right):
"""return explanation for comparisons in failing assert expressions.
Return None for no custom explanation, otherwise return a list
of strings. The strings will be joined by newlines but any newlines
*in* a string will be escaped. Note that all but the first line will
be indented slightly; the intention is for the first line to be a summary.
"""
# -------------------------------------------------------------------------
# hooks for influencing reporting (invoked from _pytest_terminal)
# -------------------------------------------------------------------------
def pytest_report_header(config, startdir):
""" return a string to be displayed as header info for terminal reporting."""
@hookspec(firstresult=True)
def pytest_report_teststatus(report):
""" return result-category, shortletter and verbose word for reporting."""
def pytest_terminal_summary(terminalreporter, exitstatus):
""" add additional section in terminal summary reporting. """
@hookspec(historic=True)
def pytest_logwarning(message, code, nodeid, fslocation):
""" process a warning specified by a message, a code string,
a nodeid and fslocation (both of which may be None
if the warning is not tied to a particular node/location)."""
# -------------------------------------------------------------------------
# doctest hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_doctest_prepare_content(content):
""" return processed content for a given doctest"""
# -------------------------------------------------------------------------
# error handling and internal debugging hooks
# -------------------------------------------------------------------------
def pytest_internalerror(excrepr, excinfo):
""" called for internal errors. """
def pytest_keyboard_interrupt(excinfo):
""" called for keyboard interrupt. """
def pytest_exception_interact(node, call, report):
"""called when an exception was raised which can potentially be
interactively handled.
This hook is only called if an exception was raised
that is not an internal exception like ``skip.Exception``.
"""
def pytest_enter_pdb(config):
""" called upon pdb.set_trace(), can be used by plugins to take special
action just before the python debugger enters in interactive mode.
:arg config: pytest config object
:type config: _pytest.config.Config
"""


@@ -1,413 +0,0 @@
"""
report test results in JUnit-XML format,
for use with Jenkins and build integration servers.
Based on initial code from Ross Lawley.
"""
# Output conforms to https://github.com/jenkinsci/xunit-plugin/blob/master/
# src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd
import functools
import py
import os
import re
import sys
import time
import pytest
from _pytest.config import filename_arg
# Python 2.X and 3.X compatibility
if sys.version_info[0] < 3:
from codecs import open
else:
unichr = chr
unicode = str
long = int
class Junit(py.xml.Namespace):
pass
# We need to get the subset of the invalid unicode ranges according to
# XML 1.0 which are valid in this python build. Hence we calculate
# this dynamically instead of hardcoding it. The spec range of valid
# chars is: Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD]
# | [#x10000-#x10FFFF]
_legal_chars = (0x09, 0x0A, 0x0d)
_legal_ranges = (
(0x20, 0x7E), (0x80, 0xD7FF), (0xE000, 0xFFFD), (0x10000, 0x10FFFF),
)
_legal_xml_re = [
unicode("%s-%s") % (unichr(low), unichr(high))
for (low, high) in _legal_ranges if low < sys.maxunicode
]
_legal_xml_re = [unichr(x) for x in _legal_chars] + _legal_xml_re
illegal_xml_re = re.compile(unicode('[^%s]') % unicode('').join(_legal_xml_re))
del _legal_chars
del _legal_ranges
del _legal_xml_re
_py_ext_re = re.compile(r"\.py$")
def bin_xml_escape(arg):
def repl(matchobj):
i = ord(matchobj.group())
if i <= 0xFF:
return unicode('#x%02X') % i
else:
return unicode('#x%04X') % i
return py.xml.raw(illegal_xml_re.sub(repl, py.xml.escape(arg)))
class _NodeReporter(object):
def __init__(self, nodeid, xml):
self.id = nodeid
self.xml = xml
self.add_stats = self.xml.add_stats
self.duration = 0
self.properties = []
self.nodes = []
self.testcase = None
self.attrs = {}
def append(self, node):
self.xml.add_stats(type(node).__name__)
self.nodes.append(node)
def add_property(self, name, value):
self.properties.append((str(name), bin_xml_escape(value)))
def make_properties_node(self):
"""Return a Junit node containing custom properties, if any.
"""
if self.properties:
return Junit.properties([
Junit.property(name=name, value=value)
for name, value in self.properties
])
return ''
def record_testreport(self, testreport):
assert not self.testcase
names = mangle_test_address(testreport.nodeid)
classnames = names[:-1]
if self.xml.prefix:
classnames.insert(0, self.xml.prefix)
attrs = {
"classname": ".".join(classnames),
"name": bin_xml_escape(names[-1]),
"file": testreport.location[0],
}
if testreport.location[1] is not None:
attrs["line"] = testreport.location[1]
self.attrs = attrs
def to_xml(self):
testcase = Junit.testcase(time=self.duration, **self.attrs)
testcase.append(self.make_properties_node())
for node in self.nodes:
testcase.append(node)
return testcase
def _add_simple(self, kind, message, data=None):
data = bin_xml_escape(data)
node = kind(data, message=message)
self.append(node)
def _write_captured_output(self, report):
for capname in ('out', 'err'):
content = getattr(report, 'capstd' + capname)
if content:
tag = getattr(Junit, 'system-' + capname)
self.append(tag(bin_xml_escape(content)))
def append_pass(self, report):
self.add_stats('passed')
self._write_captured_output(report)
def append_failure(self, report):
# msg = str(report.longrepr.reprtraceback.extraline)
if hasattr(report, "wasxfail"):
self._add_simple(
Junit.skipped,
"xfail-marked test passes unexpectedly")
else:
if hasattr(report.longrepr, "reprcrash"):
message = report.longrepr.reprcrash.message
elif isinstance(report.longrepr, (unicode, str)):
message = report.longrepr
else:
message = str(report.longrepr)
message = bin_xml_escape(message)
fail = Junit.failure(message=message)
fail.append(bin_xml_escape(report.longrepr))
self.append(fail)
self._write_captured_output(report)
def append_collect_error(self, report):
# msg = str(report.longrepr.reprtraceback.extraline)
self.append(Junit.error(bin_xml_escape(report.longrepr),
message="collection failure"))
def append_collect_skipped(self, report):
self._add_simple(
Junit.skipped, "collection skipped", report.longrepr)
def append_error(self, report):
if getattr(report, 'when', None) == 'teardown':
msg = "test teardown failure"
else:
msg = "test setup failure"
self._add_simple(
Junit.error, msg, report.longrepr)
self._write_captured_output(report)
def append_skipped(self, report):
if hasattr(report, "wasxfail"):
self._add_simple(
Junit.skipped, "expected test failure", report.wasxfail
)
else:
filename, lineno, skipreason = report.longrepr
if skipreason.startswith("Skipped: "):
skipreason = bin_xml_escape(skipreason[9:])
self.append(
Junit.skipped("%s:%s: %s" % (filename, lineno, skipreason),
type="pytest.skip",
message=skipreason))
self._write_captured_output(report)
def finalize(self):
data = self.to_xml().unicode(indent=0)
self.__dict__.clear()
self.to_xml = lambda: py.xml.raw(data)
@pytest.fixture
def record_xml_property(request):
"""Add extra xml properties to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
"""
request.node.warn(
code='C3',
message='record_xml_property is an experimental feature',
)
xml = getattr(request.config, "_xml", None)
if xml is not None:
node_reporter = xml.node_reporter(request.node.nodeid)
return node_reporter.add_property
else:
def add_property_noop(name, value):
pass
return add_property_noop
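# Usage sketch (illustrative; "example_key" is made up): the property appears
# on the test's <testcase> tag in the junit-xml output.
#
#     def test_function(record_xml_property):
#         record_xml_property("example_key", 1)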
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting")
group.addoption(
'--junitxml', '--junit-xml',
action="store",
dest="xmlpath",
metavar="path",
type=functools.partial(filename_arg, optname="--junitxml"),
default=None,
help="create junit-xml style report file at given path.")
group.addoption(
'--junitprefix', '--junit-prefix',
action="store",
metavar="str",
default=None,
help="prepend prefix to classnames in junit-xml output")
def pytest_configure(config):
xmlpath = config.option.xmlpath
# prevent opening xmllog on slave nodes (xdist)
if xmlpath and not hasattr(config, 'slaveinput'):
config._xml = LogXML(xmlpath, config.option.junitprefix)
config.pluginmanager.register(config._xml)
def pytest_unconfigure(config):
xml = getattr(config, '_xml', None)
if xml:
del config._xml
config.pluginmanager.unregister(xml)
def mangle_test_address(address):
path, possible_open_bracket, params = address.partition('[')
names = path.split("::")
try:
names.remove('()')
except ValueError:
pass
# convert file path to dotted path
names[0] = names[0].replace("/", '.')
names[0] = _py_ext_re.sub("", names[0])
# put any params back
names[-1] += possible_open_bracket + params
return names
class LogXML(object):
def __init__(self, logfile, prefix):
logfile = os.path.expanduser(os.path.expandvars(logfile))
self.logfile = os.path.normpath(os.path.abspath(logfile))
self.prefix = prefix
self.stats = dict.fromkeys([
'error',
'passed',
'failure',
'skipped',
], 0)
self.node_reporters = {} # nodeid -> _NodeReporter
self.node_reporters_ordered = []
self.global_properties = []
def finalize(self, report):
nodeid = getattr(report, 'nodeid', report)
# local hack to handle xdist report order
slavenode = getattr(report, 'node', None)
reporter = self.node_reporters.pop((nodeid, slavenode))
if reporter is not None:
reporter.finalize()
def node_reporter(self, report):
nodeid = getattr(report, 'nodeid', report)
# local hack to handle xdist report order
slavenode = getattr(report, 'node', None)
key = nodeid, slavenode
if key in self.node_reporters:
# TODO: breaks for --dist=each
return self.node_reporters[key]
reporter = _NodeReporter(nodeid, self)
self.node_reporters[key] = reporter
self.node_reporters_ordered.append(reporter)
return reporter
def add_stats(self, key):
if key in self.stats:
self.stats[key] += 1
def _opentestcase(self, report):
reporter = self.node_reporter(report)
reporter.record_testreport(report)
return reporter
def pytest_runtest_logreport(self, report):
"""handle a setup/call/teardown report, generating the appropriate
xml tags as necessary.
note: due to plugins like xdist, this hook may be called in interlaced
order with reports from other nodes. for example:
usual call order:
-> setup node1
-> call node1
-> teardown node1
-> setup node2
-> call node2
-> teardown node2
possible call order in xdist:
-> setup node1
-> call node1
-> setup node2
-> call node2
-> teardown node2
-> teardown node1
"""
if report.passed:
if report.when == "call": # ignore setup/teardown
reporter = self._opentestcase(report)
reporter.append_pass(report)
elif report.failed:
reporter = self._opentestcase(report)
if report.when == "call":
reporter.append_failure(report)
else:
reporter.append_error(report)
elif report.skipped:
reporter = self._opentestcase(report)
reporter.append_skipped(report)
self.update_testcase_duration(report)
if report.when == "teardown":
self.finalize(report)
def update_testcase_duration(self, report):
"""accumulates total duration for nodeid from given report and updates
the Junit.testcase with the new total if already created.
"""
reporter = self.node_reporter(report)
reporter.duration += getattr(report, 'duration', 0.0)
def pytest_collectreport(self, report):
if not report.passed:
reporter = self._opentestcase(report)
if report.failed:
reporter.append_collect_error(report)
else:
reporter.append_collect_skipped(report)
def pytest_internalerror(self, excrepr):
reporter = self.node_reporter('internal')
reporter.attrs.update(classname="pytest", name='internal')
reporter._add_simple(Junit.error, 'internal error', excrepr)
def pytest_sessionstart(self):
self.suite_start_time = time.time()
def pytest_sessionfinish(self):
dirname = os.path.dirname(os.path.abspath(self.logfile))
if not os.path.isdir(dirname):
os.makedirs(dirname)
logfile = open(self.logfile, 'w', encoding='utf-8')
suite_stop_time = time.time()
suite_time_delta = suite_stop_time - self.suite_start_time
numtests = (self.stats['passed'] + self.stats['failure'] +
self.stats['skipped'] + self.stats['error'])
logfile.write('<?xml version="1.0" encoding="utf-8"?>')
logfile.write(Junit.testsuite(
self._get_global_properties_node(),
[x.to_xml() for x in self.node_reporters_ordered],
name="pytest",
errors=self.stats['error'],
failures=self.stats['failure'],
skips=self.stats['skipped'],
tests=numtests,
time="%.3f" % suite_time_delta, ).unicode(indent=0))
logfile.close()
def pytest_terminal_summary(self, terminalreporter):
terminalreporter.write_sep("-",
"generated xml file: %s" % (self.logfile))
def add_global_property(self, name, value):
self.global_properties.append((str(name), bin_xml_escape(value)))
def _get_global_properties_node(self):
"""Return a Junit node containing custom properties, if any.
"""
if self.global_properties:
return Junit.properties(
[
Junit.property(name=name, value=value)
for name, value in self.global_properties
]
)
return ''
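# Illustrative sketch (not part of the original module): given a hypothetical
# LogXML instance log_xml, calling
#     log_xml.add_global_property("build", "123")
# makes the generated testsuite start with
#     <properties><property name="build" value="123"/></properties>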

View File

@@ -1,762 +0,0 @@
""" core implementation of testing process: init, session, runtest loop. """
import functools
import os
import sys
import _pytest
import _pytest._code
import py
import pytest
try:
from collections import MutableMapping as MappingMixin
except ImportError:
from UserDict import DictMixin as MappingMixin
from _pytest.config import directory_arg
from _pytest.runner import collect_one_node
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
# exitcodes for the command line
EXIT_OK = 0
EXIT_TESTSFAILED = 1
EXIT_INTERRUPTED = 2
EXIT_INTERNALERROR = 3
EXIT_USAGEERROR = 4
EXIT_NOTESTSCOLLECTED = 5
def pytest_addoption(parser):
parser.addini("norecursedirs", "directory patterns to avoid for recursion",
type="args", default=['.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg'])
parser.addini("testpaths", "directories to search for tests when no files or directories are given in the command line.",
type="args", default=[])
#parser.addini("dirpatterns",
# "patterns specifying possible locations of test files",
# type="linelist", default=["**/test_*.txt",
# "**/test_*.py", "**/*_test.py"]
#)
group = parser.getgroup("general", "running and selection options")
group._addoption('-x', '--exitfirst', action="store_const",
dest="maxfail", const=1,
help="exit instantly on first error or failed test."),
group._addoption('--maxfail', metavar="num",
action="store", type=int, dest="maxfail", default=0,
help="exit after first num failures or errors.")
group._addoption('--strict', action="store_true",
help="run pytest in strict mode, warnings become errors.")
group._addoption("-c", metavar="file", type=str, dest="inifilename",
help="load configuration from `file` instead of trying to locate one of the implicit configuration files.")
group._addoption("--continue-on-collection-errors", action="store_true",
default=False, dest="continue_on_collection_errors",
help="Force test execution even if collection errors occur.")
group = parser.getgroup("collect", "collection")
group.addoption('--collectonly', '--collect-only', action="store_true",
help="only collect tests, don't execute them."),
group.addoption('--pyargs', action="store_true",
help="try to interpret all arguments as python packages.")
group.addoption("--ignore", action="append", metavar="path",
help="ignore path during collection (multi-allowed).")
# when changing this to --conf-cut-dir, config.py Conftest.setinitial
# needs upgrading as well
group.addoption('--confcutdir', dest="confcutdir", default=None,
metavar="dir", type=functools.partial(directory_arg, optname="--confcutdir"),
help="only load conftest.py's relative to specified dir.")
group.addoption('--noconftest', action="store_true",
dest="noconftest", default=False,
help="Don't load any conftest.py files.")
group.addoption('--keepduplicates', '--keep-duplicates', action="store_true",
dest="keepduplicates", default=False,
help="Keep duplicate tests.")
group = parser.getgroup("debugconfig",
"test session debugging and configuration")
group.addoption('--basetemp', dest="basetemp", default=None, metavar="dir",
help="base temporary directory for this test run.")
def pytest_namespace():
collect = dict(Item=Item, Collector=Collector, File=File, Session=Session)
return dict(collect=collect)
def pytest_configure(config):
pytest.config = config # compatibility
def wrap_session(config, doit):
"""Skeleton command line program"""
session = Session(config)
session.exitstatus = EXIT_OK
initstate = 0
try:
try:
config._do_configure()
initstate = 1
config.hook.pytest_sessionstart(session=session)
initstate = 2
session.exitstatus = doit(config, session) or 0
except pytest.UsageError:
raise
except KeyboardInterrupt:
excinfo = _pytest._code.ExceptionInfo()
if initstate < 2 and isinstance(
excinfo.value, pytest.exit.Exception):
sys.stderr.write('{0}: {1}\n'.format(
excinfo.typename, excinfo.value.msg))
config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
session.exitstatus = EXIT_INTERRUPTED
except:
excinfo = _pytest._code.ExceptionInfo()
config.notify_exception(excinfo, config.option)
session.exitstatus = EXIT_INTERNALERROR
if excinfo.errisinstance(SystemExit):
sys.stderr.write("mainloop: caught Spurious SystemExit!\n")
finally:
excinfo = None # Explicitly break reference cycle.
session.startdir.chdir()
if initstate >= 2:
config.hook.pytest_sessionfinish(
session=session,
exitstatus=session.exitstatus)
config._ensure_unconfigure()
return session.exitstatus
def pytest_cmdline_main(config):
return wrap_session(config, _main)
def _main(config, session):
""" default command line protocol for initialization, session,
running tests and reporting. """
config.hook.pytest_collection(session=session)
config.hook.pytest_runtestloop(session=session)
if session.testsfailed:
return EXIT_TESTSFAILED
elif session.testscollected == 0:
return EXIT_NOTESTSCOLLECTED
def pytest_collection(session):
return session.perform_collect()
def pytest_runtestloop(session):
if (session.testsfailed and
not session.config.option.continue_on_collection_errors):
raise session.Interrupted(
"%d errors during collection" % session.testsfailed)
if session.config.option.collectonly:
return True
for i, item in enumerate(session.items):
nextitem = session.items[i+1] if i+1 < len(session.items) else None
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
if session.shouldstop:
raise session.Interrupted(session.shouldstop)
return True
def pytest_ignore_collect(path, config):
p = path.dirpath()
ignore_paths = config._getconftest_pathlist("collect_ignore", path=p)
ignore_paths = ignore_paths or []
excludeopt = config.getoption("ignore")
if excludeopt:
ignore_paths.extend([py.path.local(x) for x in excludeopt])
if path in ignore_paths:
return True
# Skip duplicate paths.
keepduplicates = config.getoption("keepduplicates")
duplicate_paths = config.pluginmanager._duplicatepaths
if not keepduplicates:
if path in duplicate_paths:
return True
else:
duplicate_paths.add(path)
return False
class FSHookProxy:
def __init__(self, fspath, pm, remove_mods):
self.fspath = fspath
self.pm = pm
self.remove_mods = remove_mods
def __getattr__(self, name):
x = self.pm.subset_hook_caller(name, remove_plugins=self.remove_mods)
self.__dict__[name] = x
return x
def compatproperty(name):
def fget(self):
import warnings
warnings.warn("This usage is deprecated, please use pytest.{0} instead".format(name),
PendingDeprecationWarning, stacklevel=2)
return getattr(pytest, name)
return property(fget)
class NodeKeywords(MappingMixin):
def __init__(self, node):
self.node = node
self.parent = node.parent
self._markers = {node.name: True}
def __getitem__(self, key):
try:
return self._markers[key]
except KeyError:
if self.parent is None:
raise
return self.parent.keywords[key]
def __setitem__(self, key, value):
self._markers[key] = value
def __delitem__(self, key):
raise ValueError("cannot delete key in keywords dict")
def __iter__(self):
seen = set(self._markers)
if self.parent is not None:
seen.update(self.parent.keywords)
return iter(seen)
def __len__(self):
return len(self.__iter__())
def keys(self):
return list(self)
def __repr__(self):
return "<NodeKeywords for node %s>" % (self.node, )
class Node(object):
""" base class for Collector and Item the test collection tree.
Collector subclasses have children, Items are terminal nodes."""
def __init__(self, name, parent=None, config=None, session=None):
#: a unique name within the scope of the parent node
self.name = name
#: the parent collector node.
self.parent = parent
#: the pytest config object
self.config = config or parent.config
#: the session this node is part of
self.session = session or parent.session
#: filesystem path where this node was collected from (can be None)
self.fspath = getattr(parent, 'fspath', None)
#: keywords/markers collected from all scopes
self.keywords = NodeKeywords(self)
#: allow adding of extra keywords to use for matching
self.extra_keyword_matches = set()
# used for storing artificial fixturedefs for direct parametrization
self._name2pseudofixturedef = {}
@property
def ihook(self):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = compatproperty("Module")
Class = compatproperty("Class")
Instance = compatproperty("Instance")
Function = compatproperty("Function")
File = compatproperty("File")
Item = compatproperty("Item")
def _getcustomclass(self, name):
cls = getattr(self, name)
if cls != getattr(pytest, name):
py.log._apiwarn("2.0", "use of node.%s is deprecated, "
"use pytest_pycollect_makeitem(...) to create custom "
"collection nodes" % name)
return cls
def __repr__(self):
return "<%s %r>" %(self.__class__.__name__,
getattr(self, 'name', None))
def warn(self, code, message):
""" generate a warning with the given code and message for this
item. """
assert isinstance(code, str)
fslocation = getattr(self, "location", None)
if fslocation is None:
fslocation = getattr(self, "fspath", None)
else:
fslocation = "%s:%s" % (fslocation[0], fslocation[1] + 1)
self.ihook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
nodeid=self.nodeid, fslocation=fslocation))
# methods for ordering nodes
@property
def nodeid(self):
""" a ::-separated string denoting its collection tree address. """
try:
return self._nodeid
except AttributeError:
self._nodeid = x = self._makeid()
return x
def _makeid(self):
return self.parent.nodeid + "::" + self.name
def __hash__(self):
return hash(self.nodeid)
def setup(self):
pass
def teardown(self):
pass
def _memoizedcall(self, attrname, function):
exattrname = "_ex_" + attrname
failure = getattr(self, exattrname, None)
if failure is not None:
py.builtin._reraise(failure[0], failure[1], failure[2])
if hasattr(self, attrname):
return getattr(self, attrname)
try:
res = function()
except py.builtin._sysex:
raise
except:
failure = sys.exc_info()
setattr(self, exattrname, failure)
raise
setattr(self, attrname, res)
return res
def listchain(self):
""" return list of all parent collectors up to self,
starting from root of collection tree. """
chain = []
item = self
while item is not None:
chain.append(item)
item = item.parent
chain.reverse()
return chain
def add_marker(self, marker):
""" dynamically add a marker object to the node.
``marker`` can be a string or pytest.mark.* instance.
"""
from _pytest.mark import MarkDecorator
if isinstance(marker, py.builtin._basestring):
marker = MarkDecorator(marker)
elif not isinstance(marker, MarkDecorator):
raise ValueError("is not a string or pytest.mark.* Marker")
self.keywords[marker.name] = marker
def get_marker(self, name):
""" get a marker object from this node or None if
the node doesn't have a marker with that name. """
val = self.keywords.get(name, None)
if val is not None:
from _pytest.mark import MarkInfo, MarkDecorator
if isinstance(val, (MarkDecorator, MarkInfo)):
return val
def listextrakeywords(self):
""" Return a set of all extra keywords in self and any parents."""
extra_keywords = set()
item = self
for item in self.listchain():
extra_keywords.update(item.extra_keyword_matches)
return extra_keywords
def listnames(self):
return [x.name for x in self.listchain()]
def addfinalizer(self, fin):
""" register a function to be called when this node is finalized.
This method can only be called when this node is active
in a setup chain, for example during self.setup().
"""
self.session._setupstate.addfinalizer(fin, self)
def getparent(self, cls):
""" get the next parent node (including ourself)
which is an instance of the given class"""
current = self
while current and not isinstance(current, cls):
current = current.parent
return current
def _prunetraceback(self, excinfo):
pass
def _repr_failure_py(self, excinfo, style=None):
fm = self.session._fixturemanager
if excinfo.errisinstance(fm.FixtureLookupError):
return excinfo.value.formatrepr()
tbfilter = True
if self.config.option.fulltrace:
style="long"
else:
tb = _pytest._code.Traceback([excinfo.traceback[-1]])
self._prunetraceback(excinfo)
if len(excinfo.traceback) == 0:
excinfo.traceback = tb
tbfilter = False # prunetraceback already does it
if style == "auto":
style = "long"
# XXX should excinfo.getrepr record all data and toterminal() process it?
if style is None:
if self.config.option.tbstyle == "short":
style = "short"
else:
style = "long"
try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter)
repr_failure = _repr_failure_py
class Collector(Node):
""" Collector instances create children through collect()
and thus iteratively build a tree.
"""
class CollectError(Exception):
""" an error during collection, contains a custom message. """
def collect(self):
""" returns a list of children (items and collectors)
for this collection node.
"""
raise NotImplementedError("abstract")
def repr_failure(self, excinfo):
""" represent a collection failure. """
if excinfo.errisinstance(self.CollectError):
exc = excinfo.value
return str(exc.args[0])
return self._repr_failure_py(excinfo, style="short")
def _memocollect(self):
""" internal helper method to cache results of calling collect(). """
return self._memoizedcall('_collected', lambda: list(self.collect()))
def _prunetraceback(self, excinfo):
if hasattr(self, 'fspath'):
traceback = excinfo.traceback
ntraceback = traceback.cut(path=self.fspath)
if ntraceback == traceback:
ntraceback = ntraceback.cut(excludepath=tracebackcutdir)
excinfo.traceback = ntraceback.filter()
class FSCollector(Collector):
def __init__(self, fspath, parent=None, config=None, session=None):
fspath = py.path.local(fspath) # xxx only for test_resultlog.py?
name = fspath.basename
if parent is not None:
rel = fspath.relto(parent.fspath)
if rel:
name = rel
name = name.replace(os.sep, "/")
super(FSCollector, self).__init__(name, parent, config, session)
self.fspath = fspath
def _makeid(self):
relpath = self.fspath.relto(self.config.rootdir)
if os.sep != "/":
relpath = relpath.replace(os.sep, "/")
return relpath
class File(FSCollector):
""" base class for collecting tests from a file. """
class Item(Node):
""" a basic test invocation item. Note that for a single function
there might be multiple test invocation items.
"""
nextitem = None
def __init__(self, name, parent=None, config=None, session=None):
super(Item, self).__init__(name, parent, config, session)
self._report_sections = []
def add_report_section(self, when, key, content):
if content:
self._report_sections.append((when, key, content))
def reportinfo(self):
return self.fspath, None, ""
@property
def location(self):
try:
return self._location
except AttributeError:
location = self.reportinfo()
# bestrelpath is a quite slow function
cache = self.config.__dict__.setdefault("_bestrelpathcache", {})
try:
fspath = cache[location[0]]
except KeyError:
fspath = self.session.fspath.bestrelpath(location[0])
cache[location[0]] = fspath
location = (fspath, location[1], str(location[2]))
self._location = location
return location
class NoMatch(Exception):
""" raised if matching cannot locate a matching names. """
class Interrupted(KeyboardInterrupt):
""" signals an interrupted test run. """
__module__ = 'builtins' # for py3
class Session(FSCollector):
Interrupted = Interrupted
def __init__(self, config):
FSCollector.__init__(self, config.rootdir, parent=None,
config=config, session=self)
self.testsfailed = 0
self.testscollected = 0
self.shouldstop = False
self.trace = config.trace.root.get("collection")
self._norecursepatterns = config.getini("norecursedirs")
self.startdir = py.path.local()
self.config.pluginmanager.register(self, name="session")
def _makeid(self):
return ""
@pytest.hookimpl(tryfirst=True)
def pytest_collectstart(self):
if self.shouldstop:
raise self.Interrupted(self.shouldstop)
@pytest.hookimpl(tryfirst=True)
def pytest_runtest_logreport(self, report):
if report.failed and not hasattr(report, 'wasxfail'):
self.testsfailed += 1
maxfail = self.config.getvalue("maxfail")
if maxfail and self.testsfailed >= maxfail:
self.shouldstop = "stopping after %d failures" % (
self.testsfailed)
pytest_collectreport = pytest_runtest_logreport
def isinitpath(self, path):
return path in self._initialpaths
def gethookproxy(self, fspath):
# check if we have the common case of running
# hooks with all conftest.py files
pm = self.config.pluginmanager
my_conftestmodules = pm._getconftestmodules(fspath)
remove_mods = pm._conftest_plugins.difference(my_conftestmodules)
if remove_mods:
# one or more conftests are not in use at this fspath
proxy = FSHookProxy(fspath, pm, remove_mods)
else:
# all plugins are active for this fspath
proxy = self.config.hook
return proxy
def perform_collect(self, args=None, genitems=True):
hook = self.config.hook
try:
items = self._perform_collect(args, genitems)
hook.pytest_collection_modifyitems(session=self,
config=self.config, items=items)
finally:
hook.pytest_collection_finish(session=self)
self.testscollected = len(items)
return items
def _perform_collect(self, args, genitems):
if args is None:
args = self.config.args
self.trace("perform_collect", self, args)
self.trace.root.indent += 1
self._notfound = []
self._initialpaths = set()
self._initialparts = []
self.items = items = []
for arg in args:
parts = self._parsearg(arg)
self._initialparts.append(parts)
self._initialpaths.add(parts[0])
rep = collect_one_node(self)
self.ihook.pytest_collectreport(report=rep)
self.trace.root.indent -= 1
if self._notfound:
errors = []
for arg, exc in self._notfound:
line = "(no name %r in any of %r)" % (arg, exc.args[0])
errors.append("not found: %s\n%s" % (arg, line))
#XXX: test this
raise pytest.UsageError(*errors)
if not genitems:
return rep.result
else:
if rep.passed:
for node in rep.result:
self.items.extend(self.genitems(node))
return items
def collect(self):
for parts in self._initialparts:
arg = "::".join(map(str, parts))
self.trace("processing argument", arg)
self.trace.root.indent += 1
try:
for x in self._collect(arg):
yield x
except NoMatch:
# we are inside a make_report hook so
# we cannot directly pass through the exception
self._notfound.append((arg, sys.exc_info()[1]))
self.trace.root.indent -= 1
def _collect(self, arg):
names = self._parsearg(arg)
path = names.pop(0)
if path.check(dir=1):
assert not names, "invalid arg %r" %(arg,)
for path in path.visit(fil=lambda x: x.check(file=1),
rec=self._recurse, bf=True, sort=True):
for x in self._collectfile(path):
yield x
else:
assert path.check(file=1)
for x in self.matchnodes(self._collectfile(path), names):
yield x
def _collectfile(self, path):
ihook = self.gethookproxy(path)
if not self.isinitpath(path):
if ihook.pytest_ignore_collect(path=path, config=self.config):
return ()
return ihook.pytest_collect_file(path=path, parent=self)
def _recurse(self, path):
ihook = self.gethookproxy(path.dirpath())
if ihook.pytest_ignore_collect(path=path, config=self.config):
return
for pat in self._norecursepatterns:
if path.check(fnmatch=pat):
return False
ihook = self.gethookproxy(path)
ihook.pytest_collect_directory(path=path, parent=self)
return True
def _tryconvertpyarg(self, x):
"""Convert a dotted module name to path.
"""
import pkgutil
try:
loader = pkgutil.find_loader(x)
except ImportError:
return x
if loader is None:
return x
# This method is sometimes invoked when AssertionRewritingHook, which
# does not define a get_filename method, is already in place:
try:
path = loader.get_filename(x)
except AttributeError:
# Retrieve path from AssertionRewritingHook:
path = loader.modules[x][0].co_filename
if loader.is_package(x):
path = os.path.dirname(path)
return path
def _parsearg(self, arg):
""" return (fspath, names) tuple after checking the file exists. """
parts = str(arg).split("::")
if self.config.option.pyargs:
parts[0] = self._tryconvertpyarg(parts[0])
relpath = parts[0].replace("/", os.sep)
path = self.config.invocation_dir.join(relpath, abs=True)
if not path.check():
if self.config.option.pyargs:
raise pytest.UsageError("file or package not found: " + arg + " (missing __init__.py?)")
else:
raise pytest.UsageError("file not found: " + arg)
parts[0] = path
return parts
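# Worked example (illustrative, not part of the original module): an argument
# such as "testing/test_config.py::TestConfig::test_basic" is split on "::"
# into ["testing/test_config.py", "TestConfig", "test_basic"]; the first part
# is resolved to an existing py.path.local (a UsageError is raised if it does
# not exist) and the remaining names are later matched against collected
# nodes by matchnodes().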
def matchnodes(self, matching, names):
self.trace("matchnodes", matching, names)
self.trace.root.indent += 1
nodes = self._matchnodes(matching, names)
num = len(nodes)
self.trace("matchnodes finished -> ", num, "nodes")
self.trace.root.indent -= 1
if num == 0:
raise NoMatch(matching, names[:1])
return nodes
def _matchnodes(self, matching, names):
if not matching or not names:
return matching
name = names[0]
assert name
nextnames = names[1:]
resultnodes = []
for node in matching:
if isinstance(node, pytest.Item):
if not names:
resultnodes.append(node)
continue
assert isinstance(node, pytest.Collector)
rep = collect_one_node(node)
if rep.passed:
has_matched = False
for x in rep.result:
# TODO: remove parametrized workaround once collection structure contains parametrization
if x.name == name or x.name.split("[")[0] == name:
resultnodes.extend(self.matchnodes([x], nextnames))
has_matched = True
# XXX accept IDs that don't have "()" for class instances
if not has_matched and len(rep.result) == 1 and x.name == "()":
nextnames.insert(0, name)
resultnodes.extend(self.matchnodes([x], nextnames))
node.ihook.pytest_collectreport(report=rep)
return resultnodes
def genitems(self, node):
self.trace("genitems", node)
if isinstance(node, pytest.Item):
node.ihook.pytest_itemcollected(item=node)
yield node
else:
assert isinstance(node, pytest.Collector)
rep = collect_one_node(node)
if rep.passed:
for subnode in rep.result:
for x in self.genitems(subnode):
yield x
node.ihook.pytest_collectreport(report=rep)

View File

@@ -1,328 +0,0 @@
""" generic mechanism for marking and selecting python functions. """
import inspect
class MarkerError(Exception):
"""Error in use of a pytest marker/attribute."""
def pytest_namespace():
return {'mark': MarkGenerator()}
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption(
'-k',
action="store", dest="keyword", default='', metavar="EXPRESSION",
help="only run tests which match the given substring expression. "
"An expression is a python evaluatable expression "
"where all names are substring-matched against test names "
"and their parent classes. Example: -k 'test_method or test_"
"other' matches all test functions and classes whose name "
"contains 'test_method' or 'test_other'. "
"Additionally keywords are matched to classes and functions "
"containing extra names in their 'extra_keyword_matches' set, "
"as well as functions which have names assigned directly to them."
)
group._addoption(
"-m",
action="store", dest="markexpr", default="", metavar="MARKEXPR",
help="only run tests matching given mark expression. "
"example: -m 'mark1 and not mark2'."
)
group.addoption(
"--markers", action="store_true",
help="show markers (builtin, plugin and per-project ones)."
)
parser.addini("markers", "markers for test functions", 'linelist')
def pytest_cmdline_main(config):
import _pytest.config
if config.option.markers:
config._do_configure()
tw = _pytest.config.create_terminal_writer(config)
for line in config.getini("markers"):
name, rest = line.split(":", 1)
tw.write("@pytest.mark.%s:" % name, bold=True)
tw.line(rest)
tw.line()
config._ensure_unconfigure()
return 0
pytest_cmdline_main.tryfirst = True
def pytest_collection_modifyitems(items, config):
keywordexpr = config.option.keyword.lstrip()
matchexpr = config.option.markexpr
if not keywordexpr and not matchexpr:
return
# pytest used to allow "-" for negating
# but today we just allow "-" at the beginning, use "not" instead
# we probably remove "-" alltogether soon
if keywordexpr.startswith("-"):
keywordexpr = "not " + keywordexpr[1:]
selectuntil = False
if keywordexpr[-1:] == ":":
selectuntil = True
keywordexpr = keywordexpr[:-1]
remaining = []
deselected = []
for colitem in items:
if keywordexpr and not matchkeyword(colitem, keywordexpr):
deselected.append(colitem)
else:
if selectuntil:
keywordexpr = None
if matchexpr:
if not matchmark(colitem, matchexpr):
deselected.append(colitem)
continue
remaining.append(colitem)
if deselected:
config.hook.pytest_deselected(items=deselected)
items[:] = remaining
class MarkMapping:
"""Provides a local mapping for markers where item access
resolves to True if the marker is present. """
def __init__(self, keywords):
mymarks = set()
for key, value in keywords.items():
if isinstance(value, MarkInfo) or isinstance(value, MarkDecorator):
mymarks.add(key)
self._mymarks = mymarks
def __getitem__(self, name):
return name in self._mymarks
class KeywordMapping:
"""Provides a local mapping for keywords.
Given a list of names, map any substring of one of these names to True.
"""
def __init__(self, names):
self._names = names
def __getitem__(self, subname):
for name in self._names:
if subname in name:
return True
return False
def matchmark(colitem, markexpr):
"""Tries to match on any marker names, attached to the given colitem."""
return eval(markexpr, {}, MarkMapping(colitem.keywords))
def matchkeyword(colitem, keywordexpr):
"""Tries to match given keyword expression to given collector item.
Will match on the name of colitem, including the names of its parents.
Only matches names of items which are either a :class:`Class` or a
:class:`Function`.
Additionally, matches on names in the 'extra_keyword_matches' set of
any item, as well as names directly assigned to test functions.
"""
mapped_names = set()
# Add the names of the current item and any parent items
import pytest
for item in colitem.listchain():
if not isinstance(item, pytest.Instance):
mapped_names.add(item.name)
# Add the names added as extra keywords to current or parent items
for name in colitem.listextrakeywords():
mapped_names.add(name)
# Add the names attached to the current function through direct assignment
if hasattr(colitem, 'function'):
for name in colitem.function.__dict__:
mapped_names.add(name)
mapping = KeywordMapping(mapped_names)
if " " not in keywordexpr:
# special case to allow for simple "-k pass" and "-k 1.3"
return mapping[keywordexpr]
elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
return not mapping[keywordexpr[4:]]
return eval(keywordexpr, {}, mapping)
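# Worked example (illustrative, not part of the original module): with
# mapped_names {"test_foo.py", "TestBar", "test_method"}, the expression
# "method and not foo" is evaluated against KeywordMapping as
#     mapping["method"] and not mapping["foo"]  ->  True and not True
# which is False, because "foo" is a substring of "test_foo.py".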
def pytest_configure(config):
import pytest
if config.option.strict:
pytest.mark._config = config
class MarkGenerator:
""" Factory for :class:`MarkDecorator` objects - exposed as
a ``pytest.mark`` singleton instance. Example::
import pytest
@pytest.mark.slowtest
def test_function():
pass
will set a 'slowtest' :class:`MarkInfo` object
on the ``test_function`` object. """
def __getattr__(self, name):
if name[0] == "_":
raise AttributeError("Marker name must NOT start with underscore")
if hasattr(self, '_config'):
self._check(name)
return MarkDecorator(name)
def _check(self, name):
try:
if name in self._markers:
return
except AttributeError:
pass
self._markers = l = set()
for line in self._config.getini("markers"):
beginning = line.split(":", 1)
x = beginning[0].split("(", 1)[0]
l.add(x)
if name not in self._markers:
raise AttributeError("%r not a registered marker" % (name,))
def istestfunc(func):
return hasattr(func, "__call__") and \
getattr(func, "__name__", "<lambda>") != "<lambda>"
class MarkDecorator:
""" A decorator for test functions and test classes. When applied
it will create :class:`MarkInfo` objects which may be
:ref:`retrieved by hooks as item keywords <excontrolskip>`.
MarkDecorator instances are often created like this::
mark1 = pytest.mark.NAME # simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value) # parametrized MarkDecorator
and can then be applied as decorators to test functions::
@mark2
def test_function():
pass
When a MarkDecorator instance is called it does the following:
1. If called with a single class as its only positional argument and no
additional keyword arguments, it attaches itself to the class so it
gets applied automatically to all test cases found in that class.
2. If called with a single function as its only positional argument and
no additional keyword arguments, it attaches a MarkInfo object to the
function, containing all the arguments already stored internally in
the MarkDecorator.
3. When called in any other case, it performs a 'fake construction' call,
i.e. it returns a new MarkDecorator instance with the original
MarkDecorator's content updated with the arguments passed to this
call.
Note: The rules above prevent MarkDecorator objects from storing only a
single function or class reference as their positional argument with no
additional keyword or positional arguments.
"""
def __init__(self, name, args=None, kwargs=None):
self.name = name
self.args = args or ()
self.kwargs = kwargs or {}
@property
def markname(self):
return self.name # for backward-compat (2.4.1 had this attr)
def __repr__(self):
d = self.__dict__.copy()
name = d.pop('name')
return "<MarkDecorator %r %r>" % (name, d)
def __call__(self, *args, **kwargs):
""" if passed a single callable argument: decorate it with mark info.
otherwise add *args/**kwargs in-place to mark information. """
if args and not kwargs:
func = args[0]
is_class = inspect.isclass(func)
if len(args) == 1 and (istestfunc(func) or is_class):
if is_class:
if hasattr(func, 'pytestmark'):
mark_list = func.pytestmark
if not isinstance(mark_list, list):
mark_list = [mark_list]
# always work on a copy to avoid updating pytestmark
# from a superclass by accident
mark_list = mark_list + [self]
func.pytestmark = mark_list
else:
func.pytestmark = [self]
else:
holder = getattr(func, self.name, None)
if holder is None:
holder = MarkInfo(
self.name, self.args, self.kwargs
)
setattr(func, self.name, holder)
else:
holder.add(self.args, self.kwargs)
return func
kw = self.kwargs.copy()
kw.update(kwargs)
args = self.args + args
return self.__class__(self.name, args=args, kwargs=kw)
def extract_argvalue(maybe_marked_args):
# TODO: incorrect mark data, the old code wasn't able to collect lists
# individual parametrized argument sets can be wrapped in a series
# of markers in which case we unwrap the values and apply the mark
# at Function init
newmarks = {}
argval = maybe_marked_args
while isinstance(argval, MarkDecorator):
newmark = MarkDecorator(argval.markname,
argval.args[:-1], argval.kwargs)
newmarks[newmark.markname] = newmark
argval = argval.args[-1]
return argval, newmarks
class MarkInfo:
""" Marking object created by :class:`MarkDecorator` instances. """
def __init__(self, name, args, kwargs):
#: name of attribute
self.name = name
#: positional argument list, empty if none specified
self.args = args
#: keyword argument dictionary, empty if nothing specified
self.kwargs = kwargs.copy()
self._arglist = [(args, kwargs.copy())]
def __repr__(self):
return "<MarkInfo %r args=%r kwargs=%r>" % (
self.name, self.args, self.kwargs
)
def add(self, args, kwargs):
""" add a MarkInfo with the given args and kwargs. """
self._arglist.append((args, kwargs))
self.args += args
self.kwargs.update(kwargs)
def __iter__(self):
""" yield MarkInfo objects each relating to a marking-call. """
for args, kwargs in self._arglist:
yield MarkInfo(self.name, args, kwargs)

View File

@@ -1,258 +0,0 @@
""" monkeypatching and mocking functionality. """
import os
import sys
import re
from py.builtin import _basestring
import pytest
RE_IMPORT_ERROR_NAME = re.compile("^No module named (.*)$")
@pytest.fixture
def monkeypatch(request):
"""The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ::
monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
monkeypatch.delitem(obj, name, raising=True)
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)
All modifications will be undone after the requesting
test function or fixture has finished. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
"""
mpatch = MonkeyPatch()
request.addfinalizer(mpatch.undo)
return mpatch
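# Illustrative usage sketch (not part of the original module); the test below
# is hypothetical and only demonstrates the helpers, all of which are undone
# automatically once the requesting test finishes.
def _example_test_using_monkeypatch(monkeypatch):
    monkeypatch.setenv("EXAMPLE_VAR", "1")           # removed again afterwards
    monkeypatch.setattr(os, "getcwd", lambda: "/")   # os is imported above
    assert os.getcwd() == "/" and os.environ["EXAMPLE_VAR"] == "1"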
def resolve(name):
# simplified from zope.dottedname
parts = name.split('.')
used = parts.pop(0)
found = __import__(used)
for part in parts:
used += '.' + part
try:
found = getattr(found, part)
except AttributeError:
pass
else:
continue
# we use explicit un-nesting of the handling block in order
# to avoid nested exceptions on python 3
try:
__import__(used)
except ImportError as ex:
# str is used for py2 vs py3
expected = str(ex).split()[-1]
if expected == used:
raise
else:
raise ImportError(
'import error in %s: %s' % (used, ex)
)
found = annotated_getattr(found, part, used)
return found
def annotated_getattr(obj, name, ann):
try:
obj = getattr(obj, name)
except AttributeError:
raise AttributeError(
'%r object at %s has no attribute %r' % (
type(obj).__name__, ann, name
)
)
return obj
def derive_importpath(import_path, raising):
if not isinstance(import_path, _basestring) or "." not in import_path:
raise TypeError("must be absolute import path string, not %r" %
(import_path,))
module, attr = import_path.rsplit('.', 1)
target = resolve(module)
if raising:
annotated_getattr(target, attr, ann=module)
return attr, target
class Notset:
def __repr__(self):
return "<notset>"
notset = Notset()
class MonkeyPatch:
""" Object returned by the ``monkeypatch`` fixture keeping a record of setattr/item/env/syspath changes.
"""
def __init__(self):
self._setattr = []
self._setitem = []
self._cwd = None
self._savesyspath = None
def setattr(self, target, name, value=notset, raising=True):
""" Set attribute value on target, memorizing the old value.
By default raise AttributeError if the attribute did not exist.
For convenience you can specify a string as ``target`` which
will be interpreted as a dotted import path, with the last part
being the attribute name. Example:
``monkeypatch.setattr("os.getcwd", lambda x: "/")``
would set the ``getcwd`` function of the ``os`` module.
The ``raising`` value determines if the setattr should fail
if the attribute is not already present (defaults to True
which means it will raise).
"""
__tracebackhide__ = True
import inspect
if value is notset:
if not isinstance(target, _basestring):
raise TypeError("use setattr(target, name, value) or "
"setattr(target, value) with target being a dotted "
"import string")
value = name
name, target = derive_importpath(target, raising)
oldval = getattr(target, name, notset)
if raising and oldval is notset:
raise AttributeError("%r has no attribute %r" % (target, name))
# avoid class descriptors like staticmethod/classmethod
if inspect.isclass(target):
oldval = target.__dict__.get(name, notset)
self._setattr.append((target, name, oldval))
setattr(target, name, value)
def delattr(self, target, name=notset, raising=True):
""" Delete attribute ``name`` from ``target``, by default raise
AttributeError if the attribute did not previously exist.
If no ``name`` is specified and ``target`` is a string
it will be interpreted as a dotted import path with the
last part being the attribute name.
If ``raising`` is set to False, no exception will be raised if the
attribute is missing.
"""
__tracebackhide__ = True
if name is notset:
if not isinstance(target, _basestring):
raise TypeError("use delattr(target, name) or "
"delattr(target) with target being a dotted "
"import string")
name, target = derive_importpath(target, raising)
if not hasattr(target, name):
if raising:
raise AttributeError(name)
else:
self._setattr.append((target, name, getattr(target, name, notset)))
delattr(target, name)
def setitem(self, dic, name, value):
""" Set dictionary entry ``name`` to value. """
self._setitem.append((dic, name, dic.get(name, notset)))
dic[name] = value
def delitem(self, dic, name, raising=True):
""" Delete ``name`` from dict. Raise KeyError if it doesn't exist.
If ``raising`` is set to False, no exception will be raised if the
key is missing.
"""
if name not in dic:
if raising:
raise KeyError(name)
else:
self._setitem.append((dic, name, dic.get(name, notset)))
del dic[name]
def setenv(self, name, value, prepend=None):
""" Set environment variable ``name`` to ``value``. If ``prepend``
is a character, read the current environment variable value
and prepend the ``value`` adjoined with the ``prepend`` character."""
value = str(value)
if prepend and name in os.environ:
value = value + prepend + os.environ[name]
self.setitem(os.environ, name, value)
def delenv(self, name, raising=True):
""" Delete ``name`` from the environment. Raise KeyError it does not
exist.
If ``raising`` is set to False, no exception will be raised if the
environment variable is missing.
"""
self.delitem(os.environ, name, raising=raising)
def syspath_prepend(self, path):
""" Prepend ``path`` to ``sys.path`` list of import locations. """
if self._savesyspath is None:
self._savesyspath = sys.path[:]
sys.path.insert(0, str(path))
def chdir(self, path):
""" Change the current working directory to the specified path.
Path can be a string or a py.path.local object.
"""
if self._cwd is None:
self._cwd = os.getcwd()
if hasattr(path, "chdir"):
path.chdir()
else:
os.chdir(path)
def undo(self):
""" Undo previous changes. This call consumes the
undo stack. Calling it a second time has no effect unless
you do more monkeypatching after the undo call.
There is generally no need to call `undo()`, since it is
called automatically during tear-down.
Note that the same `monkeypatch` fixture is used across a
single test function invocation. If `monkeypatch` is used both by
the test function itself and one of the test fixtures,
calling `undo()` will undo all of the changes made in
both functions.
"""
for obj, name, value in reversed(self._setattr):
if value is not notset:
setattr(obj, name, value)
else:
delattr(obj, name)
self._setattr[:] = []
for dictionary, name, value in reversed(self._setitem):
if value is notset:
try:
del dictionary[name]
except KeyError:
pass # was already deleted, so we have the desired state
else:
dictionary[name] = value
self._setitem[:] = []
if self._savesyspath is not None:
sys.path[:] = self._savesyspath
self._savesyspath = None
if self._cwd is not None:
os.chdir(self._cwd)
self._cwd = None

View File

@@ -1,71 +0,0 @@
""" run test suites written for nose. """
import sys
import py
import pytest
from _pytest import unittest
def get_skip_exceptions():
skip_classes = set()
for module_name in ('unittest', 'unittest2', 'nose'):
mod = sys.modules.get(module_name)
if hasattr(mod, 'SkipTest'):
skip_classes.add(mod.SkipTest)
return tuple(skip_classes)
def pytest_runtest_makereport(item, call):
if call.excinfo and call.excinfo.errisinstance(get_skip_exceptions()):
# let's substitute the excinfo with a pytest.skip one
call2 = call.__class__(lambda:
pytest.skip(str(call.excinfo.value)), call.when)
call.excinfo = call2.excinfo
@pytest.hookimpl(trylast=True)
def pytest_runtest_setup(item):
if is_potential_nosetest(item):
if isinstance(item.parent, pytest.Generator):
gen = item.parent
if not hasattr(gen, '_nosegensetup'):
call_optional(gen.obj, 'setup')
if isinstance(gen.parent, pytest.Instance):
call_optional(gen.parent.obj, 'setup')
gen._nosegensetup = True
if not call_optional(item.obj, 'setup'):
# call module level setup if there is no object level one
call_optional(item.parent.obj, 'setup')
#XXX this implies we only call teardown when setup worked
item.session._setupstate.addfinalizer((lambda: teardown_nose(item)), item)
def teardown_nose(item):
if is_potential_nosetest(item):
if not call_optional(item.obj, 'teardown'):
call_optional(item.parent.obj, 'teardown')
#if hasattr(item.parent, '_nosegensetup'):
# #call_optional(item._nosegensetup, 'teardown')
# del item.parent._nosegensetup
def pytest_make_collect_report(collector):
if isinstance(collector, pytest.Generator):
call_optional(collector.obj, 'setup')
def is_potential_nosetest(item):
# extra check needed since we do not do nose style setup/teardown
# on direct unittest style classes
return isinstance(item, pytest.Function) and \
not isinstance(item, unittest.TestCaseFunction)
def call_optional(obj, name):
method = getattr(obj, name, None)
isfixture = hasattr(method, "_pytestfixturefunction")
if method is not None and not isfixture and py.builtin.callable(method):
# If there are any problems, allow the exception to propagate
# rather than silently ignoring it
method()
return True

View File

@@ -1,98 +0,0 @@
""" submit failure or test session information to a pastebin service. """
import pytest
import sys
import tempfile
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting")
group._addoption('--pastebin', metavar="mode",
action='store', dest="pastebin", default=None,
choices=['failed', 'all'],
help="send failed|all info to bpaste.net pastebin service.")
@pytest.hookimpl(trylast=True)
def pytest_configure(config):
import py
if config.option.pastebin == "all":
tr = config.pluginmanager.getplugin('terminalreporter')
# if no terminal reporter plugin is present, nothing we can do here;
# this can happen when this function executes in a slave node
# when using pytest-xdist, for example
if tr is not None:
# pastebin file will be utf-8 encoded binary file
config._pastebinfile = tempfile.TemporaryFile('w+b')
oldwrite = tr._tw.write
def tee_write(s, **kwargs):
oldwrite(s, **kwargs)
if py.builtin._istext(s):
s = s.encode('utf-8')
config._pastebinfile.write(s)
tr._tw.write = tee_write
def pytest_unconfigure(config):
if hasattr(config, '_pastebinfile'):
# get terminal contents and delete file
config._pastebinfile.seek(0)
sessionlog = config._pastebinfile.read()
config._pastebinfile.close()
del config._pastebinfile
# undo our patching in the terminal reporter
tr = config.pluginmanager.getplugin('terminalreporter')
del tr._tw.__dict__['write']
# write summary
tr.write_sep("=", "Sending information to Paste Service")
pastebinurl = create_new_paste(sessionlog)
tr.write_line("pastebin session-log: %s\n" % pastebinurl)
def create_new_paste(contents):
"""
Creates a new paste using bpaste.net service.
:contents: paste contents as utf-8 encoded bytes
:returns: url to the pasted contents
"""
import re
if sys.version_info < (3, 0):
from urllib import urlopen, urlencode
else:
from urllib.request import urlopen
from urllib.parse import urlencode
params = {
'code': contents,
'lexer': 'python3' if sys.version_info[0] == 3 else 'python',
'expiry': '1week',
}
url = 'https://bpaste.net'
response = urlopen(url, data=urlencode(params).encode('ascii')).read()
m = re.search(r'href="/raw/(\w+)"', response.decode('utf-8'))
if m:
return '%s/show/%s' % (url, m.group(1))
else:
return 'bad response: ' + response
def pytest_terminal_summary(terminalreporter):
import _pytest.config
if terminalreporter.config.option.pastebin != "failed":
return
tr = terminalreporter
if 'failed' in tr.stats:
terminalreporter.write_sep("=", "Sending information to Paste Service")
for rep in terminalreporter.stats.get('failed'):
try:
msg = rep.longrepr.reprtraceback.reprentries[-1].reprfileloc
except AttributeError:
msg = tr._getfailureheadline(rep)
tw = _pytest.config.create_terminal_writer(terminalreporter.config, stringio=True)
rep.toterminal(tw)
s = tw.stringio.getvalue()
assert len(s)
pastebinurl = create_new_paste(s)
tr.write_line("%s --> %s" %(msg, pastebinurl))

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,226 +0,0 @@
""" recording warnings during test function execution. """
import inspect
import _pytest._code
import py
import sys
import warnings
import pytest
@pytest.yield_fixture
def recwarn(request):
"""Return a WarningsRecorder instance that provides these methods:
* ``pop(cls=Warning)``: pop and return the first recorded warning matching the class.
* ``clear()``: clear list of warnings
See http://docs.python.org/library/warnings.html for information
on warning categories.
"""
wrec = WarningsRecorder()
with wrec:
warnings.simplefilter('default')
yield wrec
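# Illustrative usage sketch (not part of the original module); the test below
# is hypothetical and shows how recorded warnings are inspected with pop().
def _example_test_using_recwarn(recwarn):
    warnings.warn("use the new api", DeprecationWarning)
    w = recwarn.pop(DeprecationWarning)  # first recorded match, removed from the list
    assert issubclass(w.category, DeprecationWarning)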
def pytest_namespace():
return {'deprecated_call': deprecated_call,
'warns': warns}
def deprecated_call(func=None, *args, **kwargs):
""" assert that calling ``func(*args, **kwargs)`` triggers a
``DeprecationWarning`` or ``PendingDeprecationWarning``.
This function can be used as a context manager::
>>> import warnings
>>> def api_call_v2():
... warnings.warn('use v3 of this api', DeprecationWarning)
... return 200
>>> with deprecated_call():
... assert api_call_v2() == 200
Note: we cannot use WarningsRecorder here because it is still subject
to the mechanism that prevents warnings of the same type from being
triggered twice for the same module. See #1190.
"""
if not func:
return WarningsChecker(expected_warning=DeprecationWarning)
categories = []
def warn_explicit(message, category, *args, **kwargs):
categories.append(category)
old_warn_explicit(message, category, *args, **kwargs)
def warn(message, category=None, *args, **kwargs):
if isinstance(message, Warning):
categories.append(message.__class__)
else:
categories.append(category)
old_warn(message, category, *args, **kwargs)
old_warn = warnings.warn
old_warn_explicit = warnings.warn_explicit
warnings.warn_explicit = warn_explicit
warnings.warn = warn
try:
ret = func(*args, **kwargs)
finally:
warnings.warn_explicit = old_warn_explicit
warnings.warn = old_warn
deprecation_categories = (DeprecationWarning, PendingDeprecationWarning)
if not any(issubclass(c, deprecation_categories) for c in categories):
__tracebackhide__ = True
raise AssertionError("%r did not produce DeprecationWarning" % (func,))
return ret
def warns(expected_warning, *args, **kwargs):
"""Assert that code raises a particular class of warning.
Specifically, the input @expected_warning can be a warning class or
tuple of warning classes, and the code under test must emit that warning
(if a single class) or one of those warnings (if a tuple).
This helper produces a list of ``warnings.WarningMessage`` objects,
one for each warning raised.
This function can be used as a context manager, or any of the other ways
``pytest.raises`` can be used::
>>> with warns(RuntimeWarning):
... warnings.warn("my warning", RuntimeWarning)
"""
wcheck = WarningsChecker(expected_warning)
if not args:
return wcheck
elif isinstance(args[0], str):
code, = args
assert isinstance(code, str)
frame = sys._getframe(1)
loc = frame.f_locals.copy()
loc.update(kwargs)
with wcheck:
code = _pytest._code.Source(code).compile()
py.builtin.exec_(code, frame.f_globals, loc)
else:
func = args[0]
with wcheck:
return func(*args[1:], **kwargs)
class RecordedWarning(object):
def __init__(self, message, category, filename, lineno, file, line):
self.message = message
self.category = category
self.filename = filename
self.lineno = lineno
self.file = file
self.line = line
class WarningsRecorder(object):
"""A context manager to record raised warnings.
Adapted from `warnings.catch_warnings`.
"""
def __init__(self, module=None):
self._module = sys.modules['warnings'] if module is None else module
self._entered = False
self._list = []
@property
def list(self):
"""The list of recorded warnings."""
return self._list
def __getitem__(self, i):
"""Get a recorded warning by index."""
return self._list[i]
def __iter__(self):
"""Iterate through the recorded warnings."""
return iter(self._list)
def __len__(self):
"""The number of recorded warnings."""
return len(self._list)
def pop(self, cls=Warning):
"""Pop the first recorded warning, raise exception if not exists."""
for i, w in enumerate(self._list):
if issubclass(w.category, cls):
return self._list.pop(i)
__tracebackhide__ = True
raise AssertionError("%r not found in warning list" % cls)
def clear(self):
"""Clear the list of recorded warnings."""
self._list[:] = []
def __enter__(self):
if self._entered:
__tracebackhide__ = True
raise RuntimeError("Cannot enter %r twice" % self)
self._entered = True
self._filters = self._module.filters
self._module.filters = self._filters[:]
self._showwarning = self._module.showwarning
def showwarning(message, category, filename, lineno,
file=None, line=None):
self._list.append(RecordedWarning(
message, category, filename, lineno, file, line))
# still perform old showwarning functionality
self._showwarning(
message, category, filename, lineno, file=file, line=line)
self._module.showwarning = showwarning
# allow the same warning to be raised more than once
self._module.simplefilter('always')
return self
def __exit__(self, *exc_info):
if not self._entered:
__tracebackhide__ = True
raise RuntimeError("Cannot exit %r without entering first" % self)
self._module.filters = self._filters
self._module.showwarning = self._showwarning
class WarningsChecker(WarningsRecorder):
def __init__(self, expected_warning=None, module=None):
super(WarningsChecker, self).__init__(module=module)
msg = ("exceptions must be old-style classes or "
"derived from Warning, not %s")
if isinstance(expected_warning, tuple):
for exc in expected_warning:
if not inspect.isclass(exc):
raise TypeError(msg % type(exc))
elif inspect.isclass(expected_warning):
expected_warning = (expected_warning,)
elif expected_warning is not None:
raise TypeError(msg % type(expected_warning))
self.expected_warning = expected_warning
def __exit__(self, *exc_info):
super(WarningsChecker, self).__exit__(*exc_info)
# only check if we're not currently handling an exception
if all(a is None for a in exc_info):
if self.expected_warning is not None:
if not any(r.category in self.expected_warning for r in self):
__tracebackhide__ = True
pytest.fail("DID NOT WARN")

View File

@@ -1,107 +0,0 @@
""" log machine-parseable test session result information in a plain
text file.
"""
import py
import os
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "resultlog plugin options")
group.addoption('--resultlog', '--result-log', action="store",
metavar="path", default=None,
help="DEPRECATED path for machine-readable result log.")
def pytest_configure(config):
resultlog = config.option.resultlog
# prevent opening resultlog on slave nodes (xdist)
if resultlog and not hasattr(config, 'slaveinput'):
dirname = os.path.dirname(os.path.abspath(resultlog))
if not os.path.isdir(dirname):
os.makedirs(dirname)
logfile = open(resultlog, 'w', 1) # line buffered
config._resultlog = ResultLog(config, logfile)
config.pluginmanager.register(config._resultlog)
from _pytest.deprecated import RESULT_LOG
config.warn('C1', RESULT_LOG)
def pytest_unconfigure(config):
resultlog = getattr(config, '_resultlog', None)
if resultlog:
resultlog.logfile.close()
del config._resultlog
config.pluginmanager.unregister(resultlog)
def generic_path(item):
chain = item.listchain()
gpath = [chain[0].name]
fspath = chain[0].fspath
fspart = False
for node in chain[1:]:
newfspath = node.fspath
if newfspath == fspath:
if fspart:
gpath.append(':')
fspart = False
else:
gpath.append('.')
else:
gpath.append('/')
fspart = True
name = node.name
if name[0] in '([':
gpath.pop()
gpath.append(name)
fspath = newfspath
return ''.join(gpath)
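# Worked example (illustrative, not part of the original module): for a chain
#     Session -> Module "test_x.py" -> Class "TestY" -> Instance "()" ->
#     Function "test_z"
# only the module boundary changes fspath, so the generated id looks like
#     "<rootname>/test_x.py:TestY().test_z"
# with "/" separating filesystem parts and ":" / "." separating in-file parts.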
class ResultLog(object):
def __init__(self, config, logfile):
self.config = config
self.logfile = logfile # preferably line buffered
def write_log_entry(self, testpath, lettercode, longrepr):
py.builtin.print_("%s %s" % (lettercode, testpath), file=self.logfile)
for line in longrepr.splitlines():
py.builtin.print_(" %s" % line, file=self.logfile)
def log_outcome(self, report, lettercode, longrepr):
testpath = getattr(report, 'nodeid', None)
if testpath is None:
testpath = report.fspath
self.write_log_entry(testpath, lettercode, longrepr)
def pytest_runtest_logreport(self, report):
if report.when != "call" and report.passed:
return
res = self.config.hook.pytest_report_teststatus(report=report)
code = res[1]
if code == 'x':
longrepr = str(report.longrepr)
elif code == 'X':
longrepr = ''
elif report.passed:
longrepr = ""
elif report.failed:
longrepr = str(report.longrepr)
elif report.skipped:
longrepr = str(report.longrepr[2])
self.log_outcome(report, code, longrepr)
def pytest_collectreport(self, report):
if not report.passed:
if report.failed:
code = "F"
longrepr = str(report.longrepr)
else:
assert report.skipped
code = "S"
longrepr = "%s:%d: %s" % report.longrepr
self.log_outcome(report, code, longrepr)
def pytest_internalerror(self, excrepr):
reprcrash = getattr(excrepr, 'reprcrash', None)
path = getattr(reprcrash, "path", None)
if path is None:
path = "cwd:%s" % py.path.local()
self.write_log_entry(path, '!', str(excrepr))

View File

@@ -1,578 +0,0 @@
""" basic collect and runtest protocol implementations """
import bdb
import sys
from time import time
import py
import pytest
from _pytest._code.code import TerminalRepr, ExceptionInfo
def pytest_namespace():
return {
'fail' : fail,
'skip' : skip,
'importorskip' : importorskip,
'exit' : exit,
}
#
# pytest plugin hooks
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "reporting", after="general")
group.addoption('--durations',
action="store", type=int, default=None, metavar="N",
help="show N slowest setup/test durations (N=0 for all)."),
def pytest_terminal_summary(terminalreporter):
durations = terminalreporter.config.option.durations
if durations is None:
return
tr = terminalreporter
dlist = []
for replist in tr.stats.values():
for rep in replist:
if hasattr(rep, 'duration'):
dlist.append(rep)
if not dlist:
return
dlist.sort(key=lambda x: x.duration)
dlist.reverse()
if not durations:
tr.write_sep("=", "slowest test durations")
else:
tr.write_sep("=", "slowest %s test durations" % durations)
dlist = dlist[:durations]
for rep in dlist:
nodeid = rep.nodeid.replace("::()::", "::")
tr.write_line("%02.2fs %-8s %s" %
(rep.duration, rep.when, nodeid))
def pytest_sessionstart(session):
session._setupstate = SetupState()
def pytest_sessionfinish(session):
session._setupstate.teardown_all()
class NodeInfo:
def __init__(self, location):
self.location = location
def pytest_runtest_protocol(item, nextitem):
item.ihook.pytest_runtest_logstart(
nodeid=item.nodeid, location=item.location,
)
runtestprotocol(item, nextitem=nextitem)
return True
def runtestprotocol(item, log=True, nextitem=None):
hasrequest = hasattr(item, "_request")
if hasrequest and not item._request:
item._initrequest()
rep = call_and_report(item, "setup", log)
reports = [rep]
if rep.passed:
if item.config.option.setupshow:
show_test_item(item)
if not item.config.option.setuponly:
reports.append(call_and_report(item, "call", log))
reports.append(call_and_report(item, "teardown", log,
nextitem=nextitem))
# after all teardown hooks have been called
# want funcargs and request info to go away
if hasrequest:
item._request = False
item.funcargs = None
return reports
def show_test_item(item):
"""Show test function, parameters and the fixtures of the test item."""
tw = item.config.get_terminal_writer()
tw.line()
tw.write(' ' * 8)
tw.write(item._nodeid)
used_fixtures = sorted(item._fixtureinfo.name2fixturedefs.keys())
if used_fixtures:
tw.write(' (fixtures used: {0})'.format(', '.join(used_fixtures)))
def pytest_runtest_setup(item):
item.session._setupstate.prepare(item)
def pytest_runtest_call(item):
try:
item.runtest()
except Exception:
# Store trace info to allow postmortem debugging
type, value, tb = sys.exc_info()
tb = tb.tb_next # Skip *this* frame
sys.last_type = type
sys.last_value = value
sys.last_traceback = tb
del tb # Get rid of it in this namespace
raise
def pytest_runtest_teardown(item, nextitem):
item.session._setupstate.teardown_exact(item, nextitem)
def pytest_report_teststatus(report):
if report.when in ("setup", "teardown"):
if report.failed:
# category, shortletter, verbose-word
return "error", "E", "ERROR"
elif report.skipped:
return "skipped", "s", "SKIPPED"
else:
return "", "", ""
#
# Implementation
def call_and_report(item, when, log=True, **kwds):
call = call_runtest_hook(item, when, **kwds)
hook = item.ihook
report = hook.pytest_runtest_makereport(item=item, call=call)
if log:
hook.pytest_runtest_logreport(report=report)
if check_interactive_exception(call, report):
hook.pytest_exception_interact(node=item, call=call, report=report)
return report
def check_interactive_exception(call, report):
return call.excinfo and not (
hasattr(report, "wasxfail") or
call.excinfo.errisinstance(skip.Exception) or
call.excinfo.errisinstance(bdb.BdbQuit))
def call_runtest_hook(item, when, **kwds):
hookname = "pytest_runtest_" + when
ihook = getattr(item.ihook, hookname)
return CallInfo(lambda: ihook(item=item, **kwds), when=when)
class CallInfo:
""" Result/Exception info a function invocation. """
#: None or ExceptionInfo object.
excinfo = None
def __init__(self, func, when):
#: context of invocation: one of "setup", "call",
#: "teardown", "memocollect"
self.when = when
self.start = time()
try:
self.result = func()
except KeyboardInterrupt:
self.stop = time()
raise
except:
self.excinfo = ExceptionInfo()
self.stop = time()
def __repr__(self):
if self.excinfo:
status = "exception: %s" % str(self.excinfo.value)
else:
status = "result: %r" % (self.result,)
return "<CallInfo when=%r %s>" % (self.when, status)
def getslaveinfoline(node):
try:
return node._slaveinfocache
except AttributeError:
d = node.slaveinfo
ver = "%s.%s.%s" % d['version_info'][:3]
node._slaveinfocache = s = "[%s] %s -- Python %s %s" % (
d['id'], d['sysplatform'], ver, d['executable'])
return s
class BaseReport(object):
def __init__(self, **kw):
self.__dict__.update(kw)
def toterminal(self, out):
if hasattr(self, 'node'):
out.line(getslaveinfoline(self.node))
longrepr = self.longrepr
if longrepr is None:
return
if hasattr(longrepr, 'toterminal'):
longrepr.toterminal(out)
else:
try:
out.line(longrepr)
except UnicodeEncodeError:
out.line("<unprintable longrepr>")
def get_sections(self, prefix):
for name, content in self.sections:
if name.startswith(prefix):
yield prefix, content
@property
def longreprtext(self):
"""
Read-only property that returns the full string representation
of ``longrepr``.
.. versionadded:: 3.0
"""
tw = py.io.TerminalWriter(stringio=True)
tw.hasmarkup = False
self.toterminal(tw)
exc = tw.stringio.getvalue()
return exc.strip()
@property
def capstdout(self):
"""Return captured text from stdout, if capturing is enabled
.. versionadded:: 3.0
"""
return ''.join(content for (prefix, content) in self.get_sections('Captured stdout'))
@property
def capstderr(self):
"""Return captured text from stderr, if capturing is enabled
.. versionadded:: 3.0
"""
return ''.join(content for (prefix, content) in self.get_sections('Captured stderr'))
passed = property(lambda x: x.outcome == "passed")
failed = property(lambda x: x.outcome == "failed")
skipped = property(lambda x: x.outcome == "skipped")
@property
def fspath(self):
return self.nodeid.split("::")[0]
def pytest_runtest_makereport(item, call):
when = call.when
duration = call.stop-call.start
keywords = dict([(x,1) for x in item.keywords])
excinfo = call.excinfo
sections = []
if not call.excinfo:
outcome = "passed"
longrepr = None
else:
if not isinstance(excinfo, ExceptionInfo):
outcome = "failed"
longrepr = excinfo
elif excinfo.errisinstance(pytest.skip.Exception):
outcome = "skipped"
r = excinfo._getreprcrash()
longrepr = (str(r.path), r.lineno, r.message)
else:
outcome = "failed"
if call.when == "call":
longrepr = item.repr_failure(excinfo)
else: # exception in setup or teardown
longrepr = item._repr_failure_py(excinfo,
style=item.config.option.tbstyle)
for rwhen, key, content in item._report_sections:
sections.append(("Captured %s %s" %(key, rwhen), content))
return TestReport(item.nodeid, item.location,
keywords, outcome, longrepr, when,
sections, duration)
class TestReport(BaseReport):
""" Basic test report object (also used for setup and teardown calls if
they fail).
"""
def __init__(self, nodeid, location, keywords, outcome,
longrepr, when, sections=(), duration=0, **extra):
#: normalized collection node id
self.nodeid = nodeid
#: a (filesystempath, lineno, domaininfo) tuple indicating the
#: actual location of a test item - it might be different from the
#: collected one e.g. if a method is inherited from a different module.
self.location = location
#: a name -> value dictionary containing all keywords and
#: markers associated with a test invocation.
self.keywords = keywords
#: test outcome, always one of "passed", "failed", "skipped".
self.outcome = outcome
#: None or a failure representation.
self.longrepr = longrepr
#: one of 'setup', 'call', 'teardown' to indicate runtest phase.
self.when = when
        #: list of pairs ``(str, str)`` of extra information which needs to
        #: be marshallable. Used by pytest to add captured text
#: from ``stdout`` and ``stderr``, but may be used by other plugins
#: to add arbitrary information to reports.
self.sections = list(sections)
#: time it took to run just the test
self.duration = duration
self.__dict__.update(extra)
def __repr__(self):
return "<TestReport %r when=%r outcome=%r>" % (
self.nodeid, self.when, self.outcome)
class TeardownErrorReport(BaseReport):
outcome = "failed"
when = "teardown"
def __init__(self, longrepr, **extra):
self.longrepr = longrepr
self.sections = []
self.__dict__.update(extra)
def pytest_make_collect_report(collector):
call = CallInfo(collector._memocollect, "memocollect")
longrepr = None
if not call.excinfo:
outcome = "passed"
else:
from _pytest import nose
skip_exceptions = (Skipped,) + nose.get_skip_exceptions()
if call.excinfo.errisinstance(skip_exceptions):
outcome = "skipped"
r = collector._repr_failure_py(call.excinfo, "line").reprcrash
longrepr = (str(r.path), r.lineno, r.message)
else:
outcome = "failed"
errorinfo = collector.repr_failure(call.excinfo)
if not hasattr(errorinfo, "toterminal"):
errorinfo = CollectErrorRepr(errorinfo)
longrepr = errorinfo
rep = CollectReport(collector.nodeid, outcome, longrepr,
getattr(call, 'result', None))
rep.call = call # see collect_one_node
return rep
class CollectReport(BaseReport):
def __init__(self, nodeid, outcome, longrepr, result,
sections=(), **extra):
self.nodeid = nodeid
self.outcome = outcome
self.longrepr = longrepr
self.result = result or []
self.sections = list(sections)
self.__dict__.update(extra)
@property
def location(self):
return (self.fspath, None, self.fspath)
def __repr__(self):
return "<CollectReport %r lenresult=%s outcome=%r>" % (
self.nodeid, len(self.result), self.outcome)
class CollectErrorRepr(TerminalRepr):
def __init__(self, msg):
self.longrepr = msg
def toterminal(self, out):
out.line(self.longrepr, red=True)
class SetupState(object):
""" shared state for setting up/tearing down test items or collectors. """
def __init__(self):
self.stack = []
self._finalizers = {}
def addfinalizer(self, finalizer, colitem):
""" attach a finalizer to the given colitem.
if colitem is None, this will add a finalizer that
is called at the end of teardown_all().
"""
assert colitem and not isinstance(colitem, tuple)
assert py.builtin.callable(finalizer)
#assert colitem in self.stack # some unit tests don't setup stack :/
self._finalizers.setdefault(colitem, []).append(finalizer)
def _pop_and_teardown(self):
colitem = self.stack.pop()
self._teardown_with_finalization(colitem)
def _callfinalizers(self, colitem):
finalizers = self._finalizers.pop(colitem, None)
exc = None
while finalizers:
fin = finalizers.pop()
try:
fin()
except Exception:
# XXX Only first exception will be seen by user,
# ideally all should be reported.
if exc is None:
exc = sys.exc_info()
if exc:
py.builtin._reraise(*exc)
def _teardown_with_finalization(self, colitem):
self._callfinalizers(colitem)
if hasattr(colitem, "teardown"):
colitem.teardown()
for colitem in self._finalizers:
assert colitem is None or colitem in self.stack \
or isinstance(colitem, tuple)
def teardown_all(self):
while self.stack:
self._pop_and_teardown()
for key in list(self._finalizers):
self._teardown_with_finalization(key)
assert not self._finalizers
def teardown_exact(self, item, nextitem):
needed_collectors = nextitem and nextitem.listchain() or []
self._teardown_towards(needed_collectors)
def _teardown_towards(self, needed_collectors):
while self.stack:
if self.stack == needed_collectors[:len(self.stack)]:
break
self._pop_and_teardown()
def prepare(self, colitem):
""" setup objects along the collector chain to the test-method
and teardown previously setup objects."""
needed_collectors = colitem.listchain()
self._teardown_towards(needed_collectors)
# check if the last collection node has raised an error
for col in self.stack:
if hasattr(col, '_prepare_exc'):
py.builtin._reraise(*col._prepare_exc)
for col in needed_collectors[len(self.stack):]:
self.stack.append(col)
try:
col.setup()
except Exception:
col._prepare_exc = sys.exc_info()
raise
def collect_one_node(collector):
ihook = collector.ihook
ihook.pytest_collectstart(collector=collector)
rep = ihook.pytest_make_collect_report(collector=collector)
call = rep.__dict__.pop("call", None)
if call and check_interactive_exception(call, rep):
ihook.pytest_exception_interact(node=collector, call=call, report=rep)
return rep
# =============================================================
# Test OutcomeExceptions and helpers for creating them.
class OutcomeException(Exception):
""" OutcomeException and its subclass instances indicate and
contain info about test and collection outcomes.
"""
def __init__(self, msg=None, pytrace=True):
Exception.__init__(self, msg)
self.msg = msg
self.pytrace = pytrace
def __repr__(self):
if self.msg:
val = self.msg
if isinstance(val, bytes):
val = py._builtin._totext(val, errors='replace')
return val
return "<%s instance>" %(self.__class__.__name__,)
__str__ = __repr__
class Skipped(OutcomeException):
    # XXX hackish: on Python 3 we pretend to live in the builtins
    # in order to make the Skipped exception print shorter/nicer
__module__ = 'builtins'
def __init__(self, msg=None, pytrace=True, allow_module_level=False):
OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
self.allow_module_level = allow_module_level
class Failed(OutcomeException):
""" raised from an explicit call to pytest.fail() """
__module__ = 'builtins'
class Exit(KeyboardInterrupt):
""" raised for immediate program exits (no tracebacks/summaries)"""
def __init__(self, msg="unknown reason"):
self.msg = msg
KeyboardInterrupt.__init__(self, msg)
# exposed helper methods
def exit(msg):
""" exit testing process as if KeyboardInterrupt was triggered. """
__tracebackhide__ = True
raise Exit(msg)
exit.Exception = Exit
def skip(msg=""):
""" skip an executing test with the given message. Note: it's usually
better to use the pytest.mark.skipif marker to declare a test to be
skipped under certain conditions like mismatching platforms or
dependencies. See the pytest_skipping plugin for details.
"""
__tracebackhide__ = True
raise Skipped(msg=msg)
skip.Exception = Skipped
def fail(msg="", pytrace=True):
""" explicitly fail an currently-executing test with the given Message.
:arg pytrace: if false the msg represents the full failure information
and no python traceback will be reported.
"""
__tracebackhide__ = True
raise Failed(msg=msg, pytrace=pytrace)
fail.Exception = Failed
def importorskip(modname, minversion=None):
""" return imported module if it has at least "minversion" as its
__version__ attribute. If no minversion is specified the a skip
is only triggered if the module can not be imported.
"""
__tracebackhide__ = True
compile(modname, '', 'eval') # to catch syntaxerrors
should_skip = False
try:
__import__(modname)
except ImportError:
        # Do not raise chained exception here (#1485)
should_skip = True
if should_skip:
raise Skipped("could not import %r" %(modname,), allow_module_level=True)
mod = sys.modules[modname]
if minversion is None:
return mod
verattr = getattr(mod, '__version__', None)
if minversion is not None:
try:
from pkg_resources import parse_version as pv
except ImportError:
raise Skipped("we have a required version for %r but can not import "
"pkg_resources to parse version strings." % (modname,),
allow_module_level=True)
if verattr is None or pv(verattr) < pv(minversion):
raise Skipped("module %r has __version__ %r, required is: %r" %(
modname, verattr, minversion), allow_module_level=True)
return mod
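
A hedged usage sketch for the outcome helpers defined above; the module name "numpy" and minversion "1.10" are placeholders for any pair.

```
# skip/importorskip in a test module; names are placeholders only.
import pytest

np = pytest.importorskip("numpy", minversion="1.10")  # skips module if absent

def test_mean():
    if not hasattr(np, "mean"):
        pytest.skip("numpy.mean not available on this build")
    assert np.mean([1, 2, 3]) == 2
```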


@@ -1,72 +0,0 @@
import pytest
import sys
def pytest_addoption(parser):
group = parser.getgroup("debugconfig")
group.addoption('--setuponly', '--setup-only', action="store_true",
help="only setup fixtures, do not execute tests.")
group.addoption('--setupshow', '--setup-show', action="store_true",
help="show setup of fixtures while executing tests.")
@pytest.hookimpl(hookwrapper=True)
def pytest_fixture_setup(fixturedef, request):
yield
config = request.config
if config.option.setupshow:
if hasattr(request, 'param'):
# Save the fixture parameter so ._show_fixture_action() can
# display it now and during the teardown (in .finish()).
if fixturedef.ids:
if callable(fixturedef.ids):
fixturedef.cached_param = fixturedef.ids(request.param)
else:
fixturedef.cached_param = fixturedef.ids[
request.param_index]
else:
fixturedef.cached_param = request.param
_show_fixture_action(fixturedef, 'SETUP')
def pytest_fixture_post_finalizer(fixturedef):
if hasattr(fixturedef, "cached_result"):
config = fixturedef._fixturemanager.config
if config.option.setupshow:
_show_fixture_action(fixturedef, 'TEARDOWN')
if hasattr(fixturedef, "cached_param"):
del fixturedef.cached_param
def _show_fixture_action(fixturedef, msg):
config = fixturedef._fixturemanager.config
capman = config.pluginmanager.getplugin('capturemanager')
if capman:
out, err = capman.suspendcapture()
tw = config.get_terminal_writer()
tw.line()
tw.write(' ' * 2 * fixturedef.scopenum)
tw.write('{step} {scope} {fixture}'.format(
step=msg.ljust(8), # align the output to TEARDOWN
scope=fixturedef.scope[0].upper(),
fixture=fixturedef.argname))
if msg == 'SETUP':
deps = sorted(arg for arg in fixturedef.argnames if arg != 'request')
if deps:
tw.write(' (fixtures used: {0})'.format(', '.join(deps)))
if hasattr(fixturedef, 'cached_param'):
tw.write('[{0}]'.format(fixturedef.cached_param))
if capman:
capman.resumecapture()
sys.stdout.write(out)
sys.stderr.write(err)
@pytest.hookimpl(tryfirst=True)
def pytest_cmdline_main(config):
if config.option.setuponly:
config.option.setupshow = True


@@ -1,23 +0,0 @@
import pytest
def pytest_addoption(parser):
group = parser.getgroup("debugconfig")
group.addoption('--setupplan', '--setup-plan', action="store_true",
help="show what fixtures and tests would be executed but "
"don't execute anything.")
@pytest.hookimpl(tryfirst=True)
def pytest_fixture_setup(fixturedef, request):
# Will return a dummy fixture if the setuponly option is provided.
if request.config.option.setupplan:
fixturedef.cached_result = (None, None, None)
return fixturedef.cached_result
@pytest.hookimpl(tryfirst=True)
def pytest_cmdline_main(config):
if config.option.setupplan:
config.option.setuponly = True
config.option.setupshow = True
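
Both debugconfig options above are easiest to see end to end from Python. A sketch, assuming a hypothetical test file named test_example.py:

```
# pytest.main accepts the same flags as the command line.
import pytest

# Run tests, printing a SETUP/TEARDOWN line per fixture:
pytest.main(["--setup-show", "test_example.py"])

# Print the same fixture plan without executing anything:
pytest.main(["--setup-plan", "test_example.py"])
```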


@@ -1,375 +0,0 @@
""" support for skip/xfail functions and markers. """
import os
import sys
import traceback
import py
import pytest
from _pytest.mark import MarkInfo, MarkDecorator
def pytest_addoption(parser):
group = parser.getgroup("general")
group.addoption('--runxfail',
action="store_true", dest="runxfail", default=False,
help="run tests even if they are marked xfail")
parser.addini("xfail_strict", "default for the strict parameter of xfail "
"markers when not given explicitly (default: "
"False)",
default=False,
type="bool")
def pytest_configure(config):
if config.option.runxfail:
old = pytest.xfail
config._cleanup.append(lambda: setattr(pytest, "xfail", old))
def nop(*args, **kwargs):
pass
nop.Exception = XFailed
setattr(pytest, "xfail", nop)
config.addinivalue_line("markers",
"skip(reason=None): skip the given test function with an optional reason. "
"Example: skip(reason=\"no way of currently testing this\") skips the "
"test."
)
config.addinivalue_line("markers",
"skipif(condition): skip the given test function if eval(condition) "
"results in a True value. Evaluation happens within the "
"module global context. Example: skipif('sys.platform == \"win32\"') "
"skips the test if we are on the win32 platform. see "
"http://pytest.org/latest/skipping.html"
)
config.addinivalue_line("markers",
"xfail(condition, reason=None, run=True, raises=None, strict=False): "
"mark the the test function as an expected failure if eval(condition) "
"has a True value. Optionally specify a reason for better reporting "
"and run=False if you don't even want to execute the test function. "
"If only specific exception(s) are expected, you can list them in "
"raises, and if the test fails in other ways, it will be reported as "
"a true failure. See http://pytest.org/latest/skipping.html"
)
def pytest_namespace():
return dict(xfail=xfail)
class XFailed(pytest.fail.Exception):
""" raised from an explicit call to pytest.xfail() """
def xfail(reason=""):
""" xfail an executing test or setup functions with the given reason."""
__tracebackhide__ = True
raise XFailed(reason)
xfail.Exception = XFailed
class MarkEvaluator:
def __init__(self, item, name):
self.item = item
self.name = name
@property
def holder(self):
return self.item.keywords.get(self.name)
def __bool__(self):
return bool(self.holder)
__nonzero__ = __bool__
def wasvalid(self):
return not hasattr(self, 'exc')
def invalidraise(self, exc):
raises = self.get('raises')
if not raises:
return
return not isinstance(exc, raises)
def istrue(self):
try:
return self._istrue()
except Exception:
self.exc = sys.exc_info()
if isinstance(self.exc[1], SyntaxError):
msg = [" " * (self.exc[1].offset + 4) + "^",]
msg.append("SyntaxError: invalid syntax")
else:
msg = traceback.format_exception_only(*self.exc[:2])
pytest.fail("Error evaluating %r expression\n"
" %s\n"
"%s"
%(self.name, self.expr, "\n".join(msg)),
pytrace=False)
def _getglobals(self):
d = {'os': os, 'sys': sys, 'config': self.item.config}
d.update(self.item.obj.__globals__)
return d
def _istrue(self):
if hasattr(self, 'result'):
return self.result
if self.holder:
d = self._getglobals()
if self.holder.args or 'condition' in self.holder.kwargs:
self.result = False
# "holder" might be a MarkInfo or a MarkDecorator; only
# MarkInfo keeps track of all parameters it received in an
# _arglist attribute
if hasattr(self.holder, '_arglist'):
arglist = self.holder._arglist
else:
arglist = [(self.holder.args, self.holder.kwargs)]
for args, kwargs in arglist:
if 'condition' in kwargs:
args = (kwargs['condition'],)
for expr in args:
self.expr = expr
if isinstance(expr, py.builtin._basestring):
result = cached_eval(self.item.config, expr, d)
else:
if "reason" not in kwargs:
# XXX better be checked at collection time
msg = "you need to specify reason=STRING " \
"when using booleans as conditions."
pytest.fail(msg)
result = bool(expr)
if result:
self.result = True
self.reason = kwargs.get('reason', None)
self.expr = expr
return self.result
else:
self.result = True
return getattr(self, 'result', False)
def get(self, attr, default=None):
return self.holder.kwargs.get(attr, default)
def getexplanation(self):
expl = getattr(self, 'reason', None) or self.get('reason', None)
if not expl:
if not hasattr(self, 'expr'):
return ""
else:
return "condition: " + str(self.expr)
return expl
@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
# Check if skip or skipif are specified as pytest marks
skipif_info = item.keywords.get('skipif')
if isinstance(skipif_info, (MarkInfo, MarkDecorator)):
eval_skipif = MarkEvaluator(item, 'skipif')
if eval_skipif.istrue():
item._evalskip = eval_skipif
pytest.skip(eval_skipif.getexplanation())
skip_info = item.keywords.get('skip')
if isinstance(skip_info, (MarkInfo, MarkDecorator)):
item._evalskip = True
if 'reason' in skip_info.kwargs:
pytest.skip(skip_info.kwargs['reason'])
elif skip_info.args:
pytest.skip(skip_info.args[0])
else:
pytest.skip("unconditional skip")
item._evalxfail = MarkEvaluator(item, 'xfail')
check_xfail_no_run(item)
@pytest.mark.hookwrapper
def pytest_pyfunc_call(pyfuncitem):
check_xfail_no_run(pyfuncitem)
outcome = yield
passed = outcome.excinfo is None
if passed:
check_strict_xfail(pyfuncitem)
def check_xfail_no_run(item):
"""check xfail(run=False)"""
if not item.config.option.runxfail:
evalxfail = item._evalxfail
if evalxfail.istrue():
if not evalxfail.get('run', True):
pytest.xfail("[NOTRUN] " + evalxfail.getexplanation())
def check_strict_xfail(pyfuncitem):
"""check xfail(strict=True) for the given PASSING test"""
evalxfail = pyfuncitem._evalxfail
if evalxfail.istrue():
strict_default = pyfuncitem.config.getini('xfail_strict')
is_strict_xfail = evalxfail.get('strict', strict_default)
if is_strict_xfail:
del pyfuncitem._evalxfail
explanation = evalxfail.getexplanation()
pytest.fail('[XPASS(strict)] ' + explanation, pytrace=False)
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
rep = outcome.get_result()
evalxfail = getattr(item, '_evalxfail', None)
evalskip = getattr(item, '_evalskip', None)
    # unittest special case, see setting of _unexpectedsuccess
if hasattr(item, '_unexpectedsuccess') and rep.when == "call":
from _pytest.compat import _is_unittest_unexpected_success_a_failure
if item._unexpectedsuccess:
rep.longrepr = "Unexpected success: {0}".format(item._unexpectedsuccess)
else:
rep.longrepr = "Unexpected success"
if _is_unittest_unexpected_success_a_failure():
rep.outcome = "failed"
else:
rep.outcome = "passed"
rep.wasxfail = rep.longrepr
elif item.config.option.runxfail:
        pass  # don't interfere
elif call.excinfo and call.excinfo.errisinstance(pytest.xfail.Exception):
rep.wasxfail = "reason: " + call.excinfo.value.msg
rep.outcome = "skipped"
elif evalxfail and not rep.skipped and evalxfail.wasvalid() and \
evalxfail.istrue():
if call.excinfo:
if evalxfail.invalidraise(call.excinfo.value):
rep.outcome = "failed"
else:
rep.outcome = "skipped"
rep.wasxfail = evalxfail.getexplanation()
elif call.when == "call":
strict_default = item.config.getini('xfail_strict')
is_strict_xfail = evalxfail.get('strict', strict_default)
explanation = evalxfail.getexplanation()
if is_strict_xfail:
rep.outcome = "failed"
rep.longrepr = "[XPASS(strict)] {0}".format(explanation)
else:
rep.outcome = "passed"
rep.wasxfail = explanation
elif evalskip is not None and rep.skipped and type(rep.longrepr) is tuple:
# skipped by mark.skipif; change the location of the failure
# to point to the item definition, otherwise it will display
# the location of where the skip exception was raised within pytest
filename, line, reason = rep.longrepr
filename, line = item.location[:2]
rep.longrepr = filename, line, reason
# called by terminalreporter progress reporting
def pytest_report_teststatus(report):
if hasattr(report, "wasxfail"):
if report.skipped:
return "xfailed", "x", "xfail"
elif report.passed:
return "xpassed", "X", ("XPASS", {'yellow': True})
# called by the terminalreporter instance/plugin
def pytest_terminal_summary(terminalreporter):
tr = terminalreporter
if not tr.reportchars:
#for name in "xfailed skipped failed xpassed":
# if not tr.stats.get(name, 0):
# tr.write_line("HINT: use '-r' option to see extra "
# "summary info about tests")
# break
return
lines = []
for char in tr.reportchars:
if char == "x":
show_xfailed(terminalreporter, lines)
elif char == "X":
show_xpassed(terminalreporter, lines)
elif char in "fF":
show_simple(terminalreporter, lines, 'failed', "FAIL %s")
elif char in "sS":
show_skipped(terminalreporter, lines)
elif char == "E":
show_simple(terminalreporter, lines, 'error', "ERROR %s")
elif char == 'p':
show_simple(terminalreporter, lines, 'passed', "PASSED %s")
if lines:
tr._tw.sep("=", "short test summary info")
for line in lines:
tr._tw.line(line)
def show_simple(terminalreporter, lines, stat, format):
failed = terminalreporter.stats.get(stat)
if failed:
for rep in failed:
pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
lines.append(format %(pos,))
def show_xfailed(terminalreporter, lines):
xfailed = terminalreporter.stats.get("xfailed")
if xfailed:
for rep in xfailed:
pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
reason = rep.wasxfail
lines.append("XFAIL %s" % (pos,))
if reason:
lines.append(" " + str(reason))
def show_xpassed(terminalreporter, lines):
xpassed = terminalreporter.stats.get("xpassed")
if xpassed:
for rep in xpassed:
pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
reason = rep.wasxfail
lines.append("XPASS %s %s" %(pos, reason))
def cached_eval(config, expr, d):
if not hasattr(config, '_evalcache'):
config._evalcache = {}
try:
return config._evalcache[expr]
except KeyError:
import _pytest._code
exprcode = _pytest._code.compile(expr, mode="eval")
config._evalcache[expr] = x = eval(exprcode, d)
return x
def folded_skips(skipped):
d = {}
for event in skipped:
key = event.longrepr
assert len(key) == 3, (event, key)
d.setdefault(key, []).append(event)
l = []
for key, events in d.items():
l.append((len(events),) + key)
return l
def show_skipped(terminalreporter, lines):
tr = terminalreporter
skipped = tr.stats.get('skipped', [])
if skipped:
#if not tr.hasopt('skipped'):
# tr.write_line(
# "%d skipped tests, specify -rs for more info" %
# len(skipped))
# return
fskips = folded_skips(skipped)
if fskips:
#tr.write_sep("_", "skipped test summary")
for num, fspath, lineno, reason in fskips:
if reason.startswith("Skipped: "):
reason = reason[9:]
lines.append("SKIP [%d] %s:%d: %s" %
(num, fspath, lineno, reason))
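
The markers this plugin evaluates are applied like this; a small sketch using the public pytest APIs, with an arbitrary platform condition:

```
# skip/skipif/xfail markers as consumed by the evaluator above.
import sys
import pytest

@pytest.mark.skipif(sys.platform == "win32", reason="requires POSIX")
def test_posix_only():
    assert True

@pytest.mark.xfail(raises=NotImplementedError, strict=True)
def test_not_done_yet():
    raise NotImplementedError  # reported as xfail, not as a failure
```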


@@ -1,593 +0,0 @@
""" terminal reporting of the full testing process.
This is a good source for looking at the various reporting hooks.
"""
from _pytest.main import EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, \
EXIT_USAGEERROR, EXIT_NOTESTSCOLLECTED
import pytest
import py
import sys
import time
import platform
import _pytest._pluggy as pluggy
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "reporting", after="general")
    group._addoption('-v', '--verbose', action="count",
                     dest="verbose", default=0, help="increase verbosity.")
    group._addoption('-q', '--quiet', action="count",
                     dest="quiet", default=0, help="decrease verbosity.")
group._addoption('-r',
action="store", dest="reportchars", default='', metavar="chars",
help="show extra test summary info as specified by chars (f)ailed, "
"(E)error, (s)skipped, (x)failed, (X)passed, "
"(p)passed, (P)passed with output, (a)all except pP. "
"The pytest warnings are displayed at all times except when "
"--disable-pytest-warnings is set")
group._addoption('--disable-pytest-warnings', default=False,
dest='disablepytestwarnings', action='store_true',
help='disable warnings summary, overrides -r w flag')
group._addoption('-l', '--showlocals',
action="store_true", dest="showlocals", default=False,
help="show locals in tracebacks (disabled by default).")
group._addoption('--tb', metavar="style",
action="store", dest="tbstyle", default='auto',
choices=['auto', 'long', 'short', 'no', 'line', 'native'],
help="traceback print mode (auto/long/short/line/native/no).")
group._addoption('--fulltrace', '--full-trace',
action="store_true", default=False,
help="don't cut any tracebacks (default is to cut).")
group._addoption('--color', metavar="color",
action="store", dest="color", default='auto',
choices=['yes', 'no', 'auto'],
help="color terminal output (yes/no/auto).")
def pytest_configure(config):
config.option.verbose -= config.option.quiet
reporter = TerminalReporter(config, sys.stdout)
config.pluginmanager.register(reporter, 'terminalreporter')
if config.option.debug or config.option.traceconfig:
def mywriter(tags, args):
msg = " ".join(map(str, args))
reporter.write_line("[traceconfig] " + msg)
config.trace.root.setprocessor("pytest:config", mywriter)
def getreportopt(config):
reportopts = ""
reportchars = config.option.reportchars
if not config.option.disablepytestwarnings and 'w' not in reportchars:
reportchars += 'w'
elif config.option.disablepytestwarnings and 'w' in reportchars:
reportchars = reportchars.replace('w', '')
if reportchars:
for char in reportchars:
if char not in reportopts and char != 'a':
reportopts += char
elif char == 'a':
reportopts = 'fEsxXw'
return reportopts
def pytest_report_teststatus(report):
if report.passed:
letter = "."
elif report.skipped:
letter = "s"
elif report.failed:
letter = "F"
if report.when != "call":
letter = "f"
return report.outcome, letter, report.outcome.upper()
class WarningReport:
def __init__(self, code, message, nodeid=None, fslocation=None):
self.code = code
self.message = message
self.nodeid = nodeid
self.fslocation = fslocation
class TerminalReporter:
def __init__(self, config, file=None):
import _pytest.config
self.config = config
self.verbosity = self.config.option.verbose
self.showheader = self.verbosity >= 0
self.showfspath = self.verbosity >= 0
self.showlongtestinfo = self.verbosity > 0
self._numcollected = 0
self.stats = {}
self.startdir = py.path.local()
if file is None:
file = sys.stdout
self._tw = self.writer = _pytest.config.create_terminal_writer(config,
file)
self.currentfspath = None
self.reportchars = getreportopt(config)
self.hasmarkup = self._tw.hasmarkup
self.isatty = file.isatty()
def hasopt(self, char):
char = {'xfailed': 'x', 'skipped': 's'}.get(char, char)
return char in self.reportchars
def write_fspath_result(self, nodeid, res):
fspath = self.config.rootdir.join(nodeid.split("::")[0])
if fspath != self.currentfspath:
self.currentfspath = fspath
fspath = self.startdir.bestrelpath(fspath)
self._tw.line()
self._tw.write(fspath + " ")
self._tw.write(res)
def write_ensure_prefix(self, prefix, extra="", **kwargs):
if self.currentfspath != prefix:
self._tw.line()
self.currentfspath = prefix
self._tw.write(prefix)
if extra:
self._tw.write(extra, **kwargs)
self.currentfspath = -2
def ensure_newline(self):
if self.currentfspath:
self._tw.line()
self.currentfspath = None
def write(self, content, **markup):
self._tw.write(content, **markup)
def write_line(self, line, **markup):
if not py.builtin._istext(line):
line = py.builtin.text(line, errors="replace")
self.ensure_newline()
self._tw.line(line, **markup)
def rewrite(self, line, **markup):
line = str(line)
self._tw.write("\r" + line, **markup)
def write_sep(self, sep, title=None, **markup):
self.ensure_newline()
self._tw.sep(sep, title, **markup)
def section(self, title, sep="=", **kw):
self._tw.sep(sep, title, **kw)
def line(self, msg, **kw):
self._tw.line(msg, **kw)
def pytest_internalerror(self, excrepr):
for line in py.builtin.text(excrepr).split("\n"):
self.write_line("INTERNALERROR> " + line)
return 1
def pytest_logwarning(self, code, fslocation, message, nodeid):
warnings = self.stats.setdefault("warnings", [])
if isinstance(fslocation, tuple):
fslocation = "%s:%d" % fslocation
warning = WarningReport(code=code, fslocation=fslocation,
message=message, nodeid=nodeid)
warnings.append(warning)
def pytest_plugin_registered(self, plugin):
if self.config.option.traceconfig:
msg = "PLUGIN registered: %s" % (plugin,)
# XXX this event may happen during setup/teardown time
# which unfortunately captures our output here
# which garbles our output if we use self.write_line
self.write_line(msg)
def pytest_deselected(self, items):
self.stats.setdefault('deselected', []).extend(items)
def pytest_runtest_logstart(self, nodeid, location):
# ensure that the path is printed before the
# 1st test of a module starts running
if self.showlongtestinfo:
line = self._locationline(nodeid, *location)
self.write_ensure_prefix(line, "")
elif self.showfspath:
fsid = nodeid.split("::")[0]
self.write_fspath_result(fsid, "")
def pytest_runtest_logreport(self, report):
rep = report
res = self.config.hook.pytest_report_teststatus(report=rep)
cat, letter, word = res
self.stats.setdefault(cat, []).append(rep)
self._tests_ran = True
if not letter and not word:
# probably passed setup/teardown
return
if self.verbosity <= 0:
if not hasattr(rep, 'node') and self.showfspath:
self.write_fspath_result(rep.nodeid, letter)
else:
self._tw.write(letter)
else:
if isinstance(word, tuple):
word, markup = word
else:
if rep.passed:
markup = {'green':True}
elif rep.failed:
markup = {'red':True}
elif rep.skipped:
markup = {'yellow':True}
line = self._locationline(rep.nodeid, *rep.location)
if not hasattr(rep, 'node'):
self.write_ensure_prefix(line, word, **markup)
#self._tw.write(word, **markup)
else:
self.ensure_newline()
if hasattr(rep, 'node'):
self._tw.write("[%s] " % rep.node.gateway.id)
self._tw.write(word, **markup)
self._tw.write(" " + line)
self.currentfspath = -2
def pytest_collection(self):
if not self.isatty and self.config.option.verbose >= 1:
self.write("collecting ... ", bold=True)
def pytest_collectreport(self, report):
if report.failed:
self.stats.setdefault("error", []).append(report)
elif report.skipped:
self.stats.setdefault("skipped", []).append(report)
items = [x for x in report.result if isinstance(x, pytest.Item)]
self._numcollected += len(items)
if self.isatty:
#self.write_fspath_result(report.nodeid, 'E')
self.report_collect()
def report_collect(self, final=False):
if self.config.option.verbose < 0:
return
errors = len(self.stats.get('error', []))
skipped = len(self.stats.get('skipped', []))
if final:
line = "collected "
else:
line = "collecting "
line += str(self._numcollected) + " items"
if errors:
line += " / %d errors" % errors
if skipped:
line += " / %d skipped" % skipped
if self.isatty:
if final:
line += " \n"
self.rewrite(line, bold=True)
else:
self.write_line(line)
def pytest_collection_modifyitems(self):
self.report_collect(True)
@pytest.hookimpl(trylast=True)
def pytest_sessionstart(self, session):
self._sessionstarttime = time.time()
if not self.showheader:
return
self.write_sep("=", "test session starts", bold=True)
verinfo = platform.python_version()
msg = "platform %s -- Python %s" % (sys.platform, verinfo)
if hasattr(sys, 'pypy_version_info'):
verinfo = ".".join(map(str, sys.pypy_version_info[:3]))
msg += "[pypy-%s-%s]" % (verinfo, sys.pypy_version_info[3])
msg += ", pytest-%s, py-%s, pluggy-%s" % (
pytest.__version__, py.__version__, pluggy.__version__)
if self.verbosity > 0 or self.config.option.debug or \
getattr(self.config.option, 'pastebin', None):
msg += " -- " + str(sys.executable)
self.write_line(msg)
lines = self.config.hook.pytest_report_header(
config=self.config, startdir=self.startdir)
lines.reverse()
for line in flatten(lines):
self.write_line(line)
def pytest_report_header(self, config):
inifile = ""
if config.inifile:
inifile = config.rootdir.bestrelpath(config.inifile)
lines = ["rootdir: %s, inifile: %s" %(config.rootdir, inifile)]
plugininfo = config.pluginmanager.list_plugin_distinfo()
if plugininfo:
lines.append(
"plugins: %s" % ", ".join(_plugin_nameversions(plugininfo)))
return lines
def pytest_collection_finish(self, session):
if self.config.option.collectonly:
self._printcollecteditems(session.items)
if self.stats.get('failed'):
self._tw.sep("!", "collection failures")
for rep in self.stats.get('failed'):
rep.toterminal(self._tw)
return 1
return 0
if not self.showheader:
return
#for i, testarg in enumerate(self.config.args):
# self.write_line("test path %d: %s" %(i+1, testarg))
def _printcollecteditems(self, items):
# to print out items and their parent collectors
# we take care to leave out Instances aka ()
# because later versions are going to get rid of them anyway
if self.config.option.verbose < 0:
if self.config.option.verbose < -1:
counts = {}
for item in items:
name = item.nodeid.split('::', 1)[0]
counts[name] = counts.get(name, 0) + 1
for name, count in sorted(counts.items()):
self._tw.line("%s: %d" % (name, count))
else:
for item in items:
nodeid = item.nodeid
nodeid = nodeid.replace("::()::", "::")
self._tw.line(nodeid)
return
stack = []
indent = ""
for item in items:
needed_collectors = item.listchain()[1:] # strip root node
while stack:
if stack == needed_collectors[:len(stack)]:
break
stack.pop()
for col in needed_collectors[len(stack):]:
stack.append(col)
#if col.name == "()":
# continue
indent = (len(stack) - 1) * " "
self._tw.line("%s%s" % (indent, col))
@pytest.hookimpl(hookwrapper=True)
def pytest_sessionfinish(self, exitstatus):
outcome = yield
outcome.get_result()
self._tw.line("")
summary_exit_codes = (
EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, EXIT_USAGEERROR,
EXIT_NOTESTSCOLLECTED)
if exitstatus in summary_exit_codes:
self.config.hook.pytest_terminal_summary(terminalreporter=self,
exitstatus=exitstatus)
self.summary_errors()
self.summary_failures()
self.summary_warnings()
self.summary_passes()
if exitstatus == EXIT_INTERRUPTED:
self._report_keyboardinterrupt()
del self._keyboardinterrupt_memo
self.summary_deselected()
self.summary_stats()
def pytest_keyboard_interrupt(self, excinfo):
self._keyboardinterrupt_memo = excinfo.getrepr(funcargs=True)
def pytest_unconfigure(self):
if hasattr(self, '_keyboardinterrupt_memo'):
self._report_keyboardinterrupt()
def _report_keyboardinterrupt(self):
excrepr = self._keyboardinterrupt_memo
msg = excrepr.reprcrash.message
self.write_sep("!", msg)
if "KeyboardInterrupt" in msg:
if self.config.option.fulltrace:
excrepr.toterminal(self._tw)
else:
self._tw.line("to show a full traceback on KeyboardInterrupt use --fulltrace", yellow=True)
excrepr.reprcrash.toterminal(self._tw)
def _locationline(self, nodeid, fspath, lineno, domain):
def mkrel(nodeid):
line = self.config.cwd_relative_nodeid(nodeid)
if domain and line.endswith(domain):
line = line[:-len(domain)]
l = domain.split("[")
l[0] = l[0].replace('.', '::') # don't replace '.' in params
line += "[".join(l)
return line
# collect_fspath comes from testid which has a "/"-normalized path
if fspath:
res = mkrel(nodeid).replace("::()", "") # parens-normalization
if nodeid.split("::")[0] != fspath.replace("\\", "/"):
res += " <- " + self.startdir.bestrelpath(fspath)
else:
res = "[location]"
return res + " "
def _getfailureheadline(self, rep):
if hasattr(rep, 'location'):
fspath, lineno, domain = rep.location
return domain
else:
return "test session" # XXX?
def _getcrashline(self, rep):
try:
return str(rep.longrepr.reprcrash)
except AttributeError:
try:
return str(rep.longrepr)[:50]
except AttributeError:
return ""
#
# summaries for sessionfinish
#
def getreports(self, name):
l = []
for x in self.stats.get(name, []):
if not hasattr(x, '_pdbshown'):
l.append(x)
return l
def summary_warnings(self):
if self.hasopt("w"):
warnings = self.stats.get("warnings")
if not warnings:
return
self.write_sep("=", "pytest-warning summary")
for w in warnings:
self._tw.line("W%s %s %s" % (w.code,
w.fslocation, w.message))
def summary_passes(self):
if self.config.option.tbstyle != "no":
if self.hasopt("P"):
reports = self.getreports('passed')
if not reports:
return
self.write_sep("=", "PASSES")
for rep in reports:
msg = self._getfailureheadline(rep)
self.write_sep("_", msg)
self._outrep_summary(rep)
def print_teardown_sections(self, rep):
for secname, content in rep.sections:
if 'teardown' in secname:
self._tw.sep('-', secname)
if content[-1:] == "\n":
content = content[:-1]
self._tw.line(content)
def summary_failures(self):
if self.config.option.tbstyle != "no":
reports = self.getreports('failed')
if not reports:
return
self.write_sep("=", "FAILURES")
for rep in reports:
if self.config.option.tbstyle == "line":
line = self._getcrashline(rep)
self.write_line(line)
else:
msg = self._getfailureheadline(rep)
markup = {'red': True, 'bold': True}
self.write_sep("_", msg, **markup)
self._outrep_summary(rep)
for report in self.getreports(''):
if report.nodeid == rep.nodeid and report.when == 'teardown':
self.print_teardown_sections(report)
def summary_errors(self):
if self.config.option.tbstyle != "no":
reports = self.getreports('error')
if not reports:
return
self.write_sep("=", "ERRORS")
for rep in self.stats['error']:
msg = self._getfailureheadline(rep)
if not hasattr(rep, 'when'):
# collect
msg = "ERROR collecting " + msg
elif rep.when == "setup":
msg = "ERROR at setup of " + msg
elif rep.when == "teardown":
msg = "ERROR at teardown of " + msg
self.write_sep("_", msg)
self._outrep_summary(rep)
def _outrep_summary(self, rep):
rep.toterminal(self._tw)
for secname, content in rep.sections:
self._tw.sep("-", secname)
if content[-1:] == "\n":
content = content[:-1]
self._tw.line(content)
def summary_stats(self):
session_duration = time.time() - self._sessionstarttime
(line, color) = build_summary_stats_line(self.stats)
msg = "%s in %.2f seconds" % (line, session_duration)
markup = {color: True, 'bold': True}
if self.verbosity >= 0:
self.write_sep("=", msg, **markup)
if self.verbosity == -1:
self.write_line(msg, **markup)
def summary_deselected(self):
if 'deselected' in self.stats:
self.write_sep("=", "%d tests deselected" % (
len(self.stats['deselected'])), bold=True)
def repr_pythonversion(v=None):
if v is None:
v = sys.version_info
try:
return "%s.%s.%s-%s-%s" % v
except (TypeError, ValueError):
return str(v)
def flatten(l):
for x in l:
if isinstance(x, (list, tuple)):
for y in flatten(x):
yield y
else:
yield x
def build_summary_stats_line(stats):
keys = ("failed passed skipped deselected "
"xfailed xpassed warnings error").split()
key_translation = {'warnings': 'pytest-warnings'}
unknown_key_seen = False
for key in stats.keys():
if key not in keys:
if key: # setup/teardown reports have an empty key, ignore them
keys.append(key)
unknown_key_seen = True
parts = []
for key in keys:
val = stats.get(key, None)
if val:
key_name = key_translation.get(key, key)
parts.append("%d %s" % (len(val), key_name))
if parts:
line = ", ".join(parts)
else:
line = "no tests ran"
if 'failed' in stats or 'error' in stats:
color = 'red'
elif 'warnings' in stats or unknown_key_seen:
color = 'yellow'
elif 'passed' in stats:
color = 'green'
else:
color = 'yellow'
return (line, color)
def _plugin_nameversions(plugininfo):
l = []
for plugin, dist in plugininfo:
# gets us name and version!
name = '{dist.project_name}-{dist.version}'.format(dist=dist)
# questionable convenience, but it keeps things short
if name.startswith("pytest-"):
name = name[7:]
# we decided to print python package names
# they can have more than one plugin
if name not in l:
l.append(name)
return l
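
The (category, letter, verbose-word) triples TerminalReporter consumes come from the pytest_report_teststatus hook, so a conftest.py can restyle the progress output. A sketch; the replacement letter is purely illustrative:

```
# conftest.py: override the status triple for passing test calls.
import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_report_teststatus(report):
    if report.when == "call" and report.passed:
        return "passed", ",", "PASS"  # ',' instead of the default '.'
```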


@@ -1,124 +0,0 @@
""" support for providing temporary directories to test functions. """
import re
import pytest
import py
from _pytest.monkeypatch import MonkeyPatch
class TempdirFactory:
"""Factory for temporary directories under the common base temp directory.
The base directory can be configured using the ``--basetemp`` option.
"""
def __init__(self, config):
self.config = config
self.trace = config.trace.get("tmpdir")
def ensuretemp(self, string, dir=1):
""" (deprecated) return temporary directory path with
the given string as the trailing part. It is usually
better to use the 'tmpdir' function argument which
provides an empty unique-per-test-invocation directory
and is guaranteed to be empty.
"""
#py.log._apiwarn(">1.1", "use tmpdir function argument")
return self.getbasetemp().ensure(string, dir=dir)
def mktemp(self, basename, numbered=True):
"""Create a subdirectory of the base temporary directory and return it.
If ``numbered``, ensure the directory is unique by adding a number
prefix greater than any existing one.
"""
basetemp = self.getbasetemp()
if not numbered:
p = basetemp.mkdir(basename)
else:
p = py.path.local.make_numbered_dir(prefix=basename,
keep=0, rootdir=basetemp, lock_timeout=None)
self.trace("mktemp", p)
return p
def getbasetemp(self):
""" return base temporary directory. """
try:
return self._basetemp
except AttributeError:
basetemp = self.config.option.basetemp
if basetemp:
basetemp = py.path.local(basetemp)
if basetemp.check():
basetemp.remove()
basetemp.mkdir()
else:
temproot = py.path.local.get_temproot()
user = get_user()
if user:
# use a sub-directory in the temproot to speed-up
# make_numbered_dir() call
rootdir = temproot.join('pytest-of-%s' % user)
else:
rootdir = temproot
rootdir.ensure(dir=1)
basetemp = py.path.local.make_numbered_dir(prefix='pytest-',
rootdir=rootdir)
self._basetemp = t = basetemp.realpath()
self.trace("new basetemp", t)
return t
def finish(self):
self.trace("finish")
def get_user():
"""Return the current user name, or None if getuser() does not work
in the current environment (see #1010).
"""
import getpass
try:
return getpass.getuser()
except (ImportError, KeyError):
return None
# backward compatibility
TempdirHandler = TempdirFactory
def pytest_configure(config):
"""Create a TempdirFactory and attach it to the config object.
This is to comply with existing plugins which expect the handler to be
available at pytest_configure time, but ideally should be moved entirely
to the tmpdir_factory session fixture.
"""
mp = MonkeyPatch()
t = TempdirFactory(config)
config._cleanup.extend([mp.undo, t.finish])
mp.setattr(config, '_tmpdirhandler', t, raising=False)
mp.setattr(pytest, 'ensuretemp', t.ensuretemp, raising=False)
@pytest.fixture(scope='session')
def tmpdir_factory(request):
"""Return a TempdirFactory instance for the test session.
"""
return request.config._tmpdirhandler
@pytest.fixture
def tmpdir(request, tmpdir_factory):
"""Return a temporary directory path object
which is unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
"""
name = request.node.name
    name = re.sub(r"[\W]", "_", name)
MAXVAL = 30
if len(name) > MAXVAL:
name = name[:MAXVAL]
x = tmpdir_factory.mktemp(name, numbered=True)
return x
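
Typical use of the two fixtures defined above; a sketch:

```
# tmpdir gives a fresh per-test directory; tmpdir_factory is session-scoped.
def test_create_file(tmpdir):
    p = tmpdir.join("hello.txt")  # a py.path.local object
    p.write("content")
    assert p.read() == "content"
    assert len(tmpdir.listdir()) == 1

def test_shared_dir(tmpdir_factory):
    base = tmpdir_factory.mktemp("data")  # numbered dir under the basetemp
    assert base.check(dir=1)
```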


@@ -1,217 +0,0 @@
""" discovery and running of std-library "unittest" style tests. """
from __future__ import absolute_import
import sys
import traceback
import pytest
# for transferring markers
import _pytest._code
from _pytest.python import transfer_markers
from _pytest.skipping import MarkEvaluator
def pytest_pycollect_makeitem(collector, name, obj):
# has unittest been imported and is obj a subclass of its TestCase?
try:
if not issubclass(obj, sys.modules["unittest"].TestCase):
return
except Exception:
return
# yes, so let's collect it
return UnitTestCase(name, parent=collector)
class UnitTestCase(pytest.Class):
    # marker for fixturemanager.getfixtureinfo()
# to declare that our children do not support funcargs
nofuncargs = True
def setup(self):
cls = self.obj
if getattr(cls, '__unittest_skip__', False):
return # skipped
setup = getattr(cls, 'setUpClass', None)
if setup is not None:
setup()
teardown = getattr(cls, 'tearDownClass', None)
if teardown is not None:
self.addfinalizer(teardown)
super(UnitTestCase, self).setup()
def collect(self):
from unittest import TestLoader
cls = self.obj
if not getattr(cls, "__test__", True):
return
self.session._fixturemanager.parsefactories(self, unittest=True)
loader = TestLoader()
module = self.getparent(pytest.Module).obj
foundsomething = False
for name in loader.getTestCaseNames(self.obj):
x = getattr(self.obj, name)
if not getattr(x, '__test__', True):
continue
funcobj = getattr(x, 'im_func', x)
transfer_markers(funcobj, cls, module)
yield TestCaseFunction(name, parent=self)
foundsomething = True
if not foundsomething:
runtest = getattr(self.obj, 'runTest', None)
if runtest is not None:
ut = sys.modules.get("twisted.trial.unittest", None)
if ut is None or runtest != ut.TestCase.runTest:
yield TestCaseFunction('runTest', parent=self)
class TestCaseFunction(pytest.Function):
_excinfo = None
def setup(self):
self._testcase = self.parent.obj(self.name)
self._fix_unittest_skip_decorator()
self._obj = getattr(self._testcase, self.name)
if hasattr(self._testcase, 'setup_method'):
self._testcase.setup_method(self._obj)
if hasattr(self, "_request"):
self._request._fillfixtures()
def _fix_unittest_skip_decorator(self):
"""
The @unittest.skip decorator calls functools.wraps(self._testcase)
The call to functools.wraps() fails unless self._testcase
has a __name__ attribute. This is usually automatically supplied
        if the test is a function or method, but we need to add it manually
        here.
See issue #1169
"""
if sys.version_info[0] == 2:
setattr(self._testcase, "__name__", self.name)
def teardown(self):
if hasattr(self._testcase, 'teardown_method'):
self._testcase.teardown_method(self._obj)
# Allow garbage collection on TestCase instance attributes.
self._testcase = None
self._obj = None
def startTest(self, testcase):
pass
def _addexcinfo(self, rawexcinfo):
# unwrap potential exception info (see twisted trial support below)
rawexcinfo = getattr(rawexcinfo, '_rawexcinfo', rawexcinfo)
try:
excinfo = _pytest._code.ExceptionInfo(rawexcinfo)
except TypeError:
try:
try:
l = traceback.format_exception(*rawexcinfo)
l.insert(0, "NOTE: Incompatible Exception Representation, "
"displaying natively:\n\n")
pytest.fail("".join(l), pytrace=False)
except (pytest.fail.Exception, KeyboardInterrupt):
raise
except:
pytest.fail("ERROR: Unknown Incompatible Exception "
"representation:\n%r" %(rawexcinfo,), pytrace=False)
except KeyboardInterrupt:
raise
except pytest.fail.Exception:
excinfo = _pytest._code.ExceptionInfo()
self.__dict__.setdefault('_excinfo', []).append(excinfo)
def addError(self, testcase, rawexcinfo):
self._addexcinfo(rawexcinfo)
def addFailure(self, testcase, rawexcinfo):
self._addexcinfo(rawexcinfo)
def addSkip(self, testcase, reason):
try:
pytest.skip(reason)
except pytest.skip.Exception:
self._evalskip = MarkEvaluator(self, 'SkipTest')
self._evalskip.result = True
self._addexcinfo(sys.exc_info())
def addExpectedFailure(self, testcase, rawexcinfo, reason=""):
try:
pytest.xfail(str(reason))
except pytest.xfail.Exception:
self._addexcinfo(sys.exc_info())
def addUnexpectedSuccess(self, testcase, reason=""):
self._unexpectedsuccess = reason
def addSuccess(self, testcase):
pass
def stopTest(self, testcase):
pass
def runtest(self):
if self.config.pluginmanager.get_plugin("pdbinvoke") is None:
self._testcase(result=self)
else:
# disables tearDown and cleanups for post mortem debugging (see #1890)
self._testcase.debug()
def _prunetraceback(self, excinfo):
pytest.Function._prunetraceback(self, excinfo)
traceback = excinfo.traceback.filter(
lambda x:not x.frame.f_globals.get('__unittest'))
if traceback:
excinfo.traceback = traceback
@pytest.hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
if isinstance(item, TestCaseFunction):
if item._excinfo:
call.excinfo = item._excinfo.pop(0)
try:
del call.result
except AttributeError:
pass
# twisted trial support
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item):
if isinstance(item, TestCaseFunction) and \
'twisted.trial.unittest' in sys.modules:
ut = sys.modules['twisted.python.failure']
Failure__init__ = ut.Failure.__init__
check_testcase_implements_trial_reporter()
def excstore(self, exc_value=None, exc_type=None, exc_tb=None,
captureVars=None):
if exc_value is None:
self._rawexcinfo = sys.exc_info()
else:
if exc_type is None:
exc_type = type(exc_value)
self._rawexcinfo = (exc_type, exc_value, exc_tb)
try:
Failure__init__(self, exc_value, exc_type, exc_tb,
captureVars=captureVars)
except TypeError:
Failure__init__(self, exc_value, exc_type, exc_tb)
ut.Failure.__init__ = excstore
yield
ut.Failure.__init__ = Failure__init__
else:
yield
def check_testcase_implements_trial_reporter(done=[]):
if done:
return
from zope.interface import classImplements
from twisted.trial.itrial import IReporter
classImplements(TestCaseFunction, IReporter)
done.append(1)
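
What this plugin collects, in miniature: any unittest.TestCase subclass is wrapped in the UnitTestCase/TestCaseFunction classes above and runs under pytest unchanged. A sketch:

```
# A plain unittest.TestCase picked up by the collection code above.
import unittest

class TestUpper(unittest.TestCase):
    def setUp(self):
        self.value = "spack"

    def test_upper(self):
        self.assertEqual(self.value.upper(), "SPACK")
```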


@@ -1,13 +0,0 @@
This directory vendors the `pluggy` module.
For a more detailed discussion of the reasons for vendoring this
package, please see [this issue](https://github.com/pytest-dev/pytest/issues/944).
To update the current version, execute:
```
$ pip install -U pluggy==<version> --no-compile --target=_pytest/vendored_packages
```
And commit the modified files. The `pluggy-<version>.dist-info` directory
created by `pip` should be added as well.


@@ -1,11 +0,0 @@
Plugin registration and hook calling for Python
===============================================
This is the plugin manager as used by pytest but stripped
of pytest specific details.
During the 0.x series this plugin does not have much documentation
except extensive docstrings in the pluggy.py module.
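
In miniature, the registration-and-call cycle pluggy provides looks like this; a sketch using the public markers, with an arbitrary project name:

```
# pluggy hookspec/hookimpl round trip; "myproject" is arbitrary.
import pluggy

hookspec = pluggy.HookspecMarker("myproject")
hookimpl = pluggy.HookimplMarker("myproject")

class Spec(object):
    @hookspec
    def myhook(self, arg):
        """A hook other plugins may implement."""

class Plugin(object):
    @hookimpl
    def myhook(self, arg):
        return arg + 1

pm = pluggy.PluginManager("myproject")
pm.add_hookspecs(Spec)
pm.register(Plugin())
print(pm.hook.myhook(arg=1))  # -> [2]
```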


@@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015 holger krekel (rather uses bitbucket/hpk42)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,40 +0,0 @@
Metadata-Version: 2.0
Name: pluggy
Version: 0.4.0
Summary: plugin and hook calling mechanisms for python
Home-page: https://github.com/pytest-dev/pluggy
Author: Holger Krekel
Author-email: holger at merlinux.eu
License: MIT license
Platform: unix
Platform: linux
Platform: osx
Platform: win32
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Utilities
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Plugin registration and hook calling for Python
===============================================
This is the plugin manager as used by pytest but stripped
of pytest specific details.
During the 0.x series this plugin does not have much documentation
except extensive docstrings in the pluggy.py module.


@@ -1,9 +0,0 @@
pluggy.py,sha256=u0oG9cv-oLOkNvEBlwnnu8pp1AyxpoERgUO00S3rvpQ,31543
pluggy-0.4.0.dist-info/DESCRIPTION.rst,sha256=ltvjkFd40LW_xShthp6RRVM6OB_uACYDFR3kTpKw7o4,307
pluggy-0.4.0.dist-info/LICENSE.txt,sha256=ruwhUOyV1HgE9F35JVL9BCZ9vMSALx369I4xq9rhpkM,1134
pluggy-0.4.0.dist-info/METADATA,sha256=pe2hbsqKFaLHC6wAQPpFPn0KlpcPfLBe_BnS4O70bfk,1364
pluggy-0.4.0.dist-info/RECORD,,
pluggy-0.4.0.dist-info/WHEEL,sha256=9Z5Xm-eel1bTS7e6ogYiKz0zmPEqDwIypurdHN1hR40,116
pluggy-0.4.0.dist-info/metadata.json,sha256=T3go5L2qOa_-H-HpCZi3EoVKb8sZ3R-fOssbkWo2nvM,1119
pluggy-0.4.0.dist-info/top_level.txt,sha256=xKSCRhai-v9MckvMuWqNz16c1tbsmOggoMSwTgcpYHE,7
pluggy-0.4.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4


@@ -1,6 +0,0 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.29.0)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any


@@ -1 +0,0 @@
{"classifiers": ["Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows", "Operating System :: MacOS :: MacOS X", "Topic :: Software Development :: Testing", "Topic :: Software Development :: Libraries", "Topic :: Utilities", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5"], "extensions": {"python.details": {"contacts": [{"email": "holger at merlinux.eu", "name": "Holger Krekel", "role": "author"}], "document_names": {"description": "DESCRIPTION.rst", "license": "LICENSE.txt"}, "project_urls": {"Home": "https://github.com/pytest-dev/pluggy"}}}, "generator": "bdist_wheel (0.29.0)", "license": "MIT license", "metadata_version": "2.0", "name": "pluggy", "platform": "unix", "summary": "plugin and hook calling mechanisms for python", "version": "0.4.0"}
