Merged in current develop to cflags 042716

Gregory Becker 2016-04-27 19:38:51 -07:00
commit ae5198e5e7
749 changed files with 37679 additions and 4792 deletions

.gitignore vendored

@@ -8,3 +8,4 @@
/etc/spackconfig
/share/spack/dotkit
/share/spack/modules
/TAGS


@@ -1,11 +1,20 @@
Todd Gamblin <tgamblin@llnl.gov> George Todd Gamblin <gamblin2@llnl.gov>
Todd Gamblin <tgamblin@llnl.gov> Todd Gamblin <gamblin2@llnl.gov>
Adam Moody <moody20@llnl.gov> Adam T. Moody <moody20@llnl.gov>
Alfredo Gimenez <gimenez1@llnl.gov> Alfredo Gimenez <alfredo.gimenez@gmail.com>
David Boehme <boehme3@llnl.gov> David Boehme <boehme3@sierra324.llnl.gov>
David Boehme <boehme3@llnl.gov> David Boehme <boehme3@sierra648.llnl.gov>
Kevin Brandstatter <kjbrandstatter@gmail.com> Kevin Brandstatter <kbrandst@hawk.iit.edu>
Luc Jaulmes <luc.jaulmes@bsc.es> Luc Jaulmes <jaulmes1@llnl.gov>
Saravan Pantham <saravan.pantham@gmail.com> Saravan Pantham <pantham1@surface86.llnl.gov
Saravan Pantham <saravan.pantham@gmail.com> Saravan Pantham <pantham1@surface86.llnl.gov>
Tom Scogland <tscogland@llnl.gov> Tom Scogland <scogland1@llnl.gov>
Tom Scogland <tscogland@llnl.gov> Tom Scogland <tom.scogland@gmail.com>
Joachim Protze <protze@rz.rwth-aachen.de> jprotze <protze@rz.rwth-aachen.de>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@surface86.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@cab687.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@cab690.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory L. Lee <lee218@catalyst159.llnl.gov>
Gregory L. Lee <lee218@llnl.gov> Gregory Lee <lee218@llnl.gov>
Massimiliano Culpo <massimiliano.culpo@epfl.ch> Massimiliano Culpo <massimiliano.culpo@googlemail.com>
Massimiliano Culpo <massimiliano.culpo@epfl.ch> alalazo <massimiliano.culpo@googlemail.com>
Mark Miller <miller86@llnl.gov> miller86 <miller86@llnl.gov>

.travis.yml Normal file

@@ -0,0 +1,29 @@
language: python
python:
  - "2.6"
  - "2.7"
# Use new Travis infrastructure (Docker can't sudo yet)
sudo: false
# No need to install any deps.
install: true
before_install:
  # Need this for the git tests to succeed.
  - git config --global user.email "spack@example.com"
  - git config --global user.name "Test User"
script:
  - . share/spack/setup-env.sh
  - spack compilers
  - spack config get compilers
  - spack test
  - spack install -v libdwarf
notifications:
  email:
    recipients:
      - tgamblin@llnl.gov
    on_success: change
    on_failure: always


@@ -5,7 +5,7 @@ This file is part of Spack.
Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
LLNL-CODE-647188
For details, see https://scalability-llnl.github.io/spack
For details, see https://github.com/llnl/spack
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License (as published by


@@ -1,6 +1,8 @@
![image](share/spack/logo/spack-logo-text-64.png "Spack")
============
[![Build Status](https://travis-ci.org/LLNL/spack.png?branch=develop)](https://travis-ci.org/LLNL/spack)
Spack is a package management tool designed to support multiple
versions and configurations of software on a wide variety of platforms
and environments. It was designed for large supercomputing centers,
@@ -17,20 +19,26 @@ written in pure Python, and specs allow package authors to write a
single build script for many different builds of the same package.
See the
[Feature Overview](http://scalability-llnl.github.io/spack/features.html)
[Feature Overview](http://software.llnl.gov/spack/features.html)
for examples and highlights.
To install spack and install your first package:
$ git clone https://github.com/scalability-llnl/spack.git
$ git clone https://github.com/llnl/spack.git
$ cd spack/bin
$ ./spack install libelf
Documentation
----------------
[Full documentation](http://scalability-llnl.github.io/spack)
for Spack is also available.
[**Full documentation**](http://software.llnl.gov/spack) for Spack is
the first place to look.
See also:
* [Technical paper](http://www.computer.org/csdl/proceedings/sc/2015/3723/00/2807623.pdf) and
[slides](https://tgamblin.github.io/files/Gamblin-Spack-SC15-Talk.pdf) on Spack's design and implementation.
* [Short presentation](https://tgamblin.github.io/files/Gamblin-Spack-Lightning-Talk-BOF-SC15.pdf) from the *Getting Scientific Software Installed* BOF session at Supercomputing 2015.
Get Involved!
------------------------
@@ -51,7 +59,8 @@ can join it here:
At the moment, contributing to Spack is relatively simple. Just send us
a [pull request](https://help.github.com/articles/using-pull-requests/).
When you send your request, make ``develop`` the destination branch.
When you send your request, make ``develop`` the destination branch on the
[Spack repository](https://github.com/LLNL/spack).
Spack is using a rough approximation of the [Git
Flow](http://nvie.com/posts/a-successful-git-branching-model/)
@@ -62,10 +71,19 @@ latest stable release.
Authors
----------------
Many thanks go to Spack's [contributors](https://github.com/scalability-llnl/spack/graphs/contributors).
Many thanks go to Spack's [contributors](https://github.com/llnl/spack/graphs/contributors).
Spack was originally written by Todd Gamblin, tgamblin@llnl.gov.
### Citing Spack
If you are referencing Spack in a publication, please cite the following paper:
* Todd Gamblin, Matthew P. LeGendre, Michael R. Collette, Gregory L. Lee,
Adam Moody, Bronis R. de Supinski, and W. Scott Futral.
[**The Spack Package Manager: Bringing Order to HPC Software Chaos**](http://www.computer.org/csdl/proceedings/sc/2015/3723/00/2807623.pdf).
In *Supercomputing 2015 (SC15)*, Austin, Texas, November 15-20 2015. LLNL-CONF-669890.
Release
----------------
Spack is released under an LGPL license. For more details see the

bin/sbang Executable file

@@ -0,0 +1,84 @@
#!/bin/bash
#
# `sbang`: Run scripts with long shebang lines.
#
# Many operating systems limit the length of shebang lines, making it
# hard to use interpreters that are deep in the directory hierarchy.
# `sbang` can run such scripts, either as a shebang interpreter, or
# directly on the command line.
#
# Usage
# -----------------------------
# Suppose you have a script, long-shebang.sh, like this:
#
# 1 #!/very/long/path/to/some/interpreter
# 2
# 3 echo "success!"
#
# Invoking this script will result in an error on some OS's. On
# Linux, you get this:
#
# $ ./long-shebang.sh
# -bash: ./long: /very/long/path/to/some/interp: bad interpreter:
# No such file or directory
#
# On Mac OS X, the system simply assumes the interpreter is the shell
# and tries to run with it, which is likely not what you want.
#
#
# `sbang` on the command line
# -----------------------------
# You can use `sbang` in two ways. The first is to use it directly,
# from the command line, like this:
#
# $ sbang ./long-shebang.sh
# success!
#
#
# `sbang` as the interpreter
# -----------------------------
# You can also use `sbang` *as* the interpreter for your script. Put
# `#!/bin/bash /path/to/sbang` on line 1, and move the original
# shebang to line 2 of the script:
#
# 1 #!/bin/bash /path/to/sbang
# 2 #!/long/path/to/real/interpreter with arguments
# 3
# 4 echo "success!"
#
# $ ./long-shebang.sh
# success!
#
# On Linux, you could shorten line 1 to `#!/path/to/sbang`, but other
# operating systems like Mac OS X require the interpreter to be a
# binary, so it's best to use `sbang` as a `bash` argument.
# Obviously, for this to work, `sbang` needs to have a short enough
# path that *it* will run without hitting OS limits.
#
#
# How it works
# -----------------------------
# `sbang` is a very simple bash script. It looks at the first two
# lines of a script argument and runs the last line starting with
# `#!`, with the script as an argument. It also forwards arguments.
#
# First argument is the script we want to actually run.
script="$1"
# Search the first two lines of script for interpreters.
lines=0
while read line && ((lines < 2)) ; do
    if [[ "$line" = '#!'* ]]; then
        interpreter="${line#\#!}"
    fi
    lines=$((lines+1))
done < "$script"
# Invoke any interpreter found, or raise an error if none was found.
if [ -n "$interpreter" ]; then
    exec $interpreter "$@"
else
    echo "error: sbang found no interpreter in $script"
    exit 1
fi
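The scan-and-exec technique above can be exercised end to end. The sketch below re-creates it in a scratch directory; `sbang_demo.sh` and `long-shebang.sh` are hypothetical file names, and the mini scanner mirrors the logic of this script rather than reusing it verbatim.

```shell
# Scratch-directory re-creation of the sbang technique (assumed names).
mkdir -p /tmp/sbang-demo && cd /tmp/sbang-demo

# Mini sbang: scan the first two lines, remember the last '#!' line,
# then exec that interpreter with the script as its argument.
cat > sbang_demo.sh <<'EOF'
#!/bin/bash
script="$1"
lines=0
while read -r line && ((lines < 2)); do
    if [[ "$line" == '#!'* ]]; then
        interpreter="${line#\#!}"
    fi
    lines=$((lines+1))
done < "$script"
exec $interpreter "$@"
EOF
chmod +x sbang_demo.sh

# A script whose real shebang sits on line 2, behind the sbang line.
cat > long-shebang.sh <<'EOF'
#!/bin/bash ./sbang_demo.sh
#!/bin/bash
echo "success!"
EOF
chmod +x long-shebang.sh

./sbang_demo.sh ./long-shebang.sh   # prints: success!
```

Invoking `./long-shebang.sh` directly (interpreter mode) gives the same result, since the kernel passes everything after the line-1 interpreter as a single argument to `/bin/bash`.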


@@ -7,7 +7,7 @@
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://scalability-llnl.github.io/spack
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
@@ -38,6 +38,31 @@ SPACK_PREFIX = os.path.dirname(os.path.dirname(SPACK_FILE))
# Allow spack libs to be imported in our scripts
SPACK_LIB_PATH = os.path.join(SPACK_PREFIX, "lib", "spack")
sys.path.insert(0, SPACK_LIB_PATH)
SPACK_EXTERNAL_LIBS = os.path.join(SPACK_LIB_PATH, "external")
sys.path.insert(0, SPACK_EXTERNAL_LIBS)
import warnings
# Avoid warnings when nose is installed with the python exe being used to run
# spack. Note this must be done after Spack's external libs directory is added
# to sys.path.
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", ".*nose was already imported")
    import nose
# Quick and dirty check to clean orphaned .pyc files left over from
# previous revisions. These files were present in earlier versions of
# Spack, were removed, but shadow system modules that Spack still
# imports. If we leave them, Spack will fail in mysterious ways.
# TODO: more elegant solution for orphaned pyc files.
orphaned_pyc_files = [os.path.join(SPACK_EXTERNAL_LIBS, n)
                      for n in ('functools.pyc', 'ordereddict.pyc')]
for pyc_file in orphaned_pyc_files:
    if not os.path.exists(pyc_file):
        continue
    try:
        os.remove(pyc_file)
    except OSError as e:
        print "WARNING: Spack may fail mysteriously. Couldn't remove orphaned .pyc file: %s" % pyc_file
# If there is no working directory, use the spack prefix.
try:
@@ -72,6 +97,8 @@ spec expressions:
parser.add_argument('-d', '--debug', action='store_true',
                    help="Write out debug logs during compile")
parser.add_argument('-D', '--pdb', action='store_true',
                    help="Run spack under the pdb debugger")
parser.add_argument('-k', '--insecure', action='store_true',
                    help="Do not check ssl certificates when downloading.")
parser.add_argument('-m', '--mock', action='store_true',
@@ -113,8 +140,8 @@ def main():
spack.spack_working_dir = working_dir
if args.mock:
    from spack.packages import PackageDB
    spack.db = PackageDB(spack.mock_packages_path)
    from spack.repository import RepoPath
    spack.repo.swap(RepoPath(spack.mock_packages_path))
# If the user asked for it, don't check ssl certs.
if args.insecure:
@@ -131,7 +158,7 @@ def main():
sys.stderr.write('\n')
tty.die("Keyboard interrupt.")
# Allow commands to return values if they want to exit with some ohter code.
# Allow commands to return values if they want to exit with some other code.
if return_val is None:
    sys.exit(0)
elif isinstance(return_val, int):
@@ -142,5 +169,8 @@ def main():
if args.profile:
    import cProfile
    cProfile.run('main()', sort='tottime')
elif args.pdb:
    import pdb
    pdb.run('main()')
else:
    main()


@@ -7,7 +7,7 @@
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://scalability-llnl.github.io/spack
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify

etc/spack/modules.yaml Normal file

@@ -0,0 +1,8 @@
# -------------------------------------------------------------------------
# This is the default spack module files generation configuration.
#
# Changes to this file will affect all users of this spack install,
# although users can override these settings in their ~/.spack/modules.yaml.
# -------------------------------------------------------------------------
modules:
  enable: ['tcl', 'dotkit']

etc/spack/repos.yaml Normal file

@@ -0,0 +1,8 @@
# -------------------------------------------------------------------------
# This is the default spack repository configuration.
#
# Changes to this file will affect all users of this spack install,
# although users can override these settings in their ~/.spack/repos.yaml.
# -------------------------------------------------------------------------
repos:
  - $spack/var/spack/repos/builtin
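Order matters in this list: when the same package exists in several repositories, the one listed first takes precedence. A user override in `~/.spack/repos.yaml` might look like the sketch below; the `/home/me/proto-repo` path is hypothetical.

```yaml
# Hypothetical ~/.spack/repos.yaml: search a personal repository
# before the builtin one that ships with Spack.
repos:
  - /home/me/proto-repo
  - $spack/var/spack/repos/builtin
```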


@@ -149,26 +149,46 @@ customize an installation in :ref:`sec-specs`.
``spack uninstall``
~~~~~~~~~~~~~~~~~~~~~
To uninstall a package, type ``spack uninstall <package>``. This will
completely remove the directory in which the package was installed.
To uninstall a package, type ``spack uninstall <package>``. This will ask the user for
confirmation and, if confirmed, completely remove the directory in which the package was installed.
.. code-block:: sh
spack uninstall mpich
If there are still installed packages that depend on the package to be
uninstalled, spack will refuse to uninstall it. You can override this
behavior with ``spack uninstall -f <package>``, but you risk breaking
other installed packages. In general, it is safer to remove dependent
packages *before* removing their dependencies.
uninstalled, spack will refuse to uninstall it.
A line like ``spack uninstall mpich`` may be ambiguous, if multiple
``mpich`` configurations are installed. For example, if both
To uninstall a package and every package that depends on it, you may give the
``--dependents`` option.
.. code-block:: sh
spack uninstall --dependents mpich
will display a list of all the packages that depend on ``mpich`` and, upon confirmation,
will uninstall them in the right order.
A line like
.. code-block:: sh
spack uninstall mpich
may be ambiguous if multiple ``mpich`` configurations are installed. For example, if both
``mpich@3.0.2`` and ``mpich@3.1`` are installed, ``mpich`` could refer
to either one. Because it cannot determine which one to uninstall,
Spack will ask you to provide a version number to remove the
ambiguity. As an example, ``spack uninstall mpich@3.1`` is
unambiguous in this scenario.
Spack will ask you either to provide a version number to remove the
ambiguity or use the ``--all`` option to uninstall all of the matching packages.
You may force uninstall a package with the ``--force`` option
.. code-block:: sh
spack uninstall --force mpich
but you risk breaking other installed packages. In general, it is safer to remove dependent
packages *before* removing their dependencies, or to use the ``--dependents`` option.
Seeing installed packages
@@ -357,7 +377,7 @@ Spack, you can simply run ``spack compiler add`` with the path to
where the compiler is installed. For example::
$ spack compiler add /usr/local/tools/ic-13.0.079
==> Added 1 new compiler to /Users/gamblin2/.spackconfig
==> Added 1 new compiler to /Users/gamblin2/.spack/compilers.yaml
intel@13.0.079
Or you can run ``spack compiler add`` with no arguments to force
@@ -367,7 +387,7 @@ installed, but you know that new compilers have been added to your
$ module load gcc-4.9.0
$ spack compiler add
==> Added 1 new compiler to /Users/gamblin2/.spackconfig
==> Added 1 new compiler to /Users/gamblin2/.spack/compilers.yaml
gcc@4.9.0
This loads the environment module for gcc-4.9.0 to get it into the
@@ -398,27 +418,34 @@ Manual compiler configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If auto-detection fails, you can manually configure a compiler by
editing your ``~/.spackconfig`` file. You can do this by running
``spack config edit``, which will open the file in your ``$EDITOR``.
editing your ``~/.spack/compilers.yaml`` file. You can do this by running
``spack config edit compilers``, which will open the file in your ``$EDITOR``.
Each compiler configuration in the file looks like this::
...
[compiler "intel@15.0.0"]
cc = /usr/local/bin/icc-15.0.024-beta
cxx = /usr/local/bin/icpc-15.0.024-beta
f77 = /usr/local/bin/ifort-15.0.024-beta
fc = /usr/local/bin/ifort-15.0.024-beta
...
chaos_5_x86_64_ib:
  ...
  intel@15.0.0:
    cc: /usr/local/bin/icc-15.0.024-beta
    cxx: /usr/local/bin/icpc-15.0.024-beta
    f77: /usr/local/bin/ifort-15.0.024-beta
    fc: /usr/local/bin/ifort-15.0.024-beta
  ...
The chaos_5_x86_64_ib string is an architecture string, and multiple
compilers can be listed underneath an architecture. The architecture
string may be replaced with the string 'all' to signify compilers that
work on all architectures.
For compilers, like ``clang``, that do not support Fortran, put
``None`` for ``f77`` and ``fc``::
[compiler "clang@3.3svn"]
cc = /usr/bin/clang
cxx = /usr/bin/clang++
f77 = None
fc = None
clang@3.3svn:
  cc: /usr/bin/clang
  cxx: /usr/bin/clang++
  f77: None
  fc: None
Once you save the file, the configured compilers will show up in the
list displayed by ``spack compilers``.
@@ -767,6 +794,34 @@ Environment modules
Spack provides some limited integration with environment module
systems to make it easier to use the packages it provides.
Installing Environment Modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to use Spack's generated environment modules, you must have
installed the *Environment Modules* package. On many Linux
distributions, this can be installed from the vendor's repository.
For example: ``yum install environment-modules``
(Fedora/RHEL/CentOS). If your Linux distribution does not have
Environment Modules, you can get it with Spack:
1. Install with::

     spack install environment-modules

2. Activate with::

     MODULES_HOME=`spack location -i environment-modules`
     MODULES_VERSION=`ls -1 $MODULES_HOME/Modules | head -1`
     ${MODULES_HOME}/Modules/${MODULES_VERSION}/bin/add.modules
This adds lines to your ``.bashrc`` (or similar) file, enabling Environment
Modules when you log in. It will ask your permission before changing
any files.
Spack and Environment Modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can enable shell support by sourcing some files in the
``/share/spack`` directory.
@@ -896,7 +951,7 @@ Or, similarly with modules, you could type:
$ spack load mpich %gcc@4.4.7
These commands will add appropriate directories to your ``PATH``,
``MANPATH``, and ``LD_LIBRARY_PATH``. When you no longer want to use
``MANPATH``, ``CPATH``, and ``LD_LIBRARY_PATH``. When you no longer want to use
a package, you can type unload or unuse similarly:
.. code-block:: sh


@@ -6,7 +6,7 @@
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://scalability-llnl.github.io/spack
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
@@ -43,6 +43,7 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('exts'))
sys.path.insert(0, os.path.abspath('../external'))
# Add the Spack bin directory to the path so that we can use its output in docs.
spack_root = '../../..'


@@ -73,19 +73,32 @@ with a high level view of Spack's directory structure::
spack/                     <- installation root
   bin/
      spack                <- main spack executable
   etc/
      spack/               <- Spack config files.
                              Can be overridden by files in ~/.spack.
   var/
      spack/               <- build & stage directories
         repos/            <- contains package repositories
            builtin/       <- pkg repository that comes with Spack
               repo.yaml   <- descriptor for the builtin repository
               packages/   <- directories under here contain packages
   opt/
      spack/               <- packages are installed here
   lib/
      spack/
         docs/             <- source for this documentation
         env/              <- compiler wrappers for build environment
         external/         <- external libs included in Spack distro
         llnl/             <- some general-use libraries
         spack/            <- spack module; contains Python code
            cmd/           <- each file in here is a spack subcommand
            compilers/     <- compiler description files
            packages/      <- each file in here is a spack package
            test/          <- unit test modules
            util/          <- common code


@@ -6,7 +6,7 @@
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://scalability-llnl.github.io/spack
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify


@@ -6,7 +6,7 @@
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://scalability-llnl.github.io/spack
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify


@@ -103,7 +103,7 @@ creates a simple python file:
It doesn't take much python coding to get from there to a working
package:
.. literalinclude:: ../../../var/spack/packages/libelf/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/libelf/package.py
:lines: 25-
Spack also provides wrapper functions around common commands like


@@ -5,11 +5,11 @@ Download
--------------------
Getting spack is easy. You can clone it from the `github repository
<https://github.com/scalability-llnl/spack>`_ using this command:
<https://github.com/llnl/spack>`_ using this command:
.. code-block:: sh
$ git clone https://github.com/scalability-llnl/spack.git
$ git clone https://github.com/llnl/spack.git
This will create a directory called ``spack``. We'll assume that the
full path to this directory is in the ``SPACK_ROOT`` environment
@@ -22,7 +22,7 @@ go:
$ spack install libelf
For a richer experience, use Spack's `shell support
<http://scalability-llnl.github.io/spack/basic_usage.html#environment-modules>`_:
<http://software.llnl.gov/spack/basic_usage.html#environment-modules>`_:
.. code-block:: sh


@@ -18,18 +18,18 @@ configurations can coexist on the same system.
Most importantly, Spack is *simple*. It offers a simple *spec* syntax
so that users can specify versions and configuration options
concisely. Spack is also simple for package authors: package files
are writtin in pure Python, and specs allow package authors to
are written in pure Python, and specs allow package authors to
maintain a single file for many different builds of the same package.
See the :doc:`features` for examples and highlights.
Get spack from the `github repository
<https://github.com/scalability-llnl/spack>`_ and install your first
<https://github.com/llnl/spack>`_ and install your first
package:
.. code-block:: sh
$ git clone https://github.com/scalability-llnl/spack.git
$ git clone https://github.com/llnl/spack.git
$ cd spack/bin
$ ./spack install libelf


@@ -38,7 +38,7 @@ contains tarballs for each package, named after each package.
.. note::
Archives are **not** named exactly they were in the package's fetch
Archives are **not** named exactly the way they were in the package's fetch
URL. They have the form ``<name>-<version>.<extension>``, where
``<name>`` is Spack's name for the package, ``<version>`` is the
version of the tarball, and ``<extension>`` is whatever format the
@@ -186,7 +186,7 @@ Each mirror has a name so that you can refer to it again later.
``spack mirror list``
----------------------------
If you want to see all the mirrors Spack knows about you can run ``spack mirror list``::
To see all the mirrors Spack knows about, run ``spack mirror list``::
$ spack mirror list
local_filesystem file:///Users/gamblin2/spack-mirror-2014-06-24
@@ -196,7 +196,7 @@ If you want to see all the mirrors Spack knows about you can run ``spack mirror
``spack mirror remove``
----------------------------
And, if you want to remove a mirror, just remove it by name::
To remove a mirror by name::
$ spack mirror remove local_filesystem
$ spack mirror list
@@ -205,12 +205,11 @@ And, if you want to remove a mirror, just remove it by name::
Mirror precedence
----------------------------
Adding a mirror really just adds a section in ``~/.spackconfig``::
Adding a mirror really adds a line in ``~/.spack/mirrors.yaml``::
[mirror "local_filesystem"]
url = file:///Users/gamblin2/spack-mirror-2014-06-24
[mirror "remote_server"]
url = https://example.com/some/web-hosted/directory/spack-mirror-2014-06-24
mirrors:
  local_filesystem: file:///Users/gamblin2/spack-mirror-2014-06-24
  remote_server: https://example.com/some/web-hosted/directory/spack-mirror-2014-06-24
If you want to change the order in which mirrors are searched for
packages, you can edit this file and reorder the sections. Spack will


@@ -84,7 +84,7 @@ always choose to download just one tarball initially, and run
If it fails entirely, you can get minimal boilerplate by using
:ref:`spack-edit-f`, or you can manually create a directory and
``package.py`` file for the package in ``var/spack/packages``.
``package.py`` file for the package in ``var/spack/repos/builtin/packages``.
.. note::
@@ -203,7 +203,7 @@ edit`` command:
So, if you used ``spack create`` to create a package, then saved and
closed the resulting file, you can get back to it with ``spack edit``.
The ``cmake`` package actually lives in
``$SPACK_ROOT/var/spack/packages/cmake/package.py``, but this provides
``$SPACK_ROOT/var/spack/repos/builtin/packages/cmake/package.py``, but this provides
a much simpler shortcut and saves you the trouble of typing the full
path.
@@ -269,18 +269,18 @@ live in Spack's directory structure. In general, `spack-create`_ and
`spack-edit`_ handle creating package files for you, so you can skip
most of the details here.
``var/spack/packages``
``var/spack/repos/builtin/packages``
~~~~~~~~~~~~~~~~~~~~~~~
A Spack installation directory is structured like a standard UNIX
install prefix (``bin``, ``lib``, ``include``, ``var``, ``opt``,
etc.). Most of the code for Spack lives in ``$SPACK_ROOT/lib/spack``.
Packages themselves live in ``$SPACK_ROOT/var/spack/packages``.
Packages themselves live in ``$SPACK_ROOT/var/spack/repos/builtin/packages``.
If you ``cd`` to that directory, you will see directories for each
package:
.. command-output:: cd $SPACK_ROOT/var/spack/packages; ls -CF
.. command-output:: cd $SPACK_ROOT/var/spack/repos/builtin/packages; ls -CF
:shell:
:ellipsis: 10
@@ -288,7 +288,7 @@ Each directory contains a file called ``package.py``, which is where
all the python code for the package goes. For example, the ``libelf``
package lives in::
$SPACK_ROOT/var/spack/packages/libelf/package.py
$SPACK_ROOT/var/spack/repos/builtin/packages/libelf/package.py
Alongside the ``package.py`` file, a package may contain extra
directories or files (like patches) that it needs to build.
@@ -301,7 +301,7 @@ Packages are named after the directory containing ``package.py``. So,
``libelf``'s ``package.py`` lives in a directory called ``libelf``.
The ``package.py`` file defines a class called ``Libelf``, which
extends Spack's ``Package`` class. for example, here is
``$SPACK_ROOT/var/spack/packages/libelf/package.py``:
``$SPACK_ROOT/var/spack/repos/builtin/packages/libelf/package.py``:
.. code-block:: python
:linenos:
@@ -328,7 +328,7 @@ these:
$ spack install libelf@0.8.13
Spack sees the package name in the spec and looks for
``libelf/package.py`` in ``var/spack/packages``. Likewise, if you say
``libelf/package.py`` in ``var/spack/repos/builtin/packages``. Likewise, if you say
``spack install py-numpy``, then Spack looks for
``py-numpy/package.py``.
@@ -401,6 +401,35 @@ construct the new one for ``8.2.1``.
When you supply a custom URL for a version, Spack uses that URL
*verbatim* and does not perform extrapolation.
Skipping the expand step
~~~~~~~~~~~~~~~~~~~~~~~~~~
Spack normally expands archives automatically after downloading
them. If you want to skip this step (e.g., for self-extracting
executables and other custom archive types), you can add
``expand=False`` to a ``version`` directive.
.. code-block:: python
version('8.2.1', '4136d7b4c04df68b686570afa26988ac',
        url='http://example.com/foo-8.2.1-special-version.tar.gz', expand=False)
When ``expand`` is set to ``False``, Spack sets the current working
directory to the directory containing the downloaded archive before it
calls your ``install`` method. Within ``install``, the path to the
downloaded archive is available as ``self.stage.archive_file``.
Here is an example snippet for packages distributed as self-extracting
archives. The example sets permissions on the downloaded file to make
it executable, then runs it with some arguments.
.. code-block:: python
def install(self, spec, prefix):
    set_executable(self.stage.archive_file)
    installer = Executable(self.stage.archive_file)
    installer('--prefix=%s' % prefix, 'arg1', 'arg2', 'etc.')
Checksums
~~~~~~~~~~~~~~~~~
@@ -632,7 +661,7 @@ Default
revision instead.
Revisions
Add ``hg`` and ``revision``parameters:
Add ``hg`` and ``revision`` parameters:
.. code-block:: python
@@ -703,7 +732,7 @@ supply is a filename, then the patch needs to live within the spack
source tree. For example, the patch above lives in a directory
structure like this::
$SPACK_ROOT/var/spack/packages/
$SPACK_ROOT/var/spack/repos/builtin/packages/
mvapich2/
package.py
ad_lustre_rwcontig_open_source.patch
@@ -1524,6 +1553,69 @@ This is useful when you want to know exactly what Spack will do when
you ask for a particular spec.
``Concretization Policies``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
A user may have certain preferences for how packages should
be concretized on their system. For example, one user may prefer packages
built with OpenMPI and the Intel compiler. Another user may prefer
packages be built with MVAPICH and GCC.
Spack can be configured to prefer certain compilers, package
versions, depends_on, and variants during concretization.
The preferred configuration can be controlled via the
``~/.spack/packages.yaml`` file for user configurations, or the
``etc/spack/packages.yaml`` site configuration.
Here's an example packages.yaml file that sets preferred packages:
.. code-block:: yaml
packages:
  dyninst:
    compiler: [gcc@4.9]
    variants: +debug
  gperftools:
    version: [2.2, 2.4, 2.3]
  all:
    compiler: [gcc@4.4.7, gcc@4.6:, intel, clang, pgi]
    providers:
      mpi: [mvapich, mpich, openmpi]
At a high level, this example is specifying how packages should be
concretized. The dyninst package should prefer using gcc 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich for
its MPI and gcc 4.4.7 (except for Dyninst, which overrides this by preferring gcc 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
Each packages.yaml file begins with the string ``packages:`` and
package names are specified on the next level. The special string ``all``
applies settings to each package. Underneath each package name is
one or more components: ``compiler``, ``variants``, ``version``,
or ``providers``. Each component has an ordered list of spec
``constraints``, with earlier entries in the list being preferred over
later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4
as the 2.4 version of gperftools is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depends_on`` (e.g., mpi) and a list of rules for fulfilling that
dependency.
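As a sketch, a ``providers`` entry pairs each virtual dependency name with an ordered list of implementations; the ``blas`` entry below is a hypothetical addition for illustration, not part of the example above.

```yaml
packages:
  all:
    providers:
      # Ordered by preference: mvapich is chosen first when a package
      # depends on the virtual 'mpi' package.
      mpi: [mvapich, mpich, openmpi]
      # Hypothetical extra virtual dependency, for illustration only.
      blas: [openblas, atlas]
```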
.. _install-method:
Implementing the ``install`` method
@@ -1533,7 +1625,7 @@ The last element of a package is its ``install()`` method. This is
where the real work of installation happens, and it's the main part of
the package you'll need to customize for each piece of software.
.. literalinclude:: ../../../var/spack/packages/libelf/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/libelf/package.py
:start-after: 0.8.12
:linenos:
@ -1711,15 +1803,15 @@ Compile-time library search paths
* ``-L$dep_prefix/lib``
* ``-L$dep_prefix/lib64``
Runtime library search paths (RPATHs)
* ``-Wl,-rpath=$dep_prefix/lib``
* ``-Wl,-rpath=$dep_prefix/lib64``
* ``-Wl,-rpath,$dep_prefix/lib``
* ``-Wl,-rpath,$dep_prefix/lib64``
Include search paths
* ``-I$dep_prefix/include``
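Taken together, the flag injection described above amounts to simple list surgery on the argument vector. The helper below is a sketch of the idea for a compile-and-link invocation, not the wrapper's real code (which is a shell script):

```python
def add_dep_flags(args, dep_prefix):
    # Prepend the include, library, and RPATH flags for one dependency
    # prefix, as the compiler wrapper does for a ccld invocation.
    # The original arguments keep their relative order.
    flags = ["-I%s/include" % dep_prefix]
    for libdir in ("lib", "lib64"):
        flags.append("-L%s/%s" % (dep_prefix, libdir))
        flags.append("-Wl,-rpath,%s/%s" % (dep_prefix, libdir))
    return flags + list(args)

print(add_dep_flags(["main.c", "-o", "main"], "/opt/libelf"))
```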
An example of this would be the ``libdwarf`` build, which has one
dependency: ``libelf``. Every call to ``cc`` in the ``libdwarf``
build will have ``-I$LIBELF_PREFIX/include``,
``-L$LIBELF_PREFIX/lib``, and ``-Wl,-rpath=$LIBELF_PREFIX/lib``
``-L$LIBELF_PREFIX/lib``, and ``-Wl,-rpath,$LIBELF_PREFIX/lib``
inserted on the command line. This is done transparently to the
project's build system, which will just think it's using a system
where ``libelf`` is readily available. Because of this, you **do
@ -1752,6 +1844,20 @@ dedicated process.
.. _prefix-objects:
Failing the build
----------------------
Sometimes you don't want a package to successfully install unless some
condition is true. You can explicitly cause the build to fail from
``install()`` by raising an ``InstallError``, for example:
.. code-block:: python
if spec.architecture.startswith('darwin'):
raise InstallError('This package does not build on Mac OS X!')
Prefix objects
----------------------
@ -2068,6 +2174,62 @@ package, this allows us to avoid race conditions in the library's
build system.
.. _sanity-checks:
Sanity checking an installation
--------------------------------
By default, Spack assumes that a build has failed if nothing is
written to the install prefix, and that it has succeeded if anything
(a file, a directory, etc.) is written to the install prefix after
``install()`` completes.
Consider a simple autotools build like this:
.. code-block:: python
def install(self, spec, prefix):
configure("--prefix=" + prefix)
make()
make("install")
If you are using standard autotools or CMake, ``configure`` and
``make`` will not write anything to the install prefix. Only ``make
install`` writes the files, and only once the build is already
complete. Not all builds are like this. Many builds of scientific
software modify the install prefix *before* ``make install``. Builds
like this can falsely report that they were successfully installed if
an error occurs before the install is complete but after files have
been written to the ``prefix``.
``sanity_check_is_file`` and ``sanity_check_is_dir``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can optionally specify *sanity checks* to deal with this problem.
Add properties like this to your package:
.. code-block:: python
class MyPackage(Package):
...
sanity_check_is_file = ['include/libelf.h']
sanity_check_is_dir = ['lib']
def install(self, spec, prefix):
configure("--prefix=" + prefix)
make()
make("install")
Now, after ``install()`` runs, Spack will check whether
``$prefix/include/libelf.h`` exists and is a file, and whether
``$prefix/lib`` exists and is a directory. If the checks fail, then
the build will fail and the install prefix will be removed. If they
succeed, Spack considers the build successful and keeps the prefix in
place.
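The semantics of these checks can be approximated as follows. The ``sanity_check`` helper here is hypothetical, shown only to make the pass/fail behavior concrete:

```python
import os
import tempfile

def sanity_check(prefix, files=(), dirs=()):
    # Mirror the semantics described above: every listed path must exist
    # under the prefix and be of the right kind, or the install fails.
    for f in files:
        if not os.path.isfile(os.path.join(prefix, f)):
            raise RuntimeError("Install failed: missing file " + f)
    for d in dirs:
        if not os.path.isdir(os.path.join(prefix, d)):
            raise RuntimeError("Install failed: missing directory " + d)

# Simulate an install prefix that wrote a header but never created lib/.
prefix = tempfile.mkdtemp()
os.makedirs(os.path.join(prefix, "include"))
open(os.path.join(prefix, "include", "libelf.h"), "w").close()

sanity_check(prefix, files=["include/libelf.h"], dirs=["include"])  # passes
try:
    sanity_check(prefix, dirs=["lib"])  # fails: lib/ was never created
    failed = False
except RuntimeError:
    failed = True
```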
.. _file-manipulation:
File manipulation functions
@ -2108,6 +2270,15 @@ Filtering functions
Examples:
#. Filtering a Makefile to force it to use Spack's compiler wrappers:
.. code-block:: python
filter_file(r'^CC\s*=.*', spack_cc, 'Makefile')
filter_file(r'^CXX\s*=.*', spack_cxx, 'Makefile')
filter_file(r'^F77\s*=.*', spack_f77, 'Makefile')
filter_file(r'^FC\s*=.*', spack_fc, 'Makefile')
#. Replacing ``#!/usr/bin/perl`` with ``#!/usr/bin/env perl`` in ``bib2xhtml``:
.. code-block:: python

@ -54,87 +54,73 @@ more elements to the list to indicate where your own site's temporary
directory is.
.. _concretization-policies:
External Packages
~~~~~~~~~~~~~~~~~~~~~
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
Concretization policies
----------------------------
External packages are configured through the ``packages.yaml`` file found
in a Spack installation's ``etc/spack/`` or a user's ``~/.spack/``
directory. Here's an example of an external configuration:
When a user asks for a package like ``mpileaks`` to be installed,
Spack has to make decisions like what version should be installed,
what compiler to use, and how its dependencies should be configured.
This process is called *concretization*, and it's covered in detail in
:ref:`its own section <abstract-and-concrete>`.
.. code-block:: yaml
The default concretization policies are in the
:py:mod:`spack.concretize` module, specifically in the
:py:class:`spack.concretize.DefaultConcretizer` class. These are the
important methods used in the concretization process:
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7=chaos_5_x86_64_ib: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7=chaos_5_x86_64_ib+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1=chaos_5_x86_64_ib: /opt/openmpi-1.6.5-intel
* :py:meth:`concretize_version(self, spec) <spack.concretize.DefaultConcretizer.concretize_version>`
* :py:meth:`concretize_architecture(self, spec) <spack.concretize.DefaultConcretizer.concretize_architecture>`
* :py:meth:`concretize_compiler(self, spec) <spack.concretize.DefaultConcretizer.concretize_compiler>`
* :py:meth:`choose_provider(self, spec, providers) <spack.concretize.DefaultConcretizer.choose_provider>`
This example lists three installations of OpenMPI, one built with gcc,
one built with gcc and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory.
The first three take a :py:class:`Spec <spack.spec.Spec>` object and
modify it by adding constraints for the version. For example, if the
input spec had a version range like `1.0:5.0.3`, then the
``concretize_version`` method should set the spec's version to a
*single* version in that range. Likewise, ``concretize_architecture``
selects an architecture when the input spec does not have one, and
``concretize_compiler`` needs to set both a concrete compiler and a
concrete compiler version.
Each ``packages.yaml`` begins with a ``packages:`` token, followed
by a list of package names. To specify externals, add a ``paths``
token under the package name, which lists externals in a
``spec : /path`` format. Each spec should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
``choose_provider()`` affects how concrete implementations are chosen
based on a virtual dependency spec. The input spec is some virtual
dependency and the ``providers`` index is a :py:class:`ProviderIndex
<spack.packages.ProviderIndex>` object. The ``ProviderIndex`` maps
the virtual spec to specs for possible implementations, and
``choose_provider()`` should simply choose one of these. The
``concretize_*`` methods will be called on the chosen implementation
later, so there is no need to fully concretize the spec when returning
it.
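A minimal model of ``choose_provider()`` is sketched below. The flat list-and-preference data shapes are assumptions for illustration; the real ``ProviderIndex`` is richer:

```python
def choose_provider(providers, preferred):
    # Pick the first implementation matching a preferred provider name;
    # fall back to the first available provider otherwise.
    for name in preferred:
        for candidate in providers:
            if candidate.startswith(name):
                return candidate
    return providers[0]

print(choose_provider(['openmpi@1.6.5', 'mvapich@1.9', 'mpich@3.0.4'],
                      ['mvapich', 'mpich', 'openmpi']))  # prints mvapich@1.9
```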
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may never be built.
The ``DefaultConcretizer`` is intended to provide sensible defaults
for each policy, but there are certain choices that it can't know
about. For example, one site might prefer ``OpenMPI`` over ``MPICH``,
or another might prefer an old version of some packages. These types
of special cases can be integrated with custom concretizers.
The packages configuration can tell Spack to use an external location
for certain package versions, but it does not restrict Spack to using
external packages. In the above example, if an OpenMPI 1.8.4 became
available Spack may choose to start building and linking with that version
rather than continue using the pre-installed OpenMPI versions.
Writing a custom concretizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
To write your own concretizer, you need only subclass
``DefaultConcretizer`` and override the methods you want to change.
For example, you might write a class like this to change *only* the
``concretize_version()`` behavior:
.. code-block:: yaml
.. code-block:: python
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7=chaos_5_x86_64_ib: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7=chaos_5_x86_64_ib+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1=chaos_5_x86_64_ib: /opt/openmpi-1.6.5-intel
buildable: False
from spack.concretize import DefaultConcretizer
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI, and it will instead always rely on a pre-built
OpenMPI. Similar to ``paths``, ``buildable`` is specified as a property under
a package name.
class MyConcretizer(DefaultConcretizer):
def concretize_version(self, spec):
# implement custom logic here.
Once you have written your custom concretizer, you can make Spack use
it by editing ``globals.py``. Find this part of the file:
.. code-block:: python
#
# This controls how things are concretized in spack.
# Replace it with a subclass if you want different
# policies.
#
concretizer = DefaultConcretizer()
Set concretizer to *your own* class instead of the default:
.. code-block:: python
concretizer = MyConcretizer()
The next time you run Spack, your changes should take effect.
The ``buildable`` flag does not need to be paired with external packages.
It can also be used on its own to forbid packages that may be
buggy or otherwise undesirable.
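The effect of ``buildable: False`` can be sketched as a guard during concretization. This is illustrative pseudologic under assumed data shapes, not Spack's code:

```python
def resolve(pkg, externals, buildable=True):
    # Use a configured external if one matches; otherwise build from
    # source -- unless the package was flagged non-buildable.
    if pkg in externals:
        return externals[pkg]          # e.g. '/opt/openmpi-1.4.3'
    if not buildable:
        raise RuntimeError("no external for %s and building is disabled" % pkg)
    return "build-from-source"

externals = {"openmpi@1.4.3%gcc@4.4.7": "/opt/openmpi-1.4.3"}
print(resolve("openmpi@1.4.3%gcc@4.4.7", externals, buildable=False))
print(resolve("openmpi@1.8.4", externals))
```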
Profiling

lib/spack/env/cc vendored
@ -7,7 +7,7 @@
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://scalability-llnl.github.io/spack
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
@ -39,7 +39,7 @@
#
# This is the list of environment variables that need to be set before
# the script runs. They are set by routines in spack.build_environment
# the script runs. They are set by routines in spack.build_environment
# as part of spack.package.Package.do_install().
parameters="
SPACK_PREFIX
@ -51,7 +51,7 @@ SPACK_SHORT_SPEC"
# The compiler input variables are checked for sanity later:
# SPACK_CC, SPACK_CXX, SPACK_F77, SPACK_FC
# The default compiler flags are passed from the config files:
# SPACK_CFLAGS, SPACK_CXXFLAGS, SPACK_FFLAGS, SPACK_LDFLAGS
# SPACK_CFLAGS, SPACK_CXXFLAGS, SPACK_FFLAGS, SPACK_LDFLAGS, SPACK_LDLIBS
# Debug flag is optional; set to true for debug logging:
# SPACK_DEBUG
# Test command is used to unit test the compiler script.
@ -67,12 +67,11 @@ function die {
}
for param in $parameters; do
if [ -z "${!param}" ]; then
die "Spack compiler must be run from spack! Input $param was missing!"
if [[ -z ${!param} ]]; then
die "Spack compiler must be run from Spack! Input '$param' is missing."
fi
done
#
# Figure out the type of compiler, the language, and the mode so that
# the compiler script knows what to do.
#
@ -80,249 +79,185 @@ done
# 'command' is set based on the input command to $SPACK_[CC|CXX|F77|F90]
#
# 'mode' is set to one of:
# vcheck version check
# cpp preprocess
# cc compile
# as assemble
# ld link
# ccld compile & link
# cpp preprocessor
# vcheck version check
#
command=$(basename "$0")
case "$command" in
cc|gcc|c89|c99|clang|xlc)
cpp)
mode=cpp
;;
cc|gcc|c89|c99|clang|xlc|icc|pgcc)
command=("$SPACK_CC")
if [ "$SPACK_CFLAGS" ]; then
for flag in ${SPACK_CFLAGS[@]}; do
command+=("$flag");
done
fi
language="C"
lang_flags=C
;;
c++|CC|g++|clang++|xlC)
command=("$SPACK_CXX")
if [ "$SPACK_CXXFLAGS" ]; then
for flag in ${SPACK_CXXFLAGS[@]}; do
command+=("$flag");
done
fi
language="C++"
lang_flags=CXX
;;
f77|xlf)
command=("$SPACK_F77")
if [ "$SPACK_FFLAGS" ]; then
for flag in ${SPACK_FFLAGS[@]}; do
command+=("$flag");
done
fi
language="Fortran 77"
lang_flags=F
;;
fc|f90|f95|xlf90)
fc|f90|f95|xlf90|gfortran|ifort|pgfortran|nagfor)
command="$SPACK_FC"
if [ "$SPACK_FFLAGS" ]; then
for flag in ${SPACK_FFLAGS[@]}; do
command+=("$flag");
done
fi
language="Fortran 90"
;;
cpp)
mode=cpp
if [ "$SPACK_CPPFLAGS" ]; then
for flag in ${SPACK_CPPFLAGS[@]}; do
command+=("$flag");
done
fi
lang_flags=F
;;
ld)
mode=ld
if [ "$SPACK_LDFLAGS" ]; then
for flag in ${SPACK_LDFLAGS[@]}; do
command+=("$flag");
done
fi
;;
*)
die "Unknown compiler: $command"
;;
esac
# Finish setting up the mode.
# Check for vcheck mode
if [ -z "$mode" ]; then
mode=ccld
if [ "$SPACK_LDFLAGS" ]; then
for flag in ${SPACK_LDFLAGS[@]}; do
command+=("$flag");
done
fi
for arg in "$@"; do
if [ "$arg" = -v -o "$arg" = -V -o "$arg" = --version -o "$arg" = -dumpversion ]; then
if [[ $arg == -v || $arg == -V || $arg == --version || $arg == -dumpversion ]]; then
mode=vcheck
break
elif [ "$arg" = -E ]; then
fi
done
fi
# Finish setting the mode
if [[ -z $mode ]]; then
mode=ccld
for arg in "$@"; do
if [[ $arg == -E ]]; then
mode=cpp
break
elif [ "$arg" = -c ]; then
elif [[ $arg == -S ]]; then
mode=as
break
elif [[ $arg == -c ]]; then
mode=cc
break
fi
done
fi
# Dump the version and exist if we're in testing mode.
if [ "$SPACK_TEST_COMMAND" = "dump-mode" ]; then
# Dump the version and exit if we're in testing mode.
if [[ $SPACK_TEST_COMMAND == dump-mode ]]; then
echo "$mode"
exit
fi
# Check that at least one of the real commands was actually selected,
# otherwise we don't know what to execute.
if [ -z "$command" ]; then
if [[ -z $command ]]; then
die "ERROR: Compiler '$SPACK_COMPILER_SPEC' does not support compiling $language programs."
fi
if [[ $mode == vcheck ]]; then
exec ${command} "$@"
fi
# Darwin's linker has a -r argument that merges object files together.
# It doesn't work with -rpath.
# This variable controls whether they are added.
add_rpaths=true
if [[ $mode == ld && $OSTYPE == darwin* ]]; then
for arg in "$@"; do
if [[ $arg == -r ]]; then
add_rpaths=false
break
fi
done
fi
# Save original command for debug logging
input_command="$@"
args=("$@")
#
# Now do real parsing of the command line args, trying hard to keep
# non-rpath linker arguments in the proper order w.r.t. other command
# line arguments. This is important for things like groups.
#
includes=()
libraries=()
libs=()
rpaths=()
other_args=()
while [ -n "$1" ]; do
case "$1" in
-I*)
arg="${1#-I}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
includes+=("$arg")
;;
-L*)
arg="${1#-L}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
libraries+=("$arg")
;;
-l*)
arg="${1#-l}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
libs+=("$arg")
;;
-Wl,*)
arg="${1#-Wl,}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
if [[ "$arg" = -rpath=* ]]; then
rpaths+=("${arg#-rpath=}")
elif [[ "$arg" = -rpath ]]; then
shift; arg="$1"
if [[ "$arg" != -Wl,* ]]; then
die "-Wl,-rpath was not followed by -Wl,*"
fi
rpaths+=("${arg#-Wl,}")
else
other_args+=("-Wl,$arg")
fi
;;
-Xlinker,*)
arg="${1#-Xlinker,}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
if [[ "$arg" = -rpath=* ]]; then
rpaths+=("${arg#-rpath=}")
elif [[ "$arg" = -rpath ]]; then
shift; arg="$1"
if [[ "$arg" != -Xlinker,* ]]; then
die "-Xlinker,-rpath was not followed by -Xlinker,*"
fi
rpaths+=("${arg#-Xlinker,}")
else
other_args+=("-Xlinker,$arg")
fi
;;
*)
other_args+=("$1")
;;
esac
shift
done
# Dump parsed values for unit testing if asked for
if [ -n "$SPACK_TEST_COMMAND" ]; then
IFS=$'\n'
case "$SPACK_TEST_COMMAND" in
dump-includes) echo "${includes[*]}";;
dump-libraries) echo "${libraries[*]}";;
dump-libs) echo "${libs[*]}";;
dump-rpaths) echo "${rpaths[*]}";;
dump-other-args) echo "${other_args[*]}";;
dump-all)
echo "INCLUDES:"
echo "${includes[*]}"
echo
echo "LIBRARIES:"
echo "${libraries[*]}"
echo
echo "LIBS:"
echo "${libs[*]}"
echo
echo "RPATHS:"
echo "${rpaths[*]}"
echo
echo "ARGS:"
echo "${other_args[*]}"
;;
*)
echo "ERROR: Unknown test command"
exit 1 ;;
esac
exit
fi
# Prepend cppflags, cflags, cxxflags, fflags, and ldflags
case "$mode" in
cpp|as|cc|ccld)
# Add cppflags
args=(${SPACK_CPPFLAGS[@]} "${args[@]}") ;;
*)
;;
esac
case "$mode" in
cc|ccld)
# Add c, cxx, and f flags
case $lang_flags in
C)
args=(${SPACK_CFLAGS[@]} "${args[@]}") ;;
CXX)
args=(${SPACK_CXXFLAGS[@]} "${args[@]}") ;;
F)
args=(${SPACK_FFLAGS[@]} "${args[@]}") ;;
esac
;;
*)
;;
esac
case "$mode" in
ld|ccld)
# Add ldflags
args=(${SPACK_LDFLAGS[@]} "${args[@]}") ;;
*)
;;
esac
# Read spack dependencies from the path environment variable
IFS=':' read -ra deps <<< "$SPACK_DEPENDENCIES"
for dep in "${deps[@]}"; do
if [ -d "$dep/include" ]; then
includes+=("$dep/include")
# Prepend include directories
if [[ -d $dep/include ]]; then
if [[ $mode == cpp || $mode == cc || $mode == as || $mode == ccld ]]; then
args=("-I$dep/include" "${args[@]}")
fi
fi
if [ -d "$dep/lib" ]; then
libraries+=("$dep/lib")
rpaths+=("$dep/lib")
# Prepend lib and RPATH directories
if [[ -d $dep/lib ]]; then
if [[ $mode == ccld ]]; then
$add_rpaths && args=("-Wl,-rpath,$dep/lib" "${args[@]}")
args=("-L$dep/lib" "${args[@]}")
elif [[ $mode == ld ]]; then
$add_rpaths && args=("-rpath" "$dep/lib" "${args[@]}")
args=("-L$dep/lib" "${args[@]}")
fi
fi
if [ -d "$dep/lib64" ]; then
libraries+=("$dep/lib64")
rpaths+=("$dep/lib64")
# Prepend lib64 and RPATH directories
if [[ -d $dep/lib64 ]]; then
if [[ $mode == ccld ]]; then
$add_rpaths && args=("-Wl,-rpath,$dep/lib64" "${args[@]}")
args=("-L$dep/lib64" "${args[@]}")
elif [[ $mode == ld ]]; then
$add_rpaths && args=("-rpath" "$dep/lib64" "${args[@]}")
args=("-L$dep/lib64" "${args[@]}")
fi
fi
done
# Include all -L's and prefix/whatever dirs in rpath
for dir in "${libraries[@]}"; do
[[ dir = $SPACK_INSTALL* ]] && rpaths+=("$dir")
done
rpaths+=("$SPACK_PREFIX/lib")
rpaths+=("$SPACK_PREFIX/lib64")
# Put the arguments together
args=()
for dir in "${includes[@]}"; do args+=("-I$dir"); done
args+=("${other_args[@]}")
for dir in "${libraries[@]}"; do args+=("-L$dir"); done
for lib in "${libs[@]}"; do args+=("-l$lib"); done
if [ "$mode" = ccld ]; then
for dir in "${rpaths[@]}"; do
args+=("-Wl,-rpath")
args+=("-Wl,$dir");
done
elif [ "$mode" = ld ]; then
for dir in "${rpaths[@]}"; do
args+=("-rpath")
args+=("$dir");
done
if [[ $mode == ccld ]]; then
$add_rpaths && args=("-Wl,-rpath,$SPACK_PREFIX/lib" "-Wl,-rpath,$SPACK_PREFIX/lib64" "${args[@]}")
elif [[ $mode == ld ]]; then
$add_rpaths && args=("-rpath" "$SPACK_PREFIX/lib" "-rpath" "$SPACK_PREFIX/lib64" "${args[@]}")
fi
# Add SPACK_LDLIBS to args
case "$mode" in
ld|ccld)
args=("${args[@]}" ${SPACK_LDLIBS[@]}) ;;
*)
;;
esac
#
# Unset pesky environment variables that could affect build sanity.
#
@ -336,41 +271,41 @@ unset DYLD_LIBRARY_PATH
#
IFS=':' read -ra env_path <<< "$PATH"
IFS=':' read -ra spack_env_dirs <<< "$SPACK_ENV_PATH"
spack_env_dirs+=(".")
spack_env_dirs+=("" ".")
PATH=""
for dir in "${env_path[@]}"; do
remove=""
for rm_dir in "${spack_env_dirs[@]}"; do
if [ "$dir" = "$rm_dir" ]; then remove=True; fi
done
if [ -z "$remove" ]; then
if [ -z "$PATH" ]; then
PATH="$dir"
else
PATH="$PATH:$dir"
addpath=true
for env_dir in "${spack_env_dirs[@]}"; do
if [[ $dir == $env_dir ]]; then
addpath=false
break
fi
done
if $addpath; then
PATH="${PATH:+$PATH:}$dir"
fi
done
export PATH
full_command=("${command[@]}")
full_command+=("${args[@]}")
full_command=("$command" "${args[@]}")
# In test command mode, write out full command for Spack tests.
if [[ $SPACK_TEST_COMMAND == dump-args ]]; then
echo "${full_command[@]}"
exit
elif [[ -n $SPACK_TEST_COMMAND ]]; then
die "ERROR: Unknown test command"
fi
#
# Write the input and output commands to debug logs if it's asked for.
#
if [ "$SPACK_DEBUG" = "TRUE" ]; then
if [[ $SPACK_DEBUG == TRUE ]]; then
input_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_SHORT_SPEC.in.log"
output_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_SHORT_SPEC.out.log"
echo "$input_command" >> $input_log
echo "$mode ${full_command[@]}" >> $output_log
echo "[$mode] $command $input_command" >> $input_log
echo "[$mode] ${full_command[@]}" >> $output_log
fi
exec "${full_command[@]}"

New vendored symbolic links, each containing ``../cc``:

lib/spack/env/clang/clang
lib/spack/env/clang/clang++
lib/spack/env/gcc/g++
lib/spack/env/gcc/gcc
lib/spack/env/gcc/gfortran
lib/spack/env/intel/icc
lib/spack/env/intel/icpc
lib/spack/env/intel/ifort
lib/spack/env/nag/nagfor
lib/spack/env/pgi/pgc++
lib/spack/env/pgi/pgcc
lib/spack/env/pgi/pgfortran
lib/spack/env/xl/xlc
lib/spack/env/xl/xlc++
lib/spack/env/xl/xlf
lib/spack/env/xl/xlf90
@ -6,7 +6,7 @@
# Written by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://scalability-llnl.github.io/spack
# For details, see https://github.com/llnl/spack
# Please also see the LICENSE file for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify

@ -1067,9 +1067,13 @@ class _SubParsersAction(Action):
class _ChoicesPseudoAction(Action):
def __init__(self, name, help):
def __init__(self, name, aliases, help):
metavar = dest = name
if aliases:
metavar += ' (%s)' % ', '.join(aliases)
sup = super(_SubParsersAction._ChoicesPseudoAction, self)
sup.__init__(option_strings=[], dest=name, help=help)
sup.__init__(option_strings=[], dest=dest, help=help,
metavar=metavar)
def __init__(self,
option_strings,
@ -1097,15 +1101,22 @@ def add_parser(self, name, **kwargs):
if kwargs.get('prog') is None:
kwargs['prog'] = '%s %s' % (self._prog_prefix, name)
aliases = kwargs.pop('aliases', ())
# create a pseudo-action to hold the choice help
if 'help' in kwargs:
help = kwargs.pop('help')
choice_action = self._ChoicesPseudoAction(name, help)
choice_action = self._ChoicesPseudoAction(name, aliases, help)
self._choices_actions.append(choice_action)
# create the parser and add it to the map
parser = self._parser_class(**kwargs)
self._name_parser_map[name] = parser
# make parser available under aliases also
for alias in aliases:
self._name_parser_map[alias] = parser
return parser
def _get_subactions(self):
@ -1123,8 +1134,9 @@ def __call__(self, parser, namespace, values, option_string=None):
try:
parser = self._name_parser_map[parser_name]
except KeyError:
tup = parser_name, ', '.join(self._name_parser_map)
msg = _('unknown parser %r (choices: %s)' % tup)
args = {'parser_name': parser_name,
'choices': ', '.join(self._name_parser_map)}
msg = _('unknown parser %(parser_name)r (choices: %(choices)s)') % args
raise ArgumentError(self, msg)
# parse all the remaining options into the namespace

lib/spack/external/jsonschema/COPYING vendored Normal file
@ -0,0 +1,19 @@
Copyright (c) 2013 Julian Berman
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

lib/spack/external/jsonschema/README.rst vendored Normal file
@ -0,0 +1,104 @@
==========
jsonschema
==========
``jsonschema`` is an implementation of `JSON Schema <http://json-schema.org>`_
for Python (supporting 2.6+ including Python 3).
.. code-block:: python
>>> from jsonschema import validate
>>> # A sample schema, like what we'd get from json.load()
>>> schema = {
... "type" : "object",
... "properties" : {
... "price" : {"type" : "number"},
... "name" : {"type" : "string"},
... },
... }
>>> # If no exception is raised by validate(), the instance is valid.
>>> validate({"name" : "Eggs", "price" : 34.99}, schema)
>>> validate(
... {"name" : "Eggs", "price" : "Invalid"}, schema
... ) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
ValidationError: 'Invalid' is not of type 'number'
Features
--------
* Full support for
`Draft 3 <https://python-jsonschema.readthedocs.org/en/latest/validate/#jsonschema.Draft3Validator>`_
**and** `Draft 4 <https://python-jsonschema.readthedocs.org/en/latest/validate/#jsonschema.Draft4Validator>`_
of the schema.
* `Lazy validation <https://python-jsonschema.readthedocs.org/en/latest/validate/#jsonschema.IValidator.iter_errors>`_
that can iteratively report *all* validation errors.
* Small and extensible
* `Programmatic querying <https://python-jsonschema.readthedocs.org/en/latest/errors/#module-jsonschema>`_
of which properties or items failed validation.
Release Notes
-------------
* A simple CLI was added for validation
* Validation errors now keep full absolute paths and absolute schema paths in
their ``absolute_path`` and ``absolute_schema_path`` attributes. The ``path``
and ``schema_path`` attributes are deprecated in favor of ``relative_path``
and ``relative_schema_path``\ .
*Note:* Support for Python 3.2 was dropped in this release, and installation
now uses setuptools.
Running the Test Suite
----------------------
``jsonschema`` uses the wonderful `Tox <http://tox.readthedocs.org>`_ for its
test suite. (It really is wonderful, if for some reason you haven't heard of
it, you really should use it for your projects).
Assuming you have ``tox`` installed (perhaps via ``pip install tox`` or your
package manager), just run ``tox`` in the directory of your source checkout to
run ``jsonschema``'s test suite on all of the versions of Python ``jsonschema``
supports. Note that you'll need to have all of those versions installed in
order to run the tests on each of them, otherwise ``tox`` will skip (and fail)
the tests on that version.
Of course you're also free to just run the tests on a single version with your
favorite test runner. The tests live in the ``jsonschema.tests`` package.
Community
---------
There's a `mailing list <https://groups.google.com/forum/#!forum/jsonschema>`_
for this implementation on Google Groups.
Please join, and feel free to send questions there.
Contributing
------------
I'm Julian Berman.
``jsonschema`` is on `GitHub <http://github.com/Julian/jsonschema>`_.
Get in touch, via GitHub or otherwise, if you've got something to contribute,
it'd be most welcome!
You can also generally find me on Freenode (nick: ``tos9``) in various
channels, including ``#python``.
If you feel overwhelmingly grateful, you can woo me with beer money on
`Gittip <https://www.gittip.com/Julian/>`_ or via Google Wallet with the email
in my GitHub profile.

@ -0,0 +1,26 @@
"""
An implementation of JSON Schema for Python
The main functionality is provided by the validator classes for each of the
supported JSON Schema versions.
Most commonly, :func:`validate` is the quickest way to simply validate a given
instance under a schema, and will create a validator for you.
"""
from jsonschema.exceptions import (
ErrorTree, FormatError, RefResolutionError, SchemaError, ValidationError
)
from jsonschema._format import (
FormatChecker, draft3_format_checker, draft4_format_checker,
)
from jsonschema.validators import (
Draft3Validator, Draft4Validator, RefResolver, validate
)
__version__ = "2.4.0"
# flake8: noqa

@ -0,0 +1,2 @@
from jsonschema.cli import main
main()

lib/spack/external/jsonschema/_format.py vendored Normal file
@ -0,0 +1,240 @@
import datetime
import re
import socket
from jsonschema.compat import str_types
from jsonschema.exceptions import FormatError
class FormatChecker(object):
"""
A ``format`` property checker.
JSON Schema does not mandate that the ``format`` property actually do any
validation. If validation is desired however, instances of this class can
be hooked into validators to enable format validation.
:class:`FormatChecker` objects always return ``True`` when asked about
formats that they do not know how to validate.
To check a custom format using a function that takes an instance and
returns a ``bool``, use the :meth:`FormatChecker.checks` or
:meth:`FormatChecker.cls_checks` decorators.
:argument iterable formats: the known formats to validate. This argument
can be used to limit which formats will be used
during validation.
"""
checkers = {}
def __init__(self, formats=None):
if formats is None:
self.checkers = self.checkers.copy()
else:
self.checkers = dict((k, self.checkers[k]) for k in formats)
def checks(self, format, raises=()):
"""
Register a decorated function as validating a new format.
:argument str format: the format that the decorated function will check
:argument Exception raises: the exception(s) raised by the decorated
function when an invalid instance is found. The exception object
will be accessible as the :attr:`ValidationError.cause` attribute
of the resulting validation error.
"""
def _checks(func):
self.checkers[format] = (func, raises)
return func
return _checks
cls_checks = classmethod(checks)
def check(self, instance, format):
"""
Check whether the instance conforms to the given format.
:argument instance: the instance to check
:type: any primitive type (str, number, bool)
:argument str format: the format that instance should conform to
:raises: :exc:`FormatError` if instance does not conform to format
"""
if format not in self.checkers:
return
func, raises = self.checkers[format]
result, cause = None, None
try:
result = func(instance)
except raises as e:
cause = e
if not result:
raise FormatError(
"%r is not a %r" % (instance, format), cause=cause,
)
def conforms(self, instance, format):
"""
Check whether the instance conforms to the given format.
:argument instance: the instance to check
:type: any primitive type (str, number, bool)
:argument str format: the format that instance should conform to
:rtype: bool
"""
try:
self.check(instance, format)
except FormatError:
return False
else:
return True
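The docstring above describes registering custom formats via the ``checks`` decorator; here is a minimal sketch, assuming this vendored package is importable as ``jsonschema`` (the "even" format is invented for illustration):

```python
# Minimal sketch of the FormatChecker decorator workflow described above.
# Assumes the vendored package is importable as ``jsonschema``; the "even"
# format is hypothetical.
from jsonschema import FormatChecker

checker = FormatChecker()

@checker.checks("even", raises=ValueError)
def is_even(value):
    # Follow the built-in checkers' convention: values outside the format's
    # domain (here, non-ints) pass trivially.
    if not isinstance(value, int):
        return True
    return value % 2 == 0

assert checker.conforms(2, "even")
assert not checker.conforms(3, "even")
assert checker.conforms(3, "no-such-format")  # unknown formats always pass
```

Because ``__init__`` copies ``checkers``, registering on an instance this way leaves the class-level registry used by ``cls_checks`` untouched.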
_draft_checkers = {"draft3": [], "draft4": []}
def _checks_drafts(both=None, draft3=None, draft4=None, raises=()):
draft3 = draft3 or both
draft4 = draft4 or both
def wrap(func):
if draft3:
_draft_checkers["draft3"].append(draft3)
func = FormatChecker.cls_checks(draft3, raises)(func)
if draft4:
_draft_checkers["draft4"].append(draft4)
func = FormatChecker.cls_checks(draft4, raises)(func)
return func
return wrap
@_checks_drafts("email")
def is_email(instance):
if not isinstance(instance, str_types):
return True
return "@" in instance
_ipv4_re = re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")
@_checks_drafts(draft3="ip-address", draft4="ipv4")
def is_ipv4(instance):
if not isinstance(instance, str_types):
return True
if not _ipv4_re.match(instance):
return False
return all(0 <= int(component) <= 255 for component in instance.split("."))
if hasattr(socket, "inet_pton"):
@_checks_drafts("ipv6", raises=socket.error)
def is_ipv6(instance):
if not isinstance(instance, str_types):
return True
return socket.inet_pton(socket.AF_INET6, instance)
_host_name_re = re.compile(r"^[A-Za-z0-9][A-Za-z0-9\.\-]{1,255}$")
@_checks_drafts(draft3="host-name", draft4="hostname")
def is_host_name(instance):
if not isinstance(instance, str_types):
return True
if not _host_name_re.match(instance):
return False
components = instance.split(".")
for component in components:
if len(component) > 63:
return False
return True
try:
import rfc3987
except ImportError:
pass
else:
@_checks_drafts("uri", raises=ValueError)
def is_uri(instance):
if not isinstance(instance, str_types):
return True
return rfc3987.parse(instance, rule="URI")
try:
import strict_rfc3339
except ImportError:
try:
import isodate
except ImportError:
pass
else:
@_checks_drafts("date-time", raises=(ValueError, isodate.ISO8601Error))
        def is_datetime(instance):
if not isinstance(instance, str_types):
return True
return isodate.parse_datetime(instance)
else:
@_checks_drafts("date-time")
    def is_datetime(instance):
if not isinstance(instance, str_types):
return True
return strict_rfc3339.validate_rfc3339(instance)
@_checks_drafts("regex", raises=re.error)
def is_regex(instance):
if not isinstance(instance, str_types):
return True
return re.compile(instance)
@_checks_drafts(draft3="date", raises=ValueError)
def is_date(instance):
if not isinstance(instance, str_types):
return True
return datetime.datetime.strptime(instance, "%Y-%m-%d")
@_checks_drafts(draft3="time", raises=ValueError)
def is_time(instance):
if not isinstance(instance, str_types):
return True
return datetime.datetime.strptime(instance, "%H:%M:%S")
try:
import webcolors
except ImportError:
pass
else:
def is_css_color_code(instance):
return webcolors.normalize_hex(instance)
@_checks_drafts(draft3="color", raises=(ValueError, TypeError))
def is_css21_color(instance):
if (
not isinstance(instance, str_types) or
instance.lower() in webcolors.css21_names_to_hex
):
return True
return is_css_color_code(instance)
def is_css3_color(instance):
if instance.lower() in webcolors.css3_names_to_hex:
return True
return is_css_color_code(instance)
draft3_format_checker = FormatChecker(_draft_checkers["draft3"])
draft4_format_checker = FormatChecker(_draft_checkers["draft4"])

@@ -0,0 +1,155 @@
# -*- test-case-name: twisted.test.test_reflect -*-
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Standardized versions of various cool and/or strange things that you can do
with Python's reflection capabilities.
"""
import sys
from jsonschema.compat import PY3
class _NoModuleFound(Exception):
"""
No module was found because none exists.
"""
class InvalidName(ValueError):
"""
The given name is not a dot-separated list of Python objects.
"""
class ModuleNotFound(InvalidName):
"""
The module associated with the given name doesn't exist and it can't be
imported.
"""
class ObjectNotFound(InvalidName):
"""
The object associated with the given name doesn't exist and it can't be
imported.
"""
if PY3:
def reraise(exception, traceback):
raise exception.with_traceback(traceback)
else:
exec("""def reraise(exception, traceback):
raise exception.__class__, exception, traceback""")
reraise.__doc__ = """
Re-raise an exception, with an optional traceback, in a way that is compatible
with both Python 2 and Python 3.
Note that on Python 3, re-raised exceptions will be mutated, with their
C{__traceback__} attribute being set.
@param exception: The exception instance.
@param traceback: The traceback to use, or C{None} indicating a new traceback.
"""
def _importAndCheckStack(importName):
"""
Import the given name as a module, then walk the stack to determine whether
the failure was the module not existing, or some code in the module (for
example a dependent import) failing. This can be helpful to determine
    whether any actual application code was run. For example, this helps
    distinguish an administrative error (entering the wrong module name)
    from a programmer error (writing buggy code in a module that fails to
    import).
@param importName: The name of the module to import.
@type importName: C{str}
@raise Exception: if something bad happens. This can be any type of
exception, since nobody knows what loading some arbitrary code might
do.
@raise _NoModuleFound: if no module was found.
"""
try:
return __import__(importName)
except ImportError:
excType, excValue, excTraceback = sys.exc_info()
while excTraceback:
execName = excTraceback.tb_frame.f_globals["__name__"]
# in Python 2 execName is None when an ImportError is encountered,
            # whereas in Python 3 execName is equal to the importName.
if execName is None or execName == importName:
reraise(excValue, excTraceback)
excTraceback = excTraceback.tb_next
raise _NoModuleFound()
def namedAny(name):
"""
Retrieve a Python object by its fully qualified name from the global Python
module namespace. The first part of the name, that describes a module,
will be discovered and imported. Each subsequent part of the name is
treated as the name of an attribute of the object specified by all of the
name which came before it. For example, the fully-qualified name of this
object is 'twisted.python.reflect.namedAny'.
@type name: L{str}
@param name: The name of the object to return.
@raise InvalidName: If the name is an empty string, starts or ends with
a '.', or is otherwise syntactically incorrect.
@raise ModuleNotFound: If the name is syntactically correct but the
module it specifies cannot be imported because it does not appear to
exist.
@raise ObjectNotFound: If the name is syntactically correct, includes at
least one '.', but the module it specifies cannot be imported because
it does not appear to exist.
@raise AttributeError: If an attribute of an object along the way cannot be
accessed, or a module along the way is not found.
@return: the Python object identified by 'name'.
"""
if not name:
raise InvalidName('Empty module name')
names = name.split('.')
# if the name starts or ends with a '.' or contains '..', the __import__
# will raise an 'Empty module name' error. This will provide a better error
# message.
if '' in names:
raise InvalidName(
"name must be a string giving a '.'-separated list of Python "
"identifiers, not %r" % (name,))
topLevelPackage = None
moduleNames = names[:]
while not topLevelPackage:
if moduleNames:
trialname = '.'.join(moduleNames)
try:
topLevelPackage = _importAndCheckStack(trialname)
except _NoModuleFound:
moduleNames.pop()
else:
if len(names) == 1:
raise ModuleNotFound("No module named %r" % (name,))
else:
raise ObjectNotFound('%r does not name an object' % (name,))
obj = topLevelPackage
for n in names[1:]:
obj = getattr(obj, n)
return obj
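``namedAny`` peels module names off the right until an import succeeds, then walks the remainder as attributes. The same strategy can be sketched with the standard library alone (``named_any`` is a hypothetical mirror, not part of this module):

```python
import importlib

def named_any(name):
    # Hypothetical standalone mirror of namedAny above: drop module-name
    # components from the right until an import succeeds, then walk the
    # remaining parts as attributes.
    names = name.split(".")
    module_names = names[:]
    module = None
    while module is None and module_names:
        try:
            module = importlib.import_module(".".join(module_names))
        except ImportError:
            module_names.pop()
    if module is None:
        raise ValueError("no module found for %r" % (name,))
    obj = module
    for attr in names[len(module_names):]:
        obj = getattr(obj, attr)
    return obj

import math
assert named_any("math.sqrt") is math.sqrt
```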

lib/spack/external/jsonschema/_utils.py vendored Normal file
@@ -0,0 +1,213 @@
import itertools
import json
import pkgutil
import re
from jsonschema.compat import str_types, MutableMapping, urlsplit
class URIDict(MutableMapping):
"""
Dictionary which uses normalized URIs as keys.
"""
def normalize(self, uri):
return urlsplit(uri).geturl()
def __init__(self, *args, **kwargs):
self.store = dict()
self.store.update(*args, **kwargs)
def __getitem__(self, uri):
return self.store[self.normalize(uri)]
def __setitem__(self, uri, value):
self.store[self.normalize(uri)] = value
def __delitem__(self, uri):
del self.store[self.normalize(uri)]
def __iter__(self):
return iter(self.store)
def __len__(self):
return len(self.store)
def __repr__(self):
return repr(self.store)
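``URIDict`` keys behave as sketched below; ``normalize`` here is a Python 3 stand-in for the method above, relying on ``urlsplit`` canonicalizing the scheme to lowercase:

```python
# Python 3 stand-in for URIDict.normalize above: round-trip through
# urlsplit, which canonicalizes the scheme to lowercase.
from urllib.parse import urlsplit

def normalize(uri):
    return urlsplit(uri).geturl()

# Two spellings of the same URI land on the same dictionary key:
assert normalize("HTTP://json-schema.org/schema") == \
    normalize("http://json-schema.org/schema")
```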
class Unset(object):
"""
An as-of-yet unset attribute or unprovided default parameter.
"""
def __repr__(self):
return "<unset>"
def load_schema(name):
"""
Load a schema from ./schemas/``name``.json and return it.
"""
data = pkgutil.get_data(__package__, "schemas/{0}.json".format(name))
return json.loads(data.decode("utf-8"))
def indent(string, times=1):
"""
A dumb version of :func:`textwrap.indent` from Python 3.3.
"""
return "\n".join(" " * (4 * times) + line for line in string.splitlines())
def format_as_index(indices):
"""
Construct a single string containing indexing operations for the indices.
For example, [1, 2, "foo"] -> [1][2]["foo"]
:type indices: sequence
"""
if not indices:
return ""
return "[%s]" % "][".join(repr(index) for index in indices)
def find_additional_properties(instance, schema):
"""
Return the set of additional properties for the given ``instance``.
Weeds out properties that should have been validated by ``properties`` and
/ or ``patternProperties``.
Assumes ``instance`` is dict-like already.
"""
properties = schema.get("properties", {})
patterns = "|".join(schema.get("patternProperties", {}))
for property in instance:
if property not in properties:
if patterns and re.search(patterns, property):
continue
yield property
def extras_msg(extras):
"""
Create an error message for extra items or properties.
"""
if len(extras) == 1:
verb = "was"
else:
verb = "were"
return ", ".join(repr(extra) for extra in extras), verb
def types_msg(instance, types):
"""
Create an error message for a failure to match the given types.
If the ``instance`` is an object and contains a ``name`` property, it will
be considered to be a description of that object and used as its type.
Otherwise the message is simply the reprs of the given ``types``.
"""
reprs = []
for type in types:
try:
reprs.append(repr(type["name"]))
except Exception:
reprs.append(repr(type))
return "%r is not of type %s" % (instance, ", ".join(reprs))
def flatten(suitable_for_isinstance):
"""
isinstance() can accept a bunch of really annoying different types:
* a single type
* a tuple of types
* an arbitrary nested tree of tuples
Return a flattened tuple of the given argument.
"""
types = set()
if not isinstance(suitable_for_isinstance, tuple):
suitable_for_isinstance = (suitable_for_isinstance,)
for thing in suitable_for_isinstance:
if isinstance(thing, tuple):
types.update(flatten(thing))
else:
types.add(thing)
return tuple(types)
def ensure_list(thing):
"""
Wrap ``thing`` in a list if it's a single str.
Otherwise, return it unchanged.
"""
if isinstance(thing, str_types):
return [thing]
return thing
def unbool(element, true=object(), false=object()):
"""
A hack to make True and 1 and False and 0 unique for ``uniq``.
"""
if element is True:
return true
elif element is False:
return false
return element
def uniq(container):
"""
Check if all of a container's elements are unique.
    First tries to rely on the elements being hashable, then
falls back on them being sortable, and finally falls back on brute
force.
"""
try:
return len(set(unbool(i) for i in container)) == len(container)
except TypeError:
try:
sort = sorted(unbool(i) for i in container)
sliced = itertools.islice(sort, 1, None)
for i, j in zip(sort, sliced):
if i == j:
return False
except (NotImplementedError, TypeError):
seen = []
for e in container:
e = unbool(e)
if e in seen:
return False
seen.append(e)
return True
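The reason ``unbool`` exists: ``True == 1`` and ``False == 0`` in Python, so a plain set would conflate them. A self-contained sketch of the trick:

```python
# Self-contained sketch of the unbool() trick above: booleans are swapped
# for unique sentinel objects so True/1 and False/0 stay distinct.
_true, _false = object(), object()

def unbool(element):
    if element is True:
        return _true
    if element is False:
        return _false
    return element

assert len({0, False}) == 1                  # the problem: set collapses them
assert len({unbool(0), unbool(False)}) == 2  # sentinels keep them distinct
```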

@@ -0,0 +1,358 @@
import re
from jsonschema import _utils
from jsonschema.exceptions import FormatError, ValidationError
from jsonschema.compat import iteritems
FLOAT_TOLERANCE = 10 ** -15
def patternProperties(validator, patternProperties, instance, schema):
if not validator.is_type(instance, "object"):
return
for pattern, subschema in iteritems(patternProperties):
for k, v in iteritems(instance):
if re.search(pattern, k):
for error in validator.descend(
v, subschema, path=k, schema_path=pattern,
):
yield error
def additionalProperties(validator, aP, instance, schema):
if not validator.is_type(instance, "object"):
return
extras = set(_utils.find_additional_properties(instance, schema))
if validator.is_type(aP, "object"):
for extra in extras:
for error in validator.descend(instance[extra], aP, path=extra):
yield error
elif not aP and extras:
error = "Additional properties are not allowed (%s %s unexpected)"
yield ValidationError(error % _utils.extras_msg(extras))
def items(validator, items, instance, schema):
if not validator.is_type(instance, "array"):
return
if validator.is_type(items, "object"):
for index, item in enumerate(instance):
for error in validator.descend(item, items, path=index):
yield error
else:
for (index, item), subschema in zip(enumerate(instance), items):
for error in validator.descend(
item, subschema, path=index, schema_path=index,
):
yield error
def additionalItems(validator, aI, instance, schema):
if (
not validator.is_type(instance, "array") or
validator.is_type(schema.get("items", {}), "object")
):
return
len_items = len(schema.get("items", []))
if validator.is_type(aI, "object"):
for index, item in enumerate(instance[len_items:], start=len_items):
for error in validator.descend(item, aI, path=index):
yield error
elif not aI and len(instance) > len(schema.get("items", [])):
error = "Additional items are not allowed (%s %s unexpected)"
yield ValidationError(
error %
_utils.extras_msg(instance[len(schema.get("items", [])):])
)
def minimum(validator, minimum, instance, schema):
if not validator.is_type(instance, "number"):
return
if schema.get("exclusiveMinimum", False):
failed = float(instance) <= minimum
cmp = "less than or equal to"
else:
failed = float(instance) < minimum
cmp = "less than"
if failed:
yield ValidationError(
"%r is %s the minimum of %r" % (instance, cmp, minimum)
)
def maximum(validator, maximum, instance, schema):
if not validator.is_type(instance, "number"):
return
if schema.get("exclusiveMaximum", False):
failed = instance >= maximum
cmp = "greater than or equal to"
else:
failed = instance > maximum
cmp = "greater than"
if failed:
yield ValidationError(
"%r is %s the maximum of %r" % (instance, cmp, maximum)
)
def multipleOf(validator, dB, instance, schema):
if not validator.is_type(instance, "number"):
return
if isinstance(dB, float):
mod = instance % dB
failed = (mod > FLOAT_TOLERANCE) and (dB - mod) > FLOAT_TOLERANCE
else:
failed = instance % dB
if failed:
yield ValidationError("%r is not a multiple of %r" % (instance, dB))
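The float branch of ``multipleOf`` needs a tolerance because exact modulo on binary floating point misfires even for "obvious" multiples; a quick standalone check of the same logic:

```python
# Why the float branch above uses FLOAT_TOLERANCE: a naive
# ``instance % dB == 0`` test rejects 1.0 as a multiple of 0.1.
TOLERANCE = 10 ** -15
dB, instance = 0.1, 1.0

mod = instance % dB
assert mod != 0  # exact modulo is nonzero due to binary rounding

# The tolerance check used by multipleOf() accepts it:
failed = (mod > TOLERANCE) and (dB - mod) > TOLERANCE
assert not failed
```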
def minItems(validator, mI, instance, schema):
if validator.is_type(instance, "array") and len(instance) < mI:
yield ValidationError("%r is too short" % (instance,))
def maxItems(validator, mI, instance, schema):
if validator.is_type(instance, "array") and len(instance) > mI:
yield ValidationError("%r is too long" % (instance,))
def uniqueItems(validator, uI, instance, schema):
if (
uI and
validator.is_type(instance, "array") and
not _utils.uniq(instance)
):
yield ValidationError("%r has non-unique elements" % instance)
def pattern(validator, patrn, instance, schema):
if (
validator.is_type(instance, "string") and
not re.search(patrn, instance)
):
yield ValidationError("%r does not match %r" % (instance, patrn))
def format(validator, format, instance, schema):
if validator.format_checker is not None:
try:
validator.format_checker.check(instance, format)
except FormatError as error:
yield ValidationError(error.message, cause=error.cause)
def minLength(validator, mL, instance, schema):
if validator.is_type(instance, "string") and len(instance) < mL:
yield ValidationError("%r is too short" % (instance,))
def maxLength(validator, mL, instance, schema):
if validator.is_type(instance, "string") and len(instance) > mL:
yield ValidationError("%r is too long" % (instance,))
def dependencies(validator, dependencies, instance, schema):
if not validator.is_type(instance, "object"):
return
for property, dependency in iteritems(dependencies):
if property not in instance:
continue
if validator.is_type(dependency, "object"):
for error in validator.descend(
instance, dependency, schema_path=property,
):
yield error
else:
dependencies = _utils.ensure_list(dependency)
for dependency in dependencies:
if dependency not in instance:
yield ValidationError(
"%r is a dependency of %r" % (dependency, property)
)
def enum(validator, enums, instance, schema):
if instance not in enums:
yield ValidationError("%r is not one of %r" % (instance, enums))
def ref(validator, ref, instance, schema):
with validator.resolver.resolving(ref) as resolved:
for error in validator.descend(instance, resolved):
yield error
def type_draft3(validator, types, instance, schema):
types = _utils.ensure_list(types)
all_errors = []
for index, type in enumerate(types):
if type == "any":
return
if validator.is_type(type, "object"):
errors = list(validator.descend(instance, type, schema_path=index))
if not errors:
return
all_errors.extend(errors)
else:
if validator.is_type(instance, type):
return
else:
yield ValidationError(
_utils.types_msg(instance, types), context=all_errors,
)
def properties_draft3(validator, properties, instance, schema):
if not validator.is_type(instance, "object"):
return
for property, subschema in iteritems(properties):
if property in instance:
for error in validator.descend(
instance[property],
subschema,
path=property,
schema_path=property,
):
yield error
elif subschema.get("required", False):
error = ValidationError("%r is a required property" % property)
error._set(
validator="required",
validator_value=subschema["required"],
instance=instance,
schema=schema,
)
error.path.appendleft(property)
error.schema_path.extend([property, "required"])
yield error
def disallow_draft3(validator, disallow, instance, schema):
for disallowed in _utils.ensure_list(disallow):
if validator.is_valid(instance, {"type" : [disallowed]}):
yield ValidationError(
"%r is disallowed for %r" % (disallowed, instance)
)
def extends_draft3(validator, extends, instance, schema):
if validator.is_type(extends, "object"):
for error in validator.descend(instance, extends):
yield error
return
for index, subschema in enumerate(extends):
for error in validator.descend(instance, subschema, schema_path=index):
yield error
def type_draft4(validator, types, instance, schema):
types = _utils.ensure_list(types)
if not any(validator.is_type(instance, type) for type in types):
yield ValidationError(_utils.types_msg(instance, types))
def properties_draft4(validator, properties, instance, schema):
if not validator.is_type(instance, "object"):
return
for property, subschema in iteritems(properties):
if property in instance:
for error in validator.descend(
instance[property],
subschema,
path=property,
schema_path=property,
):
yield error
def required_draft4(validator, required, instance, schema):
if not validator.is_type(instance, "object"):
return
for property in required:
if property not in instance:
yield ValidationError("%r is a required property" % property)
def minProperties_draft4(validator, mP, instance, schema):
if validator.is_type(instance, "object") and len(instance) < mP:
yield ValidationError(
"%r does not have enough properties" % (instance,)
)
def maxProperties_draft4(validator, mP, instance, schema):
if not validator.is_type(instance, "object"):
return
    if len(instance) > mP:
yield ValidationError("%r has too many properties" % (instance,))
def allOf_draft4(validator, allOf, instance, schema):
for index, subschema in enumerate(allOf):
for error in validator.descend(instance, subschema, schema_path=index):
yield error
def oneOf_draft4(validator, oneOf, instance, schema):
subschemas = enumerate(oneOf)
all_errors = []
for index, subschema in subschemas:
errs = list(validator.descend(instance, subschema, schema_path=index))
if not errs:
first_valid = subschema
break
all_errors.extend(errs)
else:
yield ValidationError(
"%r is not valid under any of the given schemas" % (instance,),
context=all_errors,
)
more_valid = [s for i, s in subschemas if validator.is_valid(instance, s)]
if more_valid:
more_valid.append(first_valid)
reprs = ", ".join(repr(schema) for schema in more_valid)
yield ValidationError(
"%r is valid under each of %s" % (instance, reprs)
)
def anyOf_draft4(validator, anyOf, instance, schema):
all_errors = []
for index, subschema in enumerate(anyOf):
errs = list(validator.descend(instance, subschema, schema_path=index))
if not errs:
break
all_errors.extend(errs)
else:
yield ValidationError(
"%r is not valid under any of the given schemas" % (instance,),
context=all_errors,
)
def not_draft4(validator, not_schema, instance, schema):
if validator.is_valid(instance, not_schema):
yield ValidationError(
"%r is not allowed for %r" % (not_schema, instance)
)

lib/spack/external/jsonschema/cli.py vendored Normal file
@@ -0,0 +1,72 @@
from __future__ import absolute_import
import argparse
import json
import sys
from jsonschema._reflect import namedAny
from jsonschema.validators import validator_for
def _namedAnyWithDefault(name):
if "." not in name:
name = "jsonschema." + name
return namedAny(name)
def _json_file(path):
with open(path) as file:
return json.load(file)
parser = argparse.ArgumentParser(
description="JSON Schema Validation CLI",
)
parser.add_argument(
"-i", "--instance",
action="append",
dest="instances",
type=_json_file,
help="a path to a JSON instance to validate "
"(may be specified multiple times)",
)
parser.add_argument(
"-F", "--error-format",
default="{error.instance}: {error.message}\n",
help="the format to use for each error output message, specified in "
"a form suitable for passing to str.format, which will be called "
"with 'error' for each error",
)
parser.add_argument(
"-V", "--validator",
type=_namedAnyWithDefault,
help="the fully qualified object name of a validator to use, or, for "
"validators that are registered with jsonschema, simply the name "
"of the class.",
)
parser.add_argument(
"schema",
help="the JSON Schema to validate with",
type=_json_file,
)
def parse_args(args):
arguments = vars(parser.parse_args(args=args or ["--help"]))
if arguments["validator"] is None:
arguments["validator"] = validator_for(arguments["schema"])
return arguments
def main(args=sys.argv[1:]):
sys.exit(run(arguments=parse_args(args=args)))
def run(arguments, stdout=sys.stdout, stderr=sys.stderr):
error_format = arguments["error_format"]
validator = arguments["validator"](schema=arguments["schema"])
errored = False
for instance in arguments["instances"] or ():
for error in validator.iter_errors(instance):
stderr.write(error_format.format(error=error))
errored = True
return errored
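The ``type=_json_file`` pattern above means the parsed arguments hold decoded JSON rather than raw paths. A self-contained sketch, with ``json_file`` as a hypothetical stand-in for ``_json_file``:

```python
# Sketch of the ``type=_json_file`` pattern used by the CLI above:
# argparse invokes the converter, so parsed arguments hold decoded JSON.
# ``json_file`` is a hypothetical stand-in for ``_json_file``.
import argparse
import json
import os
import tempfile

def json_file(path):
    with open(path) as f:
        return json.load(f)

parser = argparse.ArgumentParser()
parser.add_argument("-i", "--instance", action="append", dest="instances",
                    type=json_file)
parser.add_argument("schema", type=json_file)

with tempfile.TemporaryDirectory() as d:
    schema_path = os.path.join(d, "schema.json")
    with open(schema_path, "w") as f:
        json.dump({"type": "object"}, f)
    args = parser.parse_args([schema_path])

assert args.schema == {"type": "object"}
assert args.instances is None  # -i was never passed
```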

lib/spack/external/jsonschema/compat.py vendored Normal file
@@ -0,0 +1,53 @@
from __future__ import unicode_literals
import sys
import operator
try:
    from collections.abc import MutableMapping, Sequence  # noqa
except ImportError:  # Python 2, where the ABCs live in ``collections``
    from collections import MutableMapping, Sequence  # noqa
PY3 = sys.version_info[0] >= 3
if PY3:
zip = zip
from io import StringIO
from urllib.parse import (
unquote, urljoin, urlunsplit, SplitResult, urlsplit as _urlsplit
)
from urllib.request import urlopen
str_types = str,
int_types = int,
iteritems = operator.methodcaller("items")
else:
from itertools import izip as zip # noqa
from StringIO import StringIO
from urlparse import (
urljoin, urlunsplit, SplitResult, urlsplit as _urlsplit # noqa
)
from urllib import unquote # noqa
from urllib2 import urlopen # noqa
str_types = basestring
int_types = int, long
iteritems = operator.methodcaller("iteritems")
# On python < 3.3 fragments are not handled properly with unknown schemes
def urlsplit(url):
scheme, netloc, path, query, fragment = _urlsplit(url)
if "#" in path:
path, fragment = path.split("#", 1)
return SplitResult(scheme, netloc, path, query, fragment)
def urldefrag(url):
if "#" in url:
s, n, p, q, frag = urlsplit(url)
defrag = urlunsplit((s, n, p, q, ''))
else:
defrag = url
frag = ''
return defrag, frag
# flake8: noqa

@@ -0,0 +1,264 @@
from collections import defaultdict, deque
import itertools
import pprint
import textwrap
from jsonschema import _utils
from jsonschema.compat import PY3, iteritems
WEAK_MATCHES = frozenset(["anyOf", "oneOf"])
STRONG_MATCHES = frozenset()
_unset = _utils.Unset()
class _Error(Exception):
def __init__(
self,
message,
validator=_unset,
path=(),
cause=None,
context=(),
validator_value=_unset,
instance=_unset,
schema=_unset,
schema_path=(),
parent=None,
):
self.message = message
self.path = self.relative_path = deque(path)
self.schema_path = self.relative_schema_path = deque(schema_path)
self.context = list(context)
self.cause = self.__cause__ = cause
self.validator = validator
self.validator_value = validator_value
self.instance = instance
self.schema = schema
self.parent = parent
for error in context:
error.parent = self
def __repr__(self):
return "<%s: %r>" % (self.__class__.__name__, self.message)
def __str__(self):
return unicode(self).encode("utf-8")
def __unicode__(self):
essential_for_verbose = (
self.validator, self.validator_value, self.instance, self.schema,
)
if any(m is _unset for m in essential_for_verbose):
return self.message
pschema = pprint.pformat(self.schema, width=72)
pinstance = pprint.pformat(self.instance, width=72)
return self.message + textwrap.dedent("""
Failed validating %r in schema%s:
%s
On instance%s:
%s
""".rstrip()
) % (
self.validator,
_utils.format_as_index(list(self.relative_schema_path)[:-1]),
_utils.indent(pschema),
_utils.format_as_index(self.relative_path),
_utils.indent(pinstance),
)
if PY3:
__str__ = __unicode__
@classmethod
def create_from(cls, other):
return cls(**other._contents())
@property
def absolute_path(self):
parent = self.parent
if parent is None:
return self.relative_path
path = deque(self.relative_path)
path.extendleft(parent.absolute_path)
return path
@property
def absolute_schema_path(self):
parent = self.parent
if parent is None:
return self.relative_schema_path
path = deque(self.relative_schema_path)
path.extendleft(parent.absolute_schema_path)
return path
def _set(self, **kwargs):
for k, v in iteritems(kwargs):
if getattr(self, k) is _unset:
setattr(self, k, v)
def _contents(self):
attrs = (
"message", "cause", "context", "validator", "validator_value",
"path", "schema_path", "instance", "schema", "parent",
)
return dict((attr, getattr(self, attr)) for attr in attrs)
class ValidationError(_Error):
pass
class SchemaError(_Error):
pass
class RefResolutionError(Exception):
pass
class UnknownType(Exception):
def __init__(self, type, instance, schema):
self.type = type
self.instance = instance
self.schema = schema
def __str__(self):
return unicode(self).encode("utf-8")
def __unicode__(self):
pschema = pprint.pformat(self.schema, width=72)
pinstance = pprint.pformat(self.instance, width=72)
return textwrap.dedent("""
Unknown type %r for validator with schema:
%s
While checking instance:
%s
""".rstrip()
) % (self.type, _utils.indent(pschema), _utils.indent(pinstance))
if PY3:
__str__ = __unicode__
class FormatError(Exception):
def __init__(self, message, cause=None):
super(FormatError, self).__init__(message, cause)
self.message = message
self.cause = self.__cause__ = cause
def __str__(self):
return self.message.encode("utf-8")
def __unicode__(self):
return self.message
if PY3:
__str__ = __unicode__
class ErrorTree(object):
"""
ErrorTrees make it easier to check which validations failed.
"""
_instance = _unset
def __init__(self, errors=()):
self.errors = {}
self._contents = defaultdict(self.__class__)
for error in errors:
container = self
for element in error.path:
container = container[element]
container.errors[error.validator] = error
self._instance = error.instance
def __contains__(self, index):
"""
Check whether ``instance[index]`` has any errors.
"""
return index in self._contents
def __getitem__(self, index):
"""
Retrieve the child tree one level down at the given ``index``.
If the index is not in the instance that this tree corresponds to and
is not known by this tree, whatever error would be raised by
``instance.__getitem__`` will be propagated (usually this is some
        subclass of :class:`LookupError`).
"""
if self._instance is not _unset and index not in self:
self._instance[index]
return self._contents[index]
def __setitem__(self, index, value):
self._contents[index] = value
def __iter__(self):
"""
Iterate (non-recursively) over the indices in the instance with errors.
"""
return iter(self._contents)
def __len__(self):
"""
Same as :attr:`total_errors`.
"""
return self.total_errors
def __repr__(self):
return "<%s (%s total errors)>" % (self.__class__.__name__, len(self))
@property
def total_errors(self):
"""
The total number of errors in the entire tree, including children.
"""
child_errors = sum(len(tree) for _, tree in iteritems(self._contents))
return len(self.errors) + child_errors
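A hedged example of how ``ErrorTree`` groups errors by instance path, assuming this vendored package is importable as ``jsonschema``:

```python
# Two errors under instance["name"] end up in the same subtree, keyed by
# their validator names. Assumes the package imports as ``jsonschema``.
from jsonschema.exceptions import ErrorTree, ValidationError

errors = [
    ValidationError("not a string", validator="type", path=["name"]),
    ValidationError("too short", validator="minLength", path=["name"]),
]
tree = ErrorTree(errors)

assert "name" in tree
assert set(tree["name"].errors) == {"type", "minLength"}
assert tree.total_errors == 2
```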
def by_relevance(weak=WEAK_MATCHES, strong=STRONG_MATCHES):
def relevance(error):
validator = error.validator
return -len(error.path), validator not in weak, validator in strong
return relevance
relevance = by_relevance()
def best_match(errors, key=relevance):
errors = iter(errors)
best = next(errors, None)
if best is None:
return
best = max(itertools.chain([best], errors), key=key)
while best.context:
best = min(best.context, key=key)
return best

@@ -0,0 +1,201 @@
{
"$schema": "http://json-schema.org/draft-03/schema#",
"dependencies": {
"exclusiveMaximum": "maximum",
"exclusiveMinimum": "minimum"
},
"id": "http://json-schema.org/draft-03/schema#",
"properties": {
"$ref": {
"format": "uri",
"type": "string"
},
"$schema": {
"format": "uri",
"type": "string"
},
"additionalItems": {
"default": {},
"type": [
{
"$ref": "#"
},
"boolean"
]
},
"additionalProperties": {
"default": {},
"type": [
{
"$ref": "#"
},
"boolean"
]
},
"default": {
"type": "any"
},
"dependencies": {
"additionalProperties": {
"items": {
"type": "string"
},
"type": [
"string",
"array",
{
"$ref": "#"
}
]
},
"default": {},
"type": [
"string",
"array",
"object"
]
},
"description": {
"type": "string"
},
"disallow": {
"items": {
"type": [
"string",
{
"$ref": "#"
}
]
},
"type": [
"string",
"array"
],
"uniqueItems": true
},
"divisibleBy": {
"default": 1,
"exclusiveMinimum": true,
"minimum": 0,
"type": "number"
},
"enum": {
"minItems": 1,
"type": "array",
"uniqueItems": true
},
"exclusiveMaximum": {
"default": false,
"type": "boolean"
},
"exclusiveMinimum": {
"default": false,
"type": "boolean"
},
"extends": {
"default": {},
"items": {
"$ref": "#"
},
"type": [
{
"$ref": "#"
},
"array"
]
},
"format": {
"type": "string"
},
"id": {
"format": "uri",
"type": "string"
},
"items": {
"default": {},
"items": {
"$ref": "#"
},
"type": [
{
"$ref": "#"
},
"array"
]
},
"maxDecimal": {
"minimum": 0,
"type": "number"
},
"maxItems": {
"minimum": 0,
"type": "integer"
},
"maxLength": {
"type": "integer"
},
"maximum": {
"type": "number"
},
"minItems": {
"default": 0,
"minimum": 0,
"type": "integer"
},
"minLength": {
"default": 0,
"minimum": 0,
"type": "integer"
},
"minimum": {
"type": "number"
},
"pattern": {
"format": "regex",
"type": "string"
},
"patternProperties": {
"additionalProperties": {
"$ref": "#"
},
"default": {},
"type": "object"
},
"properties": {
"additionalProperties": {
"$ref": "#",
"type": "object"
},
"default": {},
"type": "object"
},
"required": {
"default": false,
"type": "boolean"
},
"title": {
"type": "string"
},
"type": {
"default": "any",
"items": {
"type": [
"string",
{
"$ref": "#"
}
]
},
"type": [
"string",
"array"
],
"uniqueItems": true
},
"uniqueItems": {
"default": false,
"type": "boolean"
}
},
"type": "object"
}

@@ -0,0 +1,221 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"default": {},
"definitions": {
"positiveInteger": {
"minimum": 0,
"type": "integer"
},
"positiveIntegerDefault0": {
"allOf": [
{
"$ref": "#/definitions/positiveInteger"
},
{
"default": 0
}
]
},
"schemaArray": {
"items": {
"$ref": "#"
},
"minItems": 1,
"type": "array"
},
"simpleTypes": {
"enum": [
"array",
"boolean",
"integer",
"null",
"number",
"object",
"string"
]
},
"stringArray": {
"items": {
"type": "string"
},
"minItems": 1,
"type": "array",
"uniqueItems": true
}
},
"dependencies": {
"exclusiveMaximum": [
"maximum"
],
"exclusiveMinimum": [
"minimum"
]
},
"description": "Core schema meta-schema",
"id": "http://json-schema.org/draft-04/schema#",
"properties": {
"$schema": {
"format": "uri",
"type": "string"
},
"additionalItems": {
"anyOf": [
{
"type": "boolean"
},
{
"$ref": "#"
}
],
"default": {}
},
"additionalProperties": {
"anyOf": [
{
"type": "boolean"
},
{
"$ref": "#"
}
],
"default": {}
},
"allOf": {
"$ref": "#/definitions/schemaArray"
},
"anyOf": {
"$ref": "#/definitions/schemaArray"
},
"default": {},
"definitions": {
"additionalProperties": {
"$ref": "#"
},
"default": {},
"type": "object"
},
"dependencies": {
"additionalProperties": {
"anyOf": [
{
"$ref": "#"
},
{
"$ref": "#/definitions/stringArray"
}
]
},
"type": "object"
},
"description": {
"type": "string"
},
"enum": {
"minItems": 1,
"type": "array",
"uniqueItems": true
},
"exclusiveMaximum": {
"default": false,
"type": "boolean"
},
"exclusiveMinimum": {
"default": false,
"type": "boolean"
},
"id": {
"format": "uri",
"type": "string"
},
"items": {
"anyOf": [
{
"$ref": "#"
},
{
"$ref": "#/definitions/schemaArray"
}
],
"default": {}
},
"maxItems": {
"$ref": "#/definitions/positiveInteger"
},
"maxLength": {
"$ref": "#/definitions/positiveInteger"
},
"maxProperties": {
"$ref": "#/definitions/positiveInteger"
},
"maximum": {
"type": "number"
},
"minItems": {
"$ref": "#/definitions/positiveIntegerDefault0"
},
"minLength": {
"$ref": "#/definitions/positiveIntegerDefault0"
},
"minProperties": {
"$ref": "#/definitions/positiveIntegerDefault0"
},
"minimum": {
"type": "number"
},
"multipleOf": {
"exclusiveMinimum": true,
"minimum": 0,
"type": "number"
},
"not": {
"$ref": "#"
},
"oneOf": {
"$ref": "#/definitions/schemaArray"
},
"pattern": {
"format": "regex",
"type": "string"
},
"patternProperties": {
"additionalProperties": {
"$ref": "#"
},
"default": {},
"type": "object"
},
"properties": {
"additionalProperties": {
"$ref": "#"
},
"default": {},
"type": "object"
},
"required": {
"$ref": "#/definitions/stringArray"
},
"title": {
"type": "string"
},
"type": {
"anyOf": [
{
"$ref": "#/definitions/simpleTypes"
},
{
"items": {
"$ref": "#/definitions/simpleTypes"
},
"minItems": 1,
"type": "array",
"uniqueItems": true
}
]
},
"uniqueItems": {
"default": false,
"type": "boolean"
}
},
"type": "object"
}



@@ -0,0 +1,15 @@
import sys


if sys.version_info[:2] < (2, 7):  # pragma: no cover
    import unittest2 as unittest
else:
    import unittest

try:
    from unittest import mock
except ImportError:
    import mock


# flake8: noqa


@@ -0,0 +1,110 @@
from jsonschema import Draft4Validator, ValidationError, cli
from jsonschema.compat import StringIO
from jsonschema.tests.compat import mock, unittest


def fake_validator(*errors):
    errors = list(reversed(errors))

    class FakeValidator(object):
        def __init__(self, *args, **kwargs):
            pass

        def iter_errors(self, instance):
            if errors:
                return errors.pop()
            return []
    return FakeValidator


class TestParser(unittest.TestCase):

    FakeValidator = fake_validator()

    def setUp(self):
        mock_open = mock.mock_open()
        patch_open = mock.patch.object(cli, "open", mock_open, create=True)
        patch_open.start()
        self.addCleanup(patch_open.stop)

        mock_json_load = mock.Mock()
        mock_json_load.return_value = {}
        patch_json_load = mock.patch("json.load")
        patch_json_load.start()
        self.addCleanup(patch_json_load.stop)

    def test_find_validator_by_fully_qualified_object_name(self):
        arguments = cli.parse_args(
            [
                "--validator",
                "jsonschema.tests.test_cli.TestParser.FakeValidator",
                "--instance", "foo.json",
                "schema.json",
            ]
        )
        self.assertIs(arguments["validator"], self.FakeValidator)

    def test_find_validator_in_jsonschema(self):
        arguments = cli.parse_args(
            [
                "--validator", "Draft4Validator",
                "--instance", "foo.json",
                "schema.json",
            ]
        )
        self.assertIs(arguments["validator"], Draft4Validator)


class TestCLI(unittest.TestCase):
    def test_successful_validation(self):
        stdout, stderr = StringIO(), StringIO()
        exit_code = cli.run(
            {
                "validator": fake_validator(),
                "schema": {},
                "instances": [1],
                "error_format": "{error.message}",
            },
            stdout=stdout,
            stderr=stderr,
        )
        self.assertFalse(stdout.getvalue())
        self.assertFalse(stderr.getvalue())
        self.assertEqual(exit_code, 0)

    def test_unsuccessful_validation(self):
        error = ValidationError("I am an error!", instance=1)
        stdout, stderr = StringIO(), StringIO()
        exit_code = cli.run(
            {
                "validator": fake_validator([error]),
                "schema": {},
                "instances": [1],
                "error_format": "{error.instance} - {error.message}",
            },
            stdout=stdout,
            stderr=stderr,
        )
        self.assertFalse(stdout.getvalue())
        self.assertEqual(stderr.getvalue(), "1 - I am an error!")
        self.assertEqual(exit_code, 1)

    def test_unsuccessful_validation_multiple_instances(self):
        first_errors = [
            ValidationError("9", instance=1),
            ValidationError("8", instance=1),
        ]
        second_errors = [ValidationError("7", instance=2)]
        stdout, stderr = StringIO(), StringIO()
        exit_code = cli.run(
            {
                "validator": fake_validator(first_errors, second_errors),
                "schema": {},
                "instances": [1, 2],
                "error_format": "{error.instance} - {error.message}\t",
            },
            stdout=stdout,
            stderr=stderr,
        )
        self.assertFalse(stdout.getvalue())
        self.assertEqual(stderr.getvalue(), "1 - 9\t1 - 8\t2 - 7\t")
        self.assertEqual(exit_code, 1)


@@ -0,0 +1,382 @@
import textwrap
from jsonschema import Draft4Validator, exceptions
from jsonschema.compat import PY3
from jsonschema.tests.compat import mock, unittest
class TestBestMatch(unittest.TestCase):
def best_match(self, errors):
errors = list(errors)
best = exceptions.best_match(errors)
reversed_best = exceptions.best_match(reversed(errors))
self.assertEqual(
best,
reversed_best,
msg="Didn't return a consistent best match!\n"
"Got: {0}\n\nThen: {1}".format(best, reversed_best),
)
return best
def test_shallower_errors_are_better_matches(self):
validator = Draft4Validator(
{
"properties" : {
"foo" : {
"minProperties" : 2,
"properties" : {"bar" : {"type" : "object"}},
}
}
}
)
best = self.best_match(validator.iter_errors({"foo" : {"bar" : []}}))
self.assertEqual(best.validator, "minProperties")
def test_oneOf_and_anyOf_are_weak_matches(self):
"""
A property you *must* match is probably better than one you have to
match a part of.
"""
validator = Draft4Validator(
{
"minProperties" : 2,
"anyOf" : [{"type" : "string"}, {"type" : "number"}],
"oneOf" : [{"type" : "string"}, {"type" : "number"}],
}
)
best = self.best_match(validator.iter_errors({}))
self.assertEqual(best.validator, "minProperties")
def test_if_the_most_relevant_error_is_anyOf_it_is_traversed(self):
"""
If the most relevant error is an anyOf, then we traverse its context
and select the otherwise *least* relevant error, since in this case
that means the most specific, deep, error inside the instance.
I.e. since only one of the schemas must match, we look for the most
relevant one.
"""
validator = Draft4Validator(
{
"properties" : {
"foo" : {
"anyOf" : [
{"type" : "string"},
{"properties" : {"bar" : {"type" : "array"}}},
],
},
},
},
)
best = self.best_match(validator.iter_errors({"foo" : {"bar" : 12}}))
self.assertEqual(best.validator_value, "array")
def test_if_the_most_relevant_error_is_oneOf_it_is_traversed(self):
"""
If the most relevant error is an oneOf, then we traverse its context
and select the otherwise *least* relevant error, since in this case
that means the most specific, deep, error inside the instance.
I.e. since only one of the schemas must match, we look for the most
relevant one.
"""
validator = Draft4Validator(
{
"properties" : {
"foo" : {
"oneOf" : [
{"type" : "string"},
{"properties" : {"bar" : {"type" : "array"}}},
],
},
},
},
)
best = self.best_match(validator.iter_errors({"foo" : {"bar" : 12}}))
self.assertEqual(best.validator_value, "array")
def test_if_the_most_relevant_error_is_allOf_it_is_traversed(self):
"""
Now, if the error is allOf, we traverse but select the *most* relevant
error from the context, because all schemas here must match anyways.
"""
validator = Draft4Validator(
{
"properties" : {
"foo" : {
"allOf" : [
{"type" : "string"},
{"properties" : {"bar" : {"type" : "array"}}},
],
},
},
},
)
best = self.best_match(validator.iter_errors({"foo" : {"bar" : 12}}))
self.assertEqual(best.validator_value, "string")
def test_nested_context_for_oneOf(self):
validator = Draft4Validator(
{
"properties" : {
"foo" : {
"oneOf" : [
{"type" : "string"},
{
"oneOf" : [
{"type" : "string"},
{
"properties" : {
"bar" : {"type" : "array"}
},
},
],
},
],
},
},
},
)
best = self.best_match(validator.iter_errors({"foo" : {"bar" : 12}}))
self.assertEqual(best.validator_value, "array")
def test_one_error(self):
validator = Draft4Validator({"minProperties" : 2})
error, = validator.iter_errors({})
self.assertEqual(
exceptions.best_match(validator.iter_errors({})).validator,
"minProperties",
)
def test_no_errors(self):
validator = Draft4Validator({})
self.assertIsNone(exceptions.best_match(validator.iter_errors({})))
class TestByRelevance(unittest.TestCase):
def test_short_paths_are_better_matches(self):
shallow = exceptions.ValidationError("Oh no!", path=["baz"])
deep = exceptions.ValidationError("Oh yes!", path=["foo", "bar"])
match = max([shallow, deep], key=exceptions.relevance)
self.assertIs(match, shallow)
match = max([deep, shallow], key=exceptions.relevance)
self.assertIs(match, shallow)
def test_global_errors_are_even_better_matches(self):
shallow = exceptions.ValidationError("Oh no!", path=[])
deep = exceptions.ValidationError("Oh yes!", path=["foo"])
errors = sorted([shallow, deep], key=exceptions.relevance)
self.assertEqual(
[list(error.path) for error in errors],
[["foo"], []],
)
errors = sorted([deep, shallow], key=exceptions.relevance)
self.assertEqual(
[list(error.path) for error in errors],
[["foo"], []],
)
def test_weak_validators_are_lower_priority(self):
weak = exceptions.ValidationError("Oh no!", path=[], validator="a")
normal = exceptions.ValidationError("Oh yes!", path=[], validator="b")
best_match = exceptions.by_relevance(weak="a")
match = max([weak, normal], key=best_match)
self.assertIs(match, normal)
match = max([normal, weak], key=best_match)
self.assertIs(match, normal)
def test_strong_validators_are_higher_priority(self):
weak = exceptions.ValidationError("Oh no!", path=[], validator="a")
normal = exceptions.ValidationError("Oh yes!", path=[], validator="b")
strong = exceptions.ValidationError("Oh fine!", path=[], validator="c")
best_match = exceptions.by_relevance(weak="a", strong="c")
match = max([weak, normal, strong], key=best_match)
self.assertIs(match, strong)
match = max([strong, normal, weak], key=best_match)
self.assertIs(match, strong)
class TestErrorTree(unittest.TestCase):
def test_it_knows_how_many_total_errors_it_contains(self):
errors = [mock.MagicMock() for _ in range(8)]
tree = exceptions.ErrorTree(errors)
self.assertEqual(tree.total_errors, 8)
def test_it_contains_an_item_if_the_item_had_an_error(self):
errors = [exceptions.ValidationError("a message", path=["bar"])]
tree = exceptions.ErrorTree(errors)
self.assertIn("bar", tree)
def test_it_does_not_contain_an_item_if_the_item_had_no_error(self):
errors = [exceptions.ValidationError("a message", path=["bar"])]
tree = exceptions.ErrorTree(errors)
self.assertNotIn("foo", tree)
def test_validators_that_failed_appear_in_errors_dict(self):
error = exceptions.ValidationError("a message", validator="foo")
tree = exceptions.ErrorTree([error])
self.assertEqual(tree.errors, {"foo" : error})
def test_it_creates_a_child_tree_for_each_nested_path(self):
errors = [
exceptions.ValidationError("a bar message", path=["bar"]),
exceptions.ValidationError("a bar -> 0 message", path=["bar", 0]),
]
tree = exceptions.ErrorTree(errors)
self.assertIn(0, tree["bar"])
self.assertNotIn(1, tree["bar"])
def test_children_have_their_errors_dicts_built(self):
e1, e2 = (
exceptions.ValidationError("1", validator="foo", path=["bar", 0]),
exceptions.ValidationError("2", validator="quux", path=["bar", 0]),
)
tree = exceptions.ErrorTree([e1, e2])
self.assertEqual(tree["bar"][0].errors, {"foo" : e1, "quux" : e2})
def test_it_does_not_contain_subtrees_that_are_not_in_the_instance(self):
error = exceptions.ValidationError("123", validator="foo", instance=[])
tree = exceptions.ErrorTree([error])
with self.assertRaises(IndexError):
tree[0]
def test_if_its_in_the_tree_anyhow_it_does_not_raise_an_error(self):
"""
If a validator is dumb (like :validator:`required` in draft 3) and
refers to a path that isn't in the instance, the tree still properly
returns a subtree for that path.
"""
error = exceptions.ValidationError(
"a message", validator="foo", instance={}, path=["foo"],
)
tree = exceptions.ErrorTree([error])
self.assertIsInstance(tree["foo"], exceptions.ErrorTree)
class TestErrorReprStr(unittest.TestCase):
def make_error(self, **kwargs):
defaults = dict(
message=u"hello",
validator=u"type",
validator_value=u"string",
instance=5,
schema={u"type": u"string"},
)
defaults.update(kwargs)
return exceptions.ValidationError(**defaults)
def assertShows(self, expected, **kwargs):
if PY3:
expected = expected.replace("u'", "'")
expected = textwrap.dedent(expected).rstrip("\n")
error = self.make_error(**kwargs)
message_line, _, rest = str(error).partition("\n")
self.assertEqual(message_line, error.message)
self.assertEqual(rest, expected)
def test_repr(self):
self.assertEqual(
repr(exceptions.ValidationError(message="Hello!")),
"<ValidationError: %r>" % "Hello!",
)
def test_unset_error(self):
error = exceptions.ValidationError("message")
self.assertEqual(str(error), "message")
kwargs = {
"validator": "type",
"validator_value": "string",
"instance": 5,
"schema": {"type": "string"}
}
# Just the message should show if any of the attributes are unset
for attr in kwargs:
k = dict(kwargs)
del k[attr]
error = exceptions.ValidationError("message", **k)
self.assertEqual(str(error), "message")
def test_empty_paths(self):
self.assertShows(
"""
Failed validating u'type' in schema:
{u'type': u'string'}
On instance:
5
""",
path=[],
schema_path=[],
)
def test_one_item_paths(self):
self.assertShows(
"""
Failed validating u'type' in schema:
{u'type': u'string'}
On instance[0]:
5
""",
path=[0],
schema_path=["items"],
)
def test_multiple_item_paths(self):
self.assertShows(
"""
Failed validating u'type' in schema[u'items'][0]:
{u'type': u'string'}
On instance[0][u'a']:
5
""",
path=[0, u"a"],
schema_path=[u"items", 0, 1],
)
def test_uses_pprint(self):
with mock.patch("pprint.pformat") as pformat:
str(self.make_error())
self.assertEqual(pformat.call_count, 2) # schema + instance
def test_str_works_with_instances_having_overriden_eq_operator(self):
"""
Check for https://github.com/Julian/jsonschema/issues/164 which
rendered exceptions unusable when a `ValidationError` involved
instances with an `__eq__` method that returned truthy values.
"""
instance = mock.MagicMock()
error = exceptions.ValidationError(
"a message",
validator="foo",
instance=instance,
validator_value="some",
schema="schema",
)
str(error)
self.assertFalse(instance.__eq__.called)


@@ -0,0 +1,63 @@
"""
Tests for the parts of jsonschema related to the :validator:`format` property.
"""
from jsonschema.tests.compat import mock, unittest
from jsonschema import FormatError, ValidationError, FormatChecker
from jsonschema.validators import Draft4Validator
class TestFormatChecker(unittest.TestCase):
def setUp(self):
self.fn = mock.Mock()
def test_it_can_validate_no_formats(self):
checker = FormatChecker(formats=())
self.assertFalse(checker.checkers)
def test_it_raises_a_key_error_for_unknown_formats(self):
with self.assertRaises(KeyError):
FormatChecker(formats=["o noes"])
def test_it_can_register_cls_checkers(self):
with mock.patch.dict(FormatChecker.checkers, clear=True):
FormatChecker.cls_checks("new")(self.fn)
self.assertEqual(FormatChecker.checkers, {"new" : (self.fn, ())})
def test_it_can_register_checkers(self):
checker = FormatChecker()
checker.checks("new")(self.fn)
self.assertEqual(
checker.checkers,
dict(FormatChecker.checkers, new=(self.fn, ()))
)
def test_it_catches_registered_errors(self):
checker = FormatChecker()
cause = self.fn.side_effect = ValueError()
checker.checks("foo", raises=ValueError)(self.fn)
with self.assertRaises(FormatError) as cm:
checker.check("bar", "foo")
self.assertIs(cm.exception.cause, cause)
self.assertIs(cm.exception.__cause__, cause)
# Unregistered errors should not be caught
self.fn.side_effect = AttributeError
with self.assertRaises(AttributeError):
checker.check("bar", "foo")
def test_format_error_causes_become_validation_error_causes(self):
checker = FormatChecker()
checker.checks("foo", raises=ValueError)(self.fn)
cause = self.fn.side_effect = ValueError()
validator = Draft4Validator({"format" : "foo"}, format_checker=checker)
with self.assertRaises(ValidationError) as cm:
validator.validate("bar")
self.assertIs(cm.exception.__cause__, cause)


@@ -0,0 +1,290 @@
"""
Test runner for the JSON Schema official test suite
Tests comprehensive correctness of each draft's validator.
See https://github.com/json-schema/JSON-Schema-Test-Suite for details.
"""
from contextlib import closing
from decimal import Decimal
import glob
import json
import io
import itertools
import os
import re
import subprocess
import sys
try:
from sys import pypy_version_info
except ImportError:
pypy_version_info = None
from jsonschema import (
FormatError, SchemaError, ValidationError, Draft3Validator,
Draft4Validator, FormatChecker, draft3_format_checker,
draft4_format_checker, validate,
)
from jsonschema.compat import PY3
from jsonschema.tests.compat import mock, unittest
import jsonschema
REPO_ROOT = os.path.join(os.path.dirname(jsonschema.__file__), os.path.pardir)
SUITE = os.getenv("JSON_SCHEMA_TEST_SUITE", os.path.join(REPO_ROOT, "json"))
if not os.path.isdir(SUITE):
raise ValueError(
"Can't find the JSON-Schema-Test-Suite directory. Set the "
"'JSON_SCHEMA_TEST_SUITE' environment variable or run the tests from "
"alongside a checkout of the suite."
)
TESTS_DIR = os.path.join(SUITE, "tests")
JSONSCHEMA_SUITE = os.path.join(SUITE, "bin", "jsonschema_suite")
remotes_stdout = subprocess.Popen(
["python", JSONSCHEMA_SUITE, "remotes"], stdout=subprocess.PIPE,
).stdout
with closing(remotes_stdout):
if PY3:
remotes_stdout = io.TextIOWrapper(remotes_stdout)
REMOTES = json.load(remotes_stdout)
def make_case(schema, data, valid, name):
if valid:
def test_case(self):
kwargs = getattr(self, "validator_kwargs", {})
validate(data, schema, cls=self.validator_class, **kwargs)
else:
def test_case(self):
kwargs = getattr(self, "validator_kwargs", {})
with self.assertRaises(ValidationError):
validate(data, schema, cls=self.validator_class, **kwargs)
if not PY3:
name = name.encode("utf-8")
test_case.__name__ = name
return test_case
def maybe_skip(skip, test_case, case, test):
if skip is not None:
reason = skip(case, test)
if reason is not None:
test_case = unittest.skip(reason)(test_case)
return test_case
def load_json_cases(tests_glob, ignore_glob="", basedir=TESTS_DIR, skip=None):
if ignore_glob:
ignore_glob = os.path.join(basedir, ignore_glob)
def add_test_methods(test_class):
ignored = set(glob.iglob(ignore_glob))
for filename in glob.iglob(os.path.join(basedir, tests_glob)):
if filename in ignored:
continue
validating, _ = os.path.splitext(os.path.basename(filename))
id = itertools.count(1)
with open(filename) as test_file:
for case in json.load(test_file):
for test in case["tests"]:
name = "test_%s_%s_%s" % (
validating,
next(id),
re.sub(r"[\W ]+", "_", test["description"]),
)
assert not hasattr(test_class, name), name
test_case = make_case(
data=test["data"],
schema=case["schema"],
valid=test["valid"],
name=name,
)
test_case = maybe_skip(skip, test_case, case, test)
setattr(test_class, name, test_case)
return test_class
return add_test_methods
class TypesMixin(object):
@unittest.skipIf(PY3, "In Python 3 json.load always produces unicode")
def test_string_a_bytestring_is_a_string(self):
self.validator_class({"type" : "string"}).validate(b"foo")
class DecimalMixin(object):
def test_it_can_validate_with_decimals(self):
schema = {"type" : "number"}
validator = self.validator_class(
schema, types={"number" : (int, float, Decimal)}
)
for valid in [1, 1.1, Decimal(1) / Decimal(8)]:
validator.validate(valid)
for invalid in ["foo", {}, [], True, None]:
with self.assertRaises(ValidationError):
validator.validate(invalid)
def missing_format(checker):
def missing_format(case, test):
format = case["schema"].get("format")
if format not in checker.checkers:
return "Format checker {0!r} not found.".format(format)
elif (
format == "date-time" and
pypy_version_info is not None and
pypy_version_info[:2] <= (1, 9)
):
# datetime.datetime is overzealous about typechecking in <=1.9
return "datetime.datetime is broken on this version of PyPy."
return missing_format
class FormatMixin(object):
def test_it_returns_true_for_formats_it_does_not_know_about(self):
validator = self.validator_class(
{"format" : "carrot"}, format_checker=FormatChecker(),
)
validator.validate("bugs")
def test_it_does_not_validate_formats_by_default(self):
validator = self.validator_class({})
self.assertIsNone(validator.format_checker)
def test_it_validates_formats_if_a_checker_is_provided(self):
checker = mock.Mock(spec=FormatChecker)
validator = self.validator_class(
{"format" : "foo"}, format_checker=checker,
)
validator.validate("bar")
checker.check.assert_called_once_with("bar", "foo")
cause = ValueError()
checker.check.side_effect = FormatError('aoeu', cause=cause)
with self.assertRaises(ValidationError) as cm:
validator.validate("bar")
# Make sure original cause is attached
self.assertIs(cm.exception.cause, cause)
def test_it_validates_formats_of_any_type(self):
checker = mock.Mock(spec=FormatChecker)
validator = self.validator_class(
{"format" : "foo"}, format_checker=checker,
)
validator.validate([1, 2, 3])
checker.check.assert_called_once_with([1, 2, 3], "foo")
cause = ValueError()
checker.check.side_effect = FormatError('aoeu', cause=cause)
with self.assertRaises(ValidationError) as cm:
validator.validate([1, 2, 3])
# Make sure original cause is attached
self.assertIs(cm.exception.cause, cause)
if sys.maxunicode == 2 ** 16 - 1: # This is a narrow build.
def narrow_unicode_build(case, test):
if "supplementary Unicode" in test["description"]:
return "Not running surrogate Unicode case, this Python is narrow."
else:
def narrow_unicode_build(case, test): # This isn't, skip nothing.
return
@load_json_cases(
"draft3/*.json",
skip=narrow_unicode_build,
ignore_glob="draft3/refRemote.json",
)
@load_json_cases(
"draft3/optional/format.json", skip=missing_format(draft3_format_checker)
)
@load_json_cases("draft3/optional/bignum.json")
@load_json_cases("draft3/optional/zeroTerminatedFloats.json")
class TestDraft3(unittest.TestCase, TypesMixin, DecimalMixin, FormatMixin):
validator_class = Draft3Validator
validator_kwargs = {"format_checker" : draft3_format_checker}
def test_any_type_is_valid_for_type_any(self):
validator = self.validator_class({"type" : "any"})
validator.validate(mock.Mock())
# TODO: we're in need of more meta schema tests
def test_invalid_properties(self):
with self.assertRaises(SchemaError):
validate({}, {"properties": {"test": True}},
cls=self.validator_class)
def test_minItems_invalid_string(self):
with self.assertRaises(SchemaError):
# needs to be an integer
validate([1], {"minItems" : "1"}, cls=self.validator_class)
@load_json_cases(
"draft4/*.json",
skip=narrow_unicode_build,
ignore_glob="draft4/refRemote.json",
)
@load_json_cases(
"draft4/optional/format.json", skip=missing_format(draft4_format_checker)
)
@load_json_cases("draft4/optional/bignum.json")
@load_json_cases("draft4/optional/zeroTerminatedFloats.json")
class TestDraft4(unittest.TestCase, TypesMixin, DecimalMixin, FormatMixin):
validator_class = Draft4Validator
validator_kwargs = {"format_checker" : draft4_format_checker}
# TODO: we're in need of more meta schema tests
def test_invalid_properties(self):
with self.assertRaises(SchemaError):
validate({}, {"properties": {"test": True}},
cls=self.validator_class)
def test_minItems_invalid_string(self):
with self.assertRaises(SchemaError):
# needs to be an integer
validate([1], {"minItems" : "1"}, cls=self.validator_class)
class RemoteRefResolutionMixin(object):
def setUp(self):
patch = mock.patch("jsonschema.validators.requests")
requests = patch.start()
requests.get.side_effect = self.resolve
self.addCleanup(patch.stop)
def resolve(self, reference):
_, _, reference = reference.partition("http://localhost:1234/")
return mock.Mock(**{"json.return_value" : REMOTES.get(reference)})
@load_json_cases("draft3/refRemote.json")
class Draft3RemoteResolution(RemoteRefResolutionMixin, unittest.TestCase):
validator_class = Draft3Validator
@load_json_cases("draft4/refRemote.json")
class Draft4RemoteResolution(RemoteRefResolutionMixin, unittest.TestCase):
validator_class = Draft4Validator


@@ -0,0 +1,786 @@
from collections import deque
from contextlib import contextmanager
import json
from jsonschema import FormatChecker, ValidationError
from jsonschema.tests.compat import mock, unittest
from jsonschema.validators import (
RefResolutionError, UnknownType, Draft3Validator,
Draft4Validator, RefResolver, create, extend, validator_for, validate,
)
class TestCreateAndExtend(unittest.TestCase):
def setUp(self):
self.meta_schema = {u"properties" : {u"smelly" : {}}}
self.smelly = mock.MagicMock()
self.validators = {u"smelly" : self.smelly}
self.types = {u"dict" : dict}
self.Validator = create(
meta_schema=self.meta_schema,
validators=self.validators,
default_types=self.types,
)
self.validator_value = 12
self.schema = {u"smelly" : self.validator_value}
self.validator = self.Validator(self.schema)
def test_attrs(self):
self.assertEqual(self.Validator.VALIDATORS, self.validators)
self.assertEqual(self.Validator.META_SCHEMA, self.meta_schema)
self.assertEqual(self.Validator.DEFAULT_TYPES, self.types)
def test_init(self):
self.assertEqual(self.validator.schema, self.schema)
def test_iter_errors(self):
instance = "hello"
self.smelly.return_value = []
self.assertEqual(list(self.validator.iter_errors(instance)), [])
error = mock.Mock()
self.smelly.return_value = [error]
self.assertEqual(list(self.validator.iter_errors(instance)), [error])
self.smelly.assert_called_with(
self.validator, self.validator_value, instance, self.schema,
)
def test_if_a_version_is_provided_it_is_registered(self):
with mock.patch("jsonschema.validators.validates") as validates:
validates.side_effect = lambda version : lambda cls : cls
Validator = create(meta_schema={u"id" : ""}, version="my version")
validates.assert_called_once_with("my version")
self.assertEqual(Validator.__name__, "MyVersionValidator")
def test_if_a_version_is_not_provided_it_is_not_registered(self):
with mock.patch("jsonschema.validators.validates") as validates:
create(meta_schema={u"id" : "id"})
self.assertFalse(validates.called)
def test_extend(self):
validators = dict(self.Validator.VALIDATORS)
new = mock.Mock()
Extended = extend(self.Validator, validators={u"a new one" : new})
validators.update([(u"a new one", new)])
self.assertEqual(Extended.VALIDATORS, validators)
self.assertNotIn(u"a new one", self.Validator.VALIDATORS)
self.assertEqual(Extended.META_SCHEMA, self.Validator.META_SCHEMA)
self.assertEqual(Extended.DEFAULT_TYPES, self.Validator.DEFAULT_TYPES)
class TestIterErrors(unittest.TestCase):
def setUp(self):
self.validator = Draft3Validator({})
def test_iter_errors(self):
instance = [1, 2]
schema = {
u"disallow" : u"array",
u"enum" : [["a", "b", "c"], ["d", "e", "f"]],
u"minItems" : 3
}
got = (e.message for e in self.validator.iter_errors(instance, schema))
expected = [
"%r is disallowed for [1, 2]" % (schema["disallow"],),
"[1, 2] is too short",
"[1, 2] is not one of %r" % (schema["enum"],),
]
self.assertEqual(sorted(got), sorted(expected))
def test_iter_errors_multiple_failures_one_validator(self):
instance = {"foo" : 2, "bar" : [1], "baz" : 15, "quux" : "spam"}
schema = {
u"properties" : {
"foo" : {u"type" : "string"},
"bar" : {u"minItems" : 2},
"baz" : {u"maximum" : 10, u"enum" : [2, 4, 6, 8]},
}
}
errors = list(self.validator.iter_errors(instance, schema))
self.assertEqual(len(errors), 4)
class TestValidationErrorMessages(unittest.TestCase):
def message_for(self, instance, schema, *args, **kwargs):
kwargs.setdefault("cls", Draft3Validator)
with self.assertRaises(ValidationError) as e:
validate(instance, schema, *args, **kwargs)
return e.exception.message
def test_single_type_failure(self):
message = self.message_for(instance=1, schema={u"type" : u"string"})
self.assertEqual(message, "1 is not of type %r" % u"string")
def test_single_type_list_failure(self):
message = self.message_for(instance=1, schema={u"type" : [u"string"]})
self.assertEqual(message, "1 is not of type %r" % u"string")
def test_multiple_type_failure(self):
types = u"string", u"object"
message = self.message_for(instance=1, schema={u"type" : list(types)})
self.assertEqual(message, "1 is not of type %r, %r" % types)
def test_object_without_title_type_failure(self):
type = {u"type" : [{u"minimum" : 3}]}
message = self.message_for(instance=1, schema={u"type" : [type]})
self.assertEqual(message, "1 is not of type %r" % (type,))
def test_object_with_name_type_failure(self):
name = "Foo"
schema = {u"type" : [{u"name" : name, u"minimum" : 3}]}
message = self.message_for(instance=1, schema=schema)
self.assertEqual(message, "1 is not of type %r" % (name,))
def test_minimum(self):
message = self.message_for(instance=1, schema={"minimum" : 2})
self.assertEqual(message, "1 is less than the minimum of 2")
def test_maximum(self):
message = self.message_for(instance=1, schema={"maximum" : 0})
self.assertEqual(message, "1 is greater than the maximum of 0")
def test_dependencies_failure_has_single_element_not_list(self):
depend, on = "bar", "foo"
schema = {u"dependencies" : {depend : on}}
message = self.message_for({"bar" : 2}, schema)
self.assertEqual(message, "%r is a dependency of %r" % (on, depend))
def test_additionalItems_single_failure(self):
message = self.message_for(
[2], {u"items" : [], u"additionalItems" : False},
)
self.assertIn("(2 was unexpected)", message)
def test_additionalItems_multiple_failures(self):
message = self.message_for(
[1, 2, 3], {u"items" : [], u"additionalItems" : False}
)
self.assertIn("(1, 2, 3 were unexpected)", message)
def test_additionalProperties_single_failure(self):
additional = "foo"
schema = {u"additionalProperties" : False}
message = self.message_for({additional : 2}, schema)
self.assertIn("(%r was unexpected)" % (additional,), message)
def test_additionalProperties_multiple_failures(self):
schema = {u"additionalProperties" : False}
message = self.message_for(dict.fromkeys(["foo", "bar"]), schema)
self.assertIn(repr("foo"), message)
self.assertIn(repr("bar"), message)
self.assertIn("were unexpected)", message)
def test_invalid_format_default_message(self):
checker = FormatChecker(formats=())
check_fn = mock.Mock(return_value=False)
checker.checks(u"thing")(check_fn)
schema = {u"format" : u"thing"}
message = self.message_for("bla", schema, format_checker=checker)
self.assertIn(repr("bla"), message)
self.assertIn(repr("thing"), message)
self.assertIn("is not a", message)
class TestValidationErrorDetails(unittest.TestCase):
# TODO: These really need unit tests for each individual validator, rather
# than just these higher level tests.
def test_anyOf(self):
instance = 5
schema = {
"anyOf": [
{"minimum": 20},
{"type": "string"}
]
}
validator = Draft4Validator(schema)
errors = list(validator.iter_errors(instance))
self.assertEqual(len(errors), 1)
e = errors[0]
self.assertEqual(e.validator, "anyOf")
self.assertEqual(e.validator_value, schema["anyOf"])
self.assertEqual(e.instance, instance)
self.assertEqual(e.schema, schema)
self.assertIsNone(e.parent)
self.assertEqual(e.path, deque([]))
self.assertEqual(e.relative_path, deque([]))
self.assertEqual(e.absolute_path, deque([]))
self.assertEqual(e.schema_path, deque(["anyOf"]))
self.assertEqual(e.relative_schema_path, deque(["anyOf"]))
self.assertEqual(e.absolute_schema_path, deque(["anyOf"]))
self.assertEqual(len(e.context), 2)
e1, e2 = sorted_errors(e.context)
self.assertEqual(e1.validator, "minimum")
self.assertEqual(e1.validator_value, schema["anyOf"][0]["minimum"])
self.assertEqual(e1.instance, instance)
self.assertEqual(e1.schema, schema["anyOf"][0])
self.assertIs(e1.parent, e)
self.assertEqual(e1.path, deque([]))
self.assertEqual(e1.absolute_path, deque([]))
self.assertEqual(e1.relative_path, deque([]))
self.assertEqual(e1.schema_path, deque([0, "minimum"]))
self.assertEqual(e1.relative_schema_path, deque([0, "minimum"]))
self.assertEqual(
e1.absolute_schema_path, deque(["anyOf", 0, "minimum"]),
)
self.assertFalse(e1.context)
self.assertEqual(e2.validator, "type")
self.assertEqual(e2.validator_value, schema["anyOf"][1]["type"])
self.assertEqual(e2.instance, instance)
self.assertEqual(e2.schema, schema["anyOf"][1])
self.assertIs(e2.parent, e)
self.assertEqual(e2.path, deque([]))
self.assertEqual(e2.relative_path, deque([]))
self.assertEqual(e2.absolute_path, deque([]))
self.assertEqual(e2.schema_path, deque([1, "type"]))
self.assertEqual(e2.relative_schema_path, deque([1, "type"]))
self.assertEqual(e2.absolute_schema_path, deque(["anyOf", 1, "type"]))
self.assertEqual(len(e2.context), 0)
def test_type(self):
instance = {"foo": 1}
schema = {
"type": [
{"type": "integer"},
{
"type": "object",
"properties": {
"foo": {"enum": [2]}
}
}
]
}
validator = Draft3Validator(schema)
errors = list(validator.iter_errors(instance))
self.assertEqual(len(errors), 1)
e = errors[0]
self.assertEqual(e.validator, "type")
self.assertEqual(e.validator_value, schema["type"])
self.assertEqual(e.instance, instance)
self.assertEqual(e.schema, schema)
self.assertIsNone(e.parent)
self.assertEqual(e.path, deque([]))
self.assertEqual(e.relative_path, deque([]))
self.assertEqual(e.absolute_path, deque([]))
self.assertEqual(e.schema_path, deque(["type"]))
self.assertEqual(e.relative_schema_path, deque(["type"]))
self.assertEqual(e.absolute_schema_path, deque(["type"]))
self.assertEqual(len(e.context), 2)
e1, e2 = sorted_errors(e.context)
self.assertEqual(e1.validator, "type")
self.assertEqual(e1.validator_value, schema["type"][0]["type"])
self.assertEqual(e1.instance, instance)
self.assertEqual(e1.schema, schema["type"][0])
self.assertIs(e1.parent, e)
self.assertEqual(e1.path, deque([]))
self.assertEqual(e1.relative_path, deque([]))
self.assertEqual(e1.absolute_path, deque([]))
self.assertEqual(e1.schema_path, deque([0, "type"]))
self.assertEqual(e1.relative_schema_path, deque([0, "type"]))
self.assertEqual(e1.absolute_schema_path, deque(["type", 0, "type"]))
self.assertFalse(e1.context)
self.assertEqual(e2.validator, "enum")
self.assertEqual(e2.validator_value, [2])
self.assertEqual(e2.instance, 1)
self.assertEqual(e2.schema, {u"enum" : [2]})
self.assertIs(e2.parent, e)
self.assertEqual(e2.path, deque(["foo"]))
self.assertEqual(e2.relative_path, deque(["foo"]))
self.assertEqual(e2.absolute_path, deque(["foo"]))
self.assertEqual(
e2.schema_path, deque([1, "properties", "foo", "enum"]),
)
self.assertEqual(
e2.relative_schema_path, deque([1, "properties", "foo", "enum"]),
)
self.assertEqual(
e2.absolute_schema_path,
deque(["type", 1, "properties", "foo", "enum"]),
)
self.assertFalse(e2.context)
def test_single_nesting(self):
instance = {"foo" : 2, "bar" : [1], "baz" : 15, "quux" : "spam"}
schema = {
"properties" : {
"foo" : {"type" : "string"},
"bar" : {"minItems" : 2},
"baz" : {"maximum" : 10, "enum" : [2, 4, 6, 8]},
}
}
validator = Draft3Validator(schema)
errors = validator.iter_errors(instance)
e1, e2, e3, e4 = sorted_errors(errors)
self.assertEqual(e1.path, deque(["bar"]))
self.assertEqual(e2.path, deque(["baz"]))
self.assertEqual(e3.path, deque(["baz"]))
self.assertEqual(e4.path, deque(["foo"]))
self.assertEqual(e1.relative_path, deque(["bar"]))
self.assertEqual(e2.relative_path, deque(["baz"]))
self.assertEqual(e3.relative_path, deque(["baz"]))
self.assertEqual(e4.relative_path, deque(["foo"]))
self.assertEqual(e1.absolute_path, deque(["bar"]))
self.assertEqual(e2.absolute_path, deque(["baz"]))
self.assertEqual(e3.absolute_path, deque(["baz"]))
self.assertEqual(e4.absolute_path, deque(["foo"]))
self.assertEqual(e1.validator, "minItems")
self.assertEqual(e2.validator, "enum")
self.assertEqual(e3.validator, "maximum")
self.assertEqual(e4.validator, "type")
def test_multiple_nesting(self):
instance = [1, {"foo" : 2, "bar" : {"baz" : [1]}}, "quux"]
schema = {
"type" : "string",
"items" : {
"type" : ["string", "object"],
"properties" : {
"foo" : {"enum" : [1, 3]},
"bar" : {
"type" : "array",
"properties" : {
"bar" : {"required" : True},
"baz" : {"minItems" : 2},
}
}
}
}
}
validator = Draft3Validator(schema)
errors = validator.iter_errors(instance)
e1, e2, e3, e4, e5, e6 = sorted_errors(errors)
self.assertEqual(e1.path, deque([]))
self.assertEqual(e2.path, deque([0]))
self.assertEqual(e3.path, deque([1, "bar"]))
self.assertEqual(e4.path, deque([1, "bar", "bar"]))
self.assertEqual(e5.path, deque([1, "bar", "baz"]))
self.assertEqual(e6.path, deque([1, "foo"]))
self.assertEqual(e1.schema_path, deque(["type"]))
self.assertEqual(e2.schema_path, deque(["items", "type"]))
self.assertEqual(
list(e3.schema_path), ["items", "properties", "bar", "type"],
)
self.assertEqual(
list(e4.schema_path),
["items", "properties", "bar", "properties", "bar", "required"],
)
self.assertEqual(
list(e5.schema_path),
["items", "properties", "bar", "properties", "baz", "minItems"]
)
self.assertEqual(
list(e6.schema_path), ["items", "properties", "foo", "enum"],
)
self.assertEqual(e1.validator, "type")
self.assertEqual(e2.validator, "type")
self.assertEqual(e3.validator, "type")
self.assertEqual(e4.validator, "required")
self.assertEqual(e5.validator, "minItems")
self.assertEqual(e6.validator, "enum")
def test_additionalProperties(self):
instance = {"bar": "bar", "foo": 2}
schema = {
"additionalProperties" : {"type": "integer", "minimum": 5}
}
validator = Draft3Validator(schema)
errors = validator.iter_errors(instance)
e1, e2 = sorted_errors(errors)
self.assertEqual(e1.path, deque(["bar"]))
self.assertEqual(e2.path, deque(["foo"]))
self.assertEqual(e1.validator, "type")
self.assertEqual(e2.validator, "minimum")
def test_patternProperties(self):
instance = {"bar": 1, "foo": 2}
schema = {
"patternProperties" : {
"bar": {"type": "string"},
"foo": {"minimum": 5}
}
}
validator = Draft3Validator(schema)
errors = validator.iter_errors(instance)
e1, e2 = sorted_errors(errors)
self.assertEqual(e1.path, deque(["bar"]))
self.assertEqual(e2.path, deque(["foo"]))
self.assertEqual(e1.validator, "type")
self.assertEqual(e2.validator, "minimum")
def test_additionalItems(self):
instance = ["foo", 1]
schema = {
"items": [],
"additionalItems" : {"type": "integer", "minimum": 5}
}
validator = Draft3Validator(schema)
errors = validator.iter_errors(instance)
e1, e2 = sorted_errors(errors)
self.assertEqual(e1.path, deque([0]))
self.assertEqual(e2.path, deque([1]))
self.assertEqual(e1.validator, "type")
self.assertEqual(e2.validator, "minimum")
def test_additionalItems_with_items(self):
instance = ["foo", "bar", 1]
schema = {
"items": [{}],
"additionalItems" : {"type": "integer", "minimum": 5}
}
validator = Draft3Validator(schema)
errors = validator.iter_errors(instance)
e1, e2 = sorted_errors(errors)
self.assertEqual(e1.path, deque([1]))
self.assertEqual(e2.path, deque([2]))
self.assertEqual(e1.validator, "type")
self.assertEqual(e2.validator, "minimum")
class ValidatorTestMixin(object):
def setUp(self):
self.instance = mock.Mock()
self.schema = {}
self.resolver = mock.Mock()
self.validator = self.validator_class(self.schema)
def test_valid_instances_are_valid(self):
errors = iter([])
with mock.patch.object(
self.validator, "iter_errors", return_value=errors,
):
self.assertTrue(
self.validator.is_valid(self.instance, self.schema)
)
def test_invalid_instances_are_not_valid(self):
errors = iter([mock.Mock()])
with mock.patch.object(
self.validator, "iter_errors", return_value=errors,
):
self.assertFalse(
self.validator.is_valid(self.instance, self.schema)
)
def test_non_existent_properties_are_ignored(self):
instance, my_property, my_value = mock.Mock(), mock.Mock(), mock.Mock()
validate(instance=instance, schema={my_property : my_value})
def test_it_creates_a_ref_resolver_if_not_provided(self):
self.assertIsInstance(self.validator.resolver, RefResolver)
def test_it_delegates_to_a_ref_resolver(self):
resolver = RefResolver("", {})
schema = {"$ref" : mock.Mock()}
@contextmanager
def resolving():
yield {"type": "integer"}
with mock.patch.object(resolver, "resolving") as resolve:
resolve.return_value = resolving()
with self.assertRaises(ValidationError):
self.validator_class(schema, resolver=resolver).validate(None)
resolve.assert_called_once_with(schema["$ref"])
def test_is_type_is_true_for_valid_type(self):
self.assertTrue(self.validator.is_type("foo", "string"))
def test_is_type_is_false_for_invalid_type(self):
self.assertFalse(self.validator.is_type("foo", "array"))
def test_is_type_evades_bool_inheriting_from_int(self):
self.assertFalse(self.validator.is_type(True, "integer"))
self.assertFalse(self.validator.is_type(True, "number"))
def test_is_type_raises_exception_for_unknown_type(self):
with self.assertRaises(UnknownType):
self.validator.is_type("foo", object())
class TestDraft3Validator(ValidatorTestMixin, unittest.TestCase):
validator_class = Draft3Validator
def test_is_type_is_true_for_any_type(self):
self.assertTrue(self.validator.is_valid(mock.Mock(), {"type": "any"}))
def test_is_type_does_not_evade_bool_if_it_is_being_tested(self):
self.assertTrue(self.validator.is_type(True, "boolean"))
self.assertTrue(self.validator.is_valid(True, {"type": "any"}))
def test_non_string_custom_types(self):
schema = {'type': [None]}
cls = self.validator_class(schema, types={None: type(None)})
cls.validate(None, schema)
class TestDraft4Validator(ValidatorTestMixin, unittest.TestCase):
validator_class = Draft4Validator
class TestBuiltinFormats(unittest.TestCase):
"""
The built-in (specification-defined) formats do not raise type errors.
If an instance or value is not a string, it should be ignored.
"""
for format in FormatChecker.checkers:
def test(self, format=format):
v = Draft4Validator({"format": format}, format_checker=FormatChecker())
v.validate(123)
name = "test_{0}_ignores_non_strings".format(format)
test.__name__ = name
setattr(TestBuiltinFormats, name, test)
del test # Ugh py.test. Stop discovering top level tests.
class TestValidatorFor(unittest.TestCase):
def test_draft_3(self):
schema = {"$schema" : "http://json-schema.org/draft-03/schema"}
self.assertIs(validator_for(schema), Draft3Validator)
schema = {"$schema" : "http://json-schema.org/draft-03/schema#"}
self.assertIs(validator_for(schema), Draft3Validator)
def test_draft_4(self):
schema = {"$schema" : "http://json-schema.org/draft-04/schema"}
self.assertIs(validator_for(schema), Draft4Validator)
schema = {"$schema" : "http://json-schema.org/draft-04/schema#"}
self.assertIs(validator_for(schema), Draft4Validator)
def test_custom_validator(self):
Validator = create(meta_schema={"id" : "meta schema id"}, version="12")
schema = {"$schema" : "meta schema id"}
self.assertIs(validator_for(schema), Validator)
def test_validator_for_jsonschema_default(self):
self.assertIs(validator_for({}), Draft4Validator)
def test_validator_for_custom_default(self):
self.assertIs(validator_for({}, default=None), None)
class TestValidate(unittest.TestCase):
def test_draft3_validator_is_chosen(self):
schema = {"$schema" : "http://json-schema.org/draft-03/schema#"}
with mock.patch.object(Draft3Validator, "check_schema") as chk_schema:
validate({}, schema)
chk_schema.assert_called_once_with(schema)
# Make sure it works without the empty fragment
schema = {"$schema" : "http://json-schema.org/draft-03/schema"}
with mock.patch.object(Draft3Validator, "check_schema") as chk_schema:
validate({}, schema)
chk_schema.assert_called_once_with(schema)
def test_draft4_validator_is_chosen(self):
schema = {"$schema" : "http://json-schema.org/draft-04/schema#"}
with mock.patch.object(Draft4Validator, "check_schema") as chk_schema:
validate({}, schema)
chk_schema.assert_called_once_with(schema)
def test_draft4_validator_is_the_default(self):
with mock.patch.object(Draft4Validator, "check_schema") as chk_schema:
validate({}, {})
chk_schema.assert_called_once_with({})
class TestRefResolver(unittest.TestCase):
base_uri = ""
stored_uri = "foo://stored"
stored_schema = {"stored" : "schema"}
def setUp(self):
self.referrer = {}
self.store = {self.stored_uri : self.stored_schema}
self.resolver = RefResolver(self.base_uri, self.referrer, self.store)
def test_it_does_not_retrieve_schema_urls_from_the_network(self):
ref = Draft3Validator.META_SCHEMA["id"]
with mock.patch.object(self.resolver, "resolve_remote") as remote:
with self.resolver.resolving(ref) as resolved:
self.assertEqual(resolved, Draft3Validator.META_SCHEMA)
self.assertFalse(remote.called)
def test_it_resolves_local_refs(self):
ref = "#/properties/foo"
self.referrer["properties"] = {"foo" : object()}
with self.resolver.resolving(ref) as resolved:
self.assertEqual(resolved, self.referrer["properties"]["foo"])
def test_it_resolves_local_refs_with_id(self):
schema = {"id": "foo://bar/schema#", "a": {"foo": "bar"}}
resolver = RefResolver.from_schema(schema)
with resolver.resolving("#/a") as resolved:
self.assertEqual(resolved, schema["a"])
with resolver.resolving("foo://bar/schema#/a") as resolved:
self.assertEqual(resolved, schema["a"])
def test_it_retrieves_stored_refs(self):
with self.resolver.resolving(self.stored_uri) as resolved:
self.assertIs(resolved, self.stored_schema)
self.resolver.store["cached_ref"] = {"foo" : 12}
with self.resolver.resolving("cached_ref#/foo") as resolved:
self.assertEqual(resolved, 12)
def test_it_retrieves_unstored_refs_via_requests(self):
ref = "http://bar#baz"
schema = {"baz" : 12}
with mock.patch("jsonschema.validators.requests") as requests:
requests.get.return_value.json.return_value = schema
with self.resolver.resolving(ref) as resolved:
self.assertEqual(resolved, 12)
requests.get.assert_called_once_with("http://bar")
def test_it_retrieves_unstored_refs_via_urlopen(self):
ref = "http://bar#baz"
schema = {"baz" : 12}
with mock.patch("jsonschema.validators.requests", None):
with mock.patch("jsonschema.validators.urlopen") as urlopen:
urlopen.return_value.read.return_value = (
json.dumps(schema).encode("utf8"))
with self.resolver.resolving(ref) as resolved:
self.assertEqual(resolved, 12)
urlopen.assert_called_once_with("http://bar")
def test_it_can_construct_a_base_uri_from_a_schema(self):
schema = {"id" : "foo"}
resolver = RefResolver.from_schema(schema)
self.assertEqual(resolver.base_uri, "foo")
with resolver.resolving("") as resolved:
self.assertEqual(resolved, schema)
with resolver.resolving("#") as resolved:
self.assertEqual(resolved, schema)
with resolver.resolving("foo") as resolved:
self.assertEqual(resolved, schema)
with resolver.resolving("foo#") as resolved:
self.assertEqual(resolved, schema)
def test_it_can_construct_a_base_uri_from_a_schema_without_id(self):
schema = {}
resolver = RefResolver.from_schema(schema)
self.assertEqual(resolver.base_uri, "")
with resolver.resolving("") as resolved:
self.assertEqual(resolved, schema)
with resolver.resolving("#") as resolved:
self.assertEqual(resolved, schema)
def test_custom_uri_scheme_handlers(self):
schema = {"foo": "bar"}
ref = "foo://bar"
foo_handler = mock.Mock(return_value=schema)
resolver = RefResolver("", {}, handlers={"foo": foo_handler})
with resolver.resolving(ref) as resolved:
self.assertEqual(resolved, schema)
foo_handler.assert_called_once_with(ref)
def test_cache_remote_on(self):
ref = "foo://bar"
foo_handler = mock.Mock()
resolver = RefResolver(
"", {}, cache_remote=True, handlers={"foo" : foo_handler},
)
with resolver.resolving(ref):
pass
with resolver.resolving(ref):
pass
foo_handler.assert_called_once_with(ref)
def test_cache_remote_off(self):
ref = "foo://bar"
foo_handler = mock.Mock()
resolver = RefResolver(
"", {}, cache_remote=False, handlers={"foo" : foo_handler},
)
with resolver.resolving(ref):
pass
with resolver.resolving(ref):
pass
self.assertEqual(foo_handler.call_count, 2)
def test_if_you_give_it_junk_you_get_a_resolution_error(self):
ref = "foo://bar"
foo_handler = mock.Mock(side_effect=ValueError("Oh no! What's this?"))
resolver = RefResolver("", {}, handlers={"foo" : foo_handler})
with self.assertRaises(RefResolutionError) as err:
with resolver.resolving(ref):
pass
self.assertEqual(str(err.exception), "Oh no! What's this?")
def sorted_errors(errors):
def key(error):
return (
[str(e) for e in error.path],
[str(e) for e in error.schema_path]
)
return sorted(errors, key=key)
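The `is_type` tests above (`test_is_type_evades_bool_inheriting_from_int`) exercise a Python quirk: `bool` subclasses `int`, so a naive `isinstance` check would report `True` as an integer. The following is a minimal standalone sketch of that guard, not the vendored implementation itself:

```python
import numbers

def is_json_type(instance, pytypes):
    """Check `instance` against candidate Python types, refusing to
    treat bools as numbers unless bool is explicitly allowed."""
    if not isinstance(pytypes, tuple):
        pytypes = (pytypes,)
    # bool inherits from int, so ensure bools aren't reported as ints
    if isinstance(instance, bool):
        is_number = any(issubclass(t, numbers.Number) for t in pytypes)
        if is_number and bool not in pytypes:
            return False
    return isinstance(instance, pytypes)

print(isinstance(True, int))        # True: bool really is an int subclass
print(is_json_type(True, (int,)))   # False: guarded, not an "integer"
print(is_json_type(True, (bool,)))  # True: allowed when bool is requested
```

This is why the Draft 3 validator can still accept `True` for `{"type": "boolean"}` while rejecting it for `"integer"` and `"number"`.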


@ -0,0 +1,428 @@
from __future__ import division
import contextlib
import json
import numbers
try:
import requests
except ImportError:
requests = None
from jsonschema import _utils, _validators
from jsonschema.compat import (
Sequence, urljoin, urlsplit, urldefrag, unquote, urlopen,
str_types, int_types, iteritems,
)
from jsonschema.exceptions import ErrorTree # Backwards compatibility # noqa
from jsonschema.exceptions import RefResolutionError, SchemaError, UnknownType
_unset = _utils.Unset()
validators = {}
meta_schemas = _utils.URIDict()
def validates(version):
"""
Register the decorated validator for a ``version`` of the specification.
Registered validators and their meta schemas will be considered when
parsing ``$schema`` properties' URIs.
:argument str version: an identifier to use as the version's name
:returns: a class decorator to decorate the validator with the version
"""
def _validates(cls):
validators[version] = cls
if u"id" in cls.META_SCHEMA:
meta_schemas[cls.META_SCHEMA[u"id"]] = cls
return cls
return _validates
def create(meta_schema, validators=(), version=None, default_types=None): # noqa
if default_types is None:
default_types = {
u"array" : list, u"boolean" : bool, u"integer" : int_types,
u"null" : type(None), u"number" : numbers.Number, u"object" : dict,
u"string" : str_types,
}
class Validator(object):
VALIDATORS = dict(validators)
META_SCHEMA = dict(meta_schema)
DEFAULT_TYPES = dict(default_types)
def __init__(
self, schema, types=(), resolver=None, format_checker=None,
):
self._types = dict(self.DEFAULT_TYPES)
self._types.update(types)
if resolver is None:
resolver = RefResolver.from_schema(schema)
self.resolver = resolver
self.format_checker = format_checker
self.schema = schema
@classmethod
def check_schema(cls, schema):
for error in cls(cls.META_SCHEMA).iter_errors(schema):
raise SchemaError.create_from(error)
def iter_errors(self, instance, _schema=None):
if _schema is None:
_schema = self.schema
with self.resolver.in_scope(_schema.get(u"id", u"")):
ref = _schema.get(u"$ref")
if ref is not None:
validators = [(u"$ref", ref)]
else:
validators = iteritems(_schema)
for k, v in validators:
validator = self.VALIDATORS.get(k)
if validator is None:
continue
errors = validator(self, v, instance, _schema) or ()
for error in errors:
# set details if not already set by the called fn
error._set(
validator=k,
validator_value=v,
instance=instance,
schema=_schema,
)
if k != u"$ref":
error.schema_path.appendleft(k)
yield error
def descend(self, instance, schema, path=None, schema_path=None):
for error in self.iter_errors(instance, schema):
if path is not None:
error.path.appendleft(path)
if schema_path is not None:
error.schema_path.appendleft(schema_path)
yield error
def validate(self, *args, **kwargs):
for error in self.iter_errors(*args, **kwargs):
raise error
def is_type(self, instance, type):
if type not in self._types:
raise UnknownType(type, instance, self.schema)
pytypes = self._types[type]
# bool inherits from int, so ensure bools aren't reported as ints
if isinstance(instance, bool):
pytypes = _utils.flatten(pytypes)
is_number = any(
issubclass(pytype, numbers.Number) for pytype in pytypes
)
if is_number and bool not in pytypes:
return False
return isinstance(instance, pytypes)
def is_valid(self, instance, _schema=None):
error = next(self.iter_errors(instance, _schema), None)
return error is None
if version is not None:
Validator = validates(version)(Validator)
Validator.__name__ = version.title().replace(" ", "") + "Validator"
return Validator
def extend(validator, validators, version=None):
all_validators = dict(validator.VALIDATORS)
all_validators.update(validators)
return create(
meta_schema=validator.META_SCHEMA,
validators=all_validators,
version=version,
default_types=validator.DEFAULT_TYPES,
)
Draft3Validator = create(
meta_schema=_utils.load_schema("draft3"),
validators={
u"$ref" : _validators.ref,
u"additionalItems" : _validators.additionalItems,
u"additionalProperties" : _validators.additionalProperties,
u"dependencies" : _validators.dependencies,
u"disallow" : _validators.disallow_draft3,
u"divisibleBy" : _validators.multipleOf,
u"enum" : _validators.enum,
u"extends" : _validators.extends_draft3,
u"format" : _validators.format,
u"items" : _validators.items,
u"maxItems" : _validators.maxItems,
u"maxLength" : _validators.maxLength,
u"maximum" : _validators.maximum,
u"minItems" : _validators.minItems,
u"minLength" : _validators.minLength,
u"minimum" : _validators.minimum,
u"multipleOf" : _validators.multipleOf,
u"pattern" : _validators.pattern,
u"patternProperties" : _validators.patternProperties,
u"properties" : _validators.properties_draft3,
u"type" : _validators.type_draft3,
u"uniqueItems" : _validators.uniqueItems,
},
version="draft3",
)
Draft4Validator = create(
meta_schema=_utils.load_schema("draft4"),
validators={
u"$ref" : _validators.ref,
u"additionalItems" : _validators.additionalItems,
u"additionalProperties" : _validators.additionalProperties,
u"allOf" : _validators.allOf_draft4,
u"anyOf" : _validators.anyOf_draft4,
u"dependencies" : _validators.dependencies,
u"enum" : _validators.enum,
u"format" : _validators.format,
u"items" : _validators.items,
u"maxItems" : _validators.maxItems,
u"maxLength" : _validators.maxLength,
u"maxProperties" : _validators.maxProperties_draft4,
u"maximum" : _validators.maximum,
u"minItems" : _validators.minItems,
u"minLength" : _validators.minLength,
u"minProperties" : _validators.minProperties_draft4,
u"minimum" : _validators.minimum,
u"multipleOf" : _validators.multipleOf,
u"not" : _validators.not_draft4,
u"oneOf" : _validators.oneOf_draft4,
u"pattern" : _validators.pattern,
u"patternProperties" : _validators.patternProperties,
u"properties" : _validators.properties_draft4,
u"required" : _validators.required_draft4,
u"type" : _validators.type_draft4,
u"uniqueItems" : _validators.uniqueItems,
},
version="draft4",
)
class RefResolver(object):
"""
Resolve JSON References.
:argument str base_uri: URI of the referring document
:argument referrer: the actual referring document
:argument dict store: a mapping from URIs to documents to cache
:argument bool cache_remote: whether remote refs should be cached after
first resolution
:argument dict handlers: a mapping from URI schemes to functions that
should be used to retrieve them
"""
def __init__(
self, base_uri, referrer, store=(), cache_remote=True, handlers=(),
):
self.base_uri = base_uri
self.resolution_scope = base_uri
# This attribute is not used, it is for backwards compatibility
self.referrer = referrer
self.cache_remote = cache_remote
self.handlers = dict(handlers)
self.store = _utils.URIDict(
(id, validator.META_SCHEMA)
for id, validator in iteritems(meta_schemas)
)
self.store.update(store)
self.store[base_uri] = referrer
@classmethod
def from_schema(cls, schema, *args, **kwargs):
"""
Construct a resolver from a JSON schema object.
:argument schema schema: the referring schema
:rtype: :class:`RefResolver`
"""
return cls(schema.get(u"id", u""), schema, *args, **kwargs)
@contextlib.contextmanager
def in_scope(self, scope):
old_scope = self.resolution_scope
self.resolution_scope = urljoin(old_scope, scope)
try:
yield
finally:
self.resolution_scope = old_scope
@contextlib.contextmanager
def resolving(self, ref):
"""
Context manager which resolves a JSON ``ref`` and enters the
resolution scope of this ref.
:argument str ref: reference to resolve
"""
full_uri = urljoin(self.resolution_scope, ref)
uri, fragment = urldefrag(full_uri)
if not uri:
uri = self.base_uri
if uri in self.store:
document = self.store[uri]
else:
try:
document = self.resolve_remote(uri)
except Exception as exc:
raise RefResolutionError(exc)
old_base_uri, self.base_uri = self.base_uri, uri
try:
with self.in_scope(uri):
yield self.resolve_fragment(document, fragment)
finally:
self.base_uri = old_base_uri
def resolve_fragment(self, document, fragment):
"""
Resolve a ``fragment`` within the referenced ``document``.
        :argument document: the referent document
:argument str fragment: a URI fragment to resolve within it
"""
fragment = fragment.lstrip(u"/")
parts = unquote(fragment).split(u"/") if fragment else []
for part in parts:
part = part.replace(u"~1", u"/").replace(u"~0", u"~")
if isinstance(document, Sequence):
# Array indexes should be turned into integers
try:
part = int(part)
except ValueError:
pass
try:
document = document[part]
except (TypeError, LookupError):
raise RefResolutionError(
"Unresolvable JSON pointer: %r" % fragment
)
return document
def resolve_remote(self, uri):
"""
Resolve a remote ``uri``.
Does not check the store first, but stores the retrieved document in
the store if :attr:`RefResolver.cache_remote` is True.
.. note::
If the requests_ library is present, ``jsonschema`` will use it to
request the remote ``uri``, so that the correct encoding is
detected and used.
If it isn't, or if the scheme of the ``uri`` is not ``http`` or
``https``, UTF-8 is assumed.
:argument str uri: the URI to resolve
:returns: the retrieved document
.. _requests: http://pypi.python.org/pypi/requests/
"""
scheme = urlsplit(uri).scheme
if scheme in self.handlers:
result = self.handlers[scheme](uri)
elif (
scheme in [u"http", u"https"] and
requests and
getattr(requests.Response, "json", None) is not None
):
# Requests has support for detecting the correct encoding of
# json over http
if callable(requests.Response.json):
result = requests.get(uri).json()
else:
result = requests.get(uri).json
else:
# Otherwise, pass off to urllib and assume utf-8
result = json.loads(urlopen(uri).read().decode("utf-8"))
if self.cache_remote:
self.store[uri] = result
return result
def validator_for(schema, default=_unset):
if default is _unset:
default = Draft4Validator
return meta_schemas.get(schema.get(u"$schema", u""), default)
def validate(instance, schema, cls=None, *args, **kwargs):
"""
Validate an instance under the given schema.
>>> validate([2, 3, 4], {"maxItems" : 2})
Traceback (most recent call last):
...
ValidationError: [2, 3, 4] is too long
:func:`validate` will first verify that the provided schema is itself
valid, since not doing so can lead to less obvious error messages and fail
in less obvious or consistent ways. If you know you have a valid schema
already or don't care, you might prefer using the
:meth:`~IValidator.validate` method directly on a specific validator
(e.g. :meth:`Draft4Validator.validate`).
:argument instance: the instance to validate
:argument schema: the schema to validate with
:argument cls: an :class:`IValidator` class that will be used to validate
the instance.
If the ``cls`` argument is not provided, two things will happen in
accordance with the specification. First, if the schema has a
:validator:`$schema` property containing a known meta-schema [#]_ then the
proper validator will be used. The specification recommends that all
schemas contain :validator:`$schema` properties for this reason. If no
:validator:`$schema` property is found, the default validator class is
:class:`Draft4Validator`.
Any other provided positional and keyword arguments will be passed on when
instantiating the ``cls``.
:raises:
:exc:`ValidationError` if the instance is invalid
:exc:`SchemaError` if the schema itself is invalid
.. rubric:: Footnotes
.. [#] known by a validator registered with :func:`validates`
"""
if cls is None:
cls = validator_for(schema)
cls.check_schema(schema)
cls(schema, *args, **kwargs).validate(instance)
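`resolve_fragment` above walks a JSON Pointer (RFC 6901) through a document: it unescapes `~1`/`~0` and coerces array indexes to integers. A self-contained sketch of the same walk, with simplified error handling in place of `RefResolutionError`:

```python
from urllib.parse import unquote

def resolve_fragment(document, fragment):
    """Walk a URI fragment (a JSON Pointer) through `document`."""
    fragment = fragment.lstrip("/")
    parts = unquote(fragment).split("/") if fragment else []
    for part in parts:
        # ~1 and ~0 are the JSON Pointer escapes for "/" and "~"
        part = part.replace("~1", "/").replace("~0", "~")
        if isinstance(document, list):
            try:
                part = int(part)  # array indexes become integers
            except ValueError:
                pass
        try:
            document = document[part]
        except (TypeError, LookupError):
            raise LookupError("Unresolvable JSON pointer: %r" % fragment)
    return document

doc = {"properties": {"a/b": {"items": [10, 20]}}}
print(resolve_fragment(doc, "/properties/a~1b/items/1"))  # 20
```

Note the escape order: `~1` must be replaced before `~0`, so that an escaped `~1` in the source (`~01`) decodes to the literal string `~1` rather than a slash.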

504
lib/spack/external/nose/LICENSE vendored Normal file

@ -0,0 +1,504 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the Lesser GPL. It also counts
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it. You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations below.
When we speak of free software, we are referring to freedom of use,
not price. Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.
To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights. These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.
To protect each distributor, we want to make it very clear that
there is no warranty for the free library. Also, if the library is
modified by someone else and passed on, the recipients should know
that what they have is not the original version, so that the original
author's reputation will not be affected by problems that might be
introduced by others.
Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder. Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.
Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License. This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License. We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.
When a program is linked with a library, whether statically or using
a shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom. The Lesser General
Public License permits more lax criteria for linking other code with
the library.
We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License. It also provides other free software developers Less
of an advantage over competing non-free programs. These disadvantages
are the reason we use the ordinary General Public License for many
libraries. However, the Lesser license provides advantages in certain
special circumstances.
For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it becomes
a de-facto standard. To achieve this, non-free programs must be
allowed to use the library. A more frequent case is that a free
library does the same job as widely used non-free libraries. In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.
In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software. For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.
Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License").
Each licensee is addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (1) uses at run time a
copy of the library already present on the user's computer system,
rather than copying library functions into the executable, and (2)
will operate properly with a modified version of the library, if
the user installs one, as long as the modified version is
interface-compatible with the version that the work was made with.
c) Accompany the work with a written offer, valid for at
least three years, to give the same user the materials
specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.
d) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
e) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if
written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Libraries
If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change. You can do so by permitting
redistribution under these terms (or, alternatively, under the terms of the
ordinary General Public License).
To apply these terms, attach the following notices to the library. It is
safest to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
<one line to give the library's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the library, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the
library `Frob' (a library for tweaking knobs) written by James Random Hacker.
<signature of Ty Coon>, 1 April 1990
Ty Coon, President of Vice
That's all there is to it!

15
lib/spack/external/nose/__init__.py vendored Normal file

@@ -0,0 +1,15 @@
from nose.core import collector, main, run, run_exit, runmodule
# backwards compatibility
from nose.exc import SkipTest, DeprecatedTest
from nose.tools import with_setup
__author__ = 'Jason Pellerin'
__versioninfo__ = (1, 3, 7)
__version__ = '.'.join(map(str, __versioninfo__))
__all__ = [
'main', 'run', 'run_exit', 'runmodule', 'with_setup',
'SkipTest', 'DeprecatedTest', 'collector'
]

8
lib/spack/external/nose/__main__.py vendored Normal file

@@ -0,0 +1,8 @@
import sys
from nose.core import run_exit
if sys.argv[0].endswith('__main__.py'):
sys.argv[0] = '%s -m nose' % sys.executable
run_exit()

397
lib/spack/external/nose/case.py vendored Normal file

@@ -0,0 +1,397 @@
"""nose unittest.TestCase subclasses. It is not necessary to subclass these
classes when writing tests; they are used internally by nose.loader.TestLoader
to create test cases from test functions and methods in test classes.
"""
import logging
import sys
import unittest
from inspect import isfunction
from nose.config import Config
from nose.failure import Failure # for backwards compatibility
from nose.util import resolve_name, test_address, try_run
log = logging.getLogger(__name__)
__all__ = ['Test']
class Test(unittest.TestCase):
"""The universal test case wrapper.
When a plugin sees a test, it will always see an instance of this
class. To access the actual test case that will be run, access the
test property of the nose.case.Test instance.
"""
__test__ = False # do not collect
def __init__(self, test, config=None, resultProxy=None):
# sanity check
if not callable(test):
raise TypeError("nose.case.Test called with argument %r that "
"is not callable. A callable is required."
% test)
self.test = test
if config is None:
config = Config()
self.config = config
self.tbinfo = None
self.capturedOutput = None
self.resultProxy = resultProxy
self.plugins = config.plugins
self.passed = None
unittest.TestCase.__init__(self)
def __call__(self, *arg, **kwarg):
return self.run(*arg, **kwarg)
def __str__(self):
name = self.plugins.testName(self)
if name is not None:
return name
return str(self.test)
def __repr__(self):
return "Test(%r)" % self.test
def afterTest(self, result):
"""Called after test is complete (after result.stopTest)
"""
try:
afterTest = result.afterTest
except AttributeError:
pass
else:
afterTest(self.test)
def beforeTest(self, result):
"""Called before test is run (before result.startTest)
"""
try:
beforeTest = result.beforeTest
except AttributeError:
pass
else:
beforeTest(self.test)
def exc_info(self):
"""Extract exception info.
"""
exc, exv, tb = sys.exc_info()
return (exc, exv, tb)
def id(self):
"""Get a short(er) description of the test
"""
return self.test.id()
def address(self):
"""Return a round-trip name for this test, a name that can be
fed back as input to loadTestByName and (assuming the same
plugin configuration) result in the loading of this test.
"""
if hasattr(self.test, 'address'):
return self.test.address()
else:
# not a nose case
return test_address(self.test)
def _context(self):
try:
return self.test.context
except AttributeError:
pass
try:
return self.test.__class__
except AttributeError:
pass
try:
return resolve_name(self.test.__module__)
except AttributeError:
pass
return None
context = property(_context, None, None,
"""Get the context object of this test (if any).""")
def run(self, result):
"""Modified run for the test wrapper.
From here we don't call result.startTest or stopTest or
addSuccess. The wrapper calls addError/addFailure only if its
own setup or teardown fails, or running the wrapped test fails
(eg, if the wrapped "test" is not callable).
Two additional methods are called, beforeTest and
afterTest. These give plugins a chance to modify the wrapped
test before it is called and do cleanup after it is
called. They are called unconditionally.
"""
if self.resultProxy:
result = self.resultProxy(result, self)
try:
try:
self.beforeTest(result)
self.runTest(result)
except KeyboardInterrupt:
raise
except:
err = sys.exc_info()
result.addError(self, err)
finally:
self.afterTest(result)
def runTest(self, result):
"""Run the test. Plugins may alter the test by returning a
value from prepareTestCase. The value must be callable and
must accept one argument, the result instance.
"""
test = self.test
plug_test = self.config.plugins.prepareTestCase(self)
if plug_test is not None:
test = plug_test
test(result)
def shortDescription(self):
desc = self.plugins.describeTest(self)
if desc is not None:
return desc
# work around bug in unittest.TestCase.shortDescription
# with multiline docstrings.
test = self.test
try:
test._testMethodDoc = test._testMethodDoc.strip()# 2.5
except AttributeError:
try:
# 2.4 and earlier
test._TestCase__testMethodDoc = \
test._TestCase__testMethodDoc.strip()
except AttributeError:
pass
# 2.7 compat: shortDescription() always returns something
# which is a change from 2.6 and below, and breaks the
# testName plugin call.
try:
desc = self.test.shortDescription()
except Exception:
# this is probably caused by a problem in test.__str__() and is
# only triggered by python 3.1's unittest!
pass
try:
if desc == str(self.test):
return
except Exception:
# If str() triggers an exception then ignore it.
# see issue 422
pass
return desc
class TestBase(unittest.TestCase):
"""Common functionality for FunctionTestCase and MethodTestCase.
"""
__test__ = False # do not collect
def id(self):
return str(self)
def runTest(self):
self.test(*self.arg)
def shortDescription(self):
if hasattr(self.test, 'description'):
return self.test.description
func, arg = self._descriptors()
doc = getattr(func, '__doc__', None)
if not doc:
doc = str(self)
return doc.strip().split("\n")[0].strip()
class FunctionTestCase(TestBase):
"""TestCase wrapper for test functions.
Don't use this class directly; it is used internally in nose to
create test cases for test functions.
"""
__test__ = False # do not collect
def __init__(self, test, setUp=None, tearDown=None, arg=tuple(),
descriptor=None):
"""Initialize the MethodTestCase.
Required argument:
* test -- the test function to call.
Optional arguments:
* setUp -- function to run at setup.
* tearDown -- function to run at teardown.
* arg -- arguments to pass to the test function. This is to support
generator functions that yield arguments.
* descriptor -- the function, other than the test, that should be used
to construct the test name. This is to support generator functions.
"""
self.test = test
self.setUpFunc = setUp
self.tearDownFunc = tearDown
self.arg = arg
self.descriptor = descriptor
TestBase.__init__(self)
def address(self):
"""Return a round-trip name for this test, a name that can be
fed back as input to loadTestByName and (assuming the same
plugin configuration) result in the loading of this test.
"""
if self.descriptor is not None:
return test_address(self.descriptor)
else:
return test_address(self.test)
def _context(self):
return resolve_name(self.test.__module__)
context = property(_context, None, None,
"""Get context (module) of this test""")
def setUp(self):
"""Run any setup function attached to the test function
"""
if self.setUpFunc:
self.setUpFunc()
else:
names = ('setup', 'setUp', 'setUpFunc')
try_run(self.test, names)
def tearDown(self):
"""Run any teardown function attached to the test function
"""
if self.tearDownFunc:
self.tearDownFunc()
else:
names = ('teardown', 'tearDown', 'tearDownFunc')
try_run(self.test, names)
def __str__(self):
func, arg = self._descriptors()
if hasattr(func, 'compat_func_name'):
name = func.compat_func_name
else:
name = func.__name__
name = "%s.%s" % (func.__module__, name)
if arg:
name = "%s%s" % (name, arg)
# FIXME need to include the full dir path to disambiguate
# in cases where test module of the same name was seen in
# another directory (old fromDirectory)
return name
__repr__ = __str__
def _descriptors(self):
"""Get the descriptors of the test function: the function and
arguments that will be used to construct the test name. In
most cases, this is the function itself and no arguments. For
tests generated by generator functions, the original
(generator) function and args passed to the generated function
are returned.
"""
if self.descriptor:
return self.descriptor, self.arg
else:
return self.test, self.arg
class MethodTestCase(TestBase):
"""Test case wrapper for test methods.
Don't use this class directly; it is used internally in nose to
create test cases for test methods.
"""
__test__ = False # do not collect
def __init__(self, method, test=None, arg=tuple(), descriptor=None):
"""Initialize the MethodTestCase.
Required argument:
* method -- the method to call, may be bound or unbound. In either
case, a new instance of the method's class will be instantiated to
make the call. Note: In Python 3.x, if using an unbound method, you
must wrap it using pyversion.unbound_method.
Optional arguments:
* test -- the test function to call. If this is passed, it will be
called instead of getting a new bound method of the same name as the
desired method from the test instance. This is to support generator
methods that yield inline functions.
* arg -- arguments to pass to the test function. This is to support
generator methods that yield arguments.
* descriptor -- the function, other than the test, that should be used
to construct the test name. This is to support generator methods.
"""
self.method = method
self.test = test
self.arg = arg
self.descriptor = descriptor
if isfunction(method):
raise ValueError("Unbound methods must be wrapped using pyversion.unbound_method before passing to MethodTestCase")
self.cls = method.im_class
self.inst = self.cls()
if self.test is None:
method_name = self.method.__name__
self.test = getattr(self.inst, method_name)
TestBase.__init__(self)
def __str__(self):
func, arg = self._descriptors()
if hasattr(func, 'compat_func_name'):
name = func.compat_func_name
else:
name = func.__name__
name = "%s.%s.%s" % (self.cls.__module__,
self.cls.__name__,
name)
if arg:
name = "%s%s" % (name, arg)
return name
__repr__ = __str__
def address(self):
"""Return a round-trip name for this test, a name that can be
fed back as input to loadTestByName and (assuming the same
plugin configuration) result in the loading of this test.
"""
if self.descriptor is not None:
return test_address(self.descriptor)
else:
return test_address(self.method)
def _context(self):
return self.cls
context = property(_context, None, None,
"""Get context (class) of this test""")
def setUp(self):
try_run(self.inst, ('setup', 'setUp'))
def tearDown(self):
try_run(self.inst, ('teardown', 'tearDown'))
def _descriptors(self):
"""Get the descriptors of the test method: the method and
arguments that will be used to construct the test name. In
most cases, this is the method itself and no arguments. For
tests generated by generator methods, the original
(generator) method and args passed to the generated method
or function are returned.
"""
if self.descriptor:
return self.descriptor, self.arg
else:
return self.method, self.arg
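The `__str__`/`_descriptors` pair above builds the test's display name as `<module>.<Class>.<method>`, with any generator arguments appended via the tuple's repr. A minimal Python 3 sketch of that naming scheme (illustrative only, not the vendored nose code):

```python
# Sketch of MethodTestCase's naming convention: module, class, and method
# name joined with dots; generator args, if any, appended as a tuple repr.
def method_test_name(cls, func_name, arg=()):
    name = "%s.%s.%s" % (cls.__module__, cls.__name__, func_name)
    if arg:
        # "%s" on a tuple yields e.g. "(1, 2)", matching nose's output
        name = "%s%s" % (name, arg)
    return name

class Spam:
    def test_eggs(self):
        pass
```

So `method_test_name(Spam, "test_eggs", (1, 2))` ends in `Spam.test_eggs(1, 2)`, which is the round-trippable form nose prints for generated method tests.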

lib/spack/external/nose/commands.py (vendored, new file)
@@ -0,0 +1,172 @@
"""
nosetests setuptools command
----------------------------
The easiest way to run tests with nose is to use the `nosetests` setuptools
command::
python setup.py nosetests
This command has one *major* benefit over the standard `test` command: *all
nose plugins are supported*.
To configure the `nosetests` command, add a [nosetests] section to your
setup.cfg. The [nosetests] section can contain any command line arguments that
nosetests supports. The differences between issuing an option on the command
line and adding it to setup.cfg are:
* In setup.cfg, the -- prefix must be excluded
* In setup.cfg, command line flags that take no arguments must be given an
argument flag (1, T or TRUE for active, 0, F or FALSE for inactive)
Here's an example [nosetests] setup.cfg section::
[nosetests]
verbosity=1
detailed-errors=1
with-coverage=1
cover-package=nose
debug=nose.loader
pdb=1
pdb-failures=1
If you commonly run nosetests with a large number of options, using
the nosetests setuptools command and configuring with setup.cfg can
make running your tests much less tedious. (Note that the same options
and format supported in setup.cfg are supported in all other config
files, and the nosetests script will also load config files.)
Another reason to run tests with the command is that the command will
install packages listed in your `tests_require`, as well as doing a
complete build of your package before running tests. For packages with
dependencies or that build C extensions, using the setuptools command
can be more convenient than building by hand and running the nosetests
script.
Bootstrapping
-------------
If you are distributing your project and want users to be able to run tests
without having to install nose themselves, add nose to the setup_requires
section of your setup()::
setup(
# ...
setup_requires=['nose>=1.0']
)
This will direct setuptools to download and activate nose during the setup
process, making the ``nosetests`` command available.
"""
try:
from setuptools import Command
except ImportError:
Command = nosetests = None
else:
from nose.config import Config, option_blacklist, user_config_files, \
flag, _bool
from nose.core import TestProgram
from nose.plugins import DefaultPluginManager
def get_user_options(parser):
"""convert an optparse option list into a distutils option tuple list"""
opt_list = []
for opt in parser.option_list:
if opt._long_opts[0][2:] in option_blacklist:
continue
long_name = opt._long_opts[0][2:]
if opt.action not in ('store_true', 'store_false'):
long_name = long_name + "="
short_name = None
if opt._short_opts:
short_name = opt._short_opts[0][1:]
opt_list.append((long_name, short_name, opt.help or ""))
return opt_list
class nosetests(Command):
description = "Run unit tests using nosetests"
__config = Config(files=user_config_files(),
plugins=DefaultPluginManager())
__parser = __config.getParser()
user_options = get_user_options(__parser)
def initialize_options(self):
"""create the member variables, but change hyphens to
underscores
"""
self.option_to_cmds = {}
for opt in self.__parser.option_list:
cmd_name = opt._long_opts[0][2:]
option_name = cmd_name.replace('-', '_')
self.option_to_cmds[option_name] = cmd_name
setattr(self, option_name, None)
self.attr = None
def finalize_options(self):
"""nothing to do here"""
pass
def run(self):
"""ensure tests are capable of being run, then
run nose.main with a reconstructed argument list"""
if getattr(self.distribution, 'use_2to3', False):
# If we run 2to3 we can not do this inplace:
# Ensure metadata is up-to-date
build_py = self.get_finalized_command('build_py')
build_py.inplace = 0
build_py.run()
bpy_cmd = self.get_finalized_command("build_py")
build_path = bpy_cmd.build_lib
# Build extensions
egg_info = self.get_finalized_command('egg_info')
egg_info.egg_base = build_path
egg_info.run()
build_ext = self.get_finalized_command('build_ext')
build_ext.inplace = 0
build_ext.run()
else:
self.run_command('egg_info')
# Build extensions in-place
build_ext = self.get_finalized_command('build_ext')
build_ext.inplace = 1
build_ext.run()
if self.distribution.install_requires:
self.distribution.fetch_build_eggs(
self.distribution.install_requires)
if self.distribution.tests_require:
self.distribution.fetch_build_eggs(
self.distribution.tests_require)
ei_cmd = self.get_finalized_command("egg_info")
argv = ['nosetests', '--where', ei_cmd.egg_base]
for (option_name, cmd_name) in self.option_to_cmds.items():
if option_name in option_blacklist:
continue
value = getattr(self, option_name)
if value is not None:
argv.extend(
self.cfgToArg(option_name.replace('_', '-'), value))
TestProgram(argv=argv, config=self.__config)
def cfgToArg(self, optname, value):
argv = []
long_optname = '--' + optname
opt = self.__parser.get_option(long_optname)
if opt.action in ('store_true', 'store_false'):
if not flag(value):
raise ValueError("Invalid value '%s' for '%s'" % (
value, optname))
if _bool(value):
argv.append(long_optname)
else:
argv.extend([long_optname, value])
return argv
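`cfgToArg` above is the bridge between the `[nosetests]` setup.cfg section documented at the top of the module and a real argv: flag options carry a 1/0-style value in the config file and become bare switches (or are dropped), while everything else becomes `--opt value`. A self-contained sketch of that translation (hypothetical helper and flag set, not nose's API):

```python
# Hedged sketch of the setup.cfg -> argv translation: the FLAG_OPTIONS
# set stands in for "options whose optparse action is store_true/false".
FLAG_OPTIONS = {"with-coverage", "detailed-errors", "pdb"}  # assumed

def cfg_to_argv(items):
    argv = []
    for name, value in items:
        long_opt = "--" + name
        if name in FLAG_OPTIONS:
            if str(value).upper() in ("1", "T", "TRUE", "ON"):
                argv.append(long_opt)  # active flag: bare switch
            # inactive flags are simply omitted from argv
        else:
            argv.extend([long_opt, str(value)])  # option with argument
    return argv
```

For example, `[("verbosity", "2"), ("with-coverage", "1"), ("pdb", "0")]` becomes `["--verbosity", "2", "--with-coverage"]`: the active flag survives as a switch, the inactive one disappears.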

lib/spack/external/nose/config.py (vendored, new file)
@@ -0,0 +1,661 @@
import logging
import optparse
import os
import re
import sys
import ConfigParser
from optparse import OptionParser
from nose.util import absdir, tolist
from nose.plugins.manager import NoPlugins
from warnings import warn, filterwarnings
log = logging.getLogger(__name__)
# not allowed in config files
option_blacklist = ['help', 'verbose']
config_files = [
# Linux users will prefer this
"~/.noserc",
# Windows users will prefer this
"~/nose.cfg"
]
# platforms on which the exe check defaults to off
# Windows and IronPython
exe_allowed_platforms = ('win32', 'cli')
filterwarnings("always", category=DeprecationWarning,
module=r'(.*\.)?nose\.config')
class NoSuchOptionError(Exception):
def __init__(self, name):
Exception.__init__(self, name)
self.name = name
class ConfigError(Exception):
pass
class ConfiguredDefaultsOptionParser(object):
"""
Handler for options from commandline and config files.
"""
def __init__(self, parser, config_section, error=None, file_error=None):
self._parser = parser
self._config_section = config_section
if error is None:
error = self._parser.error
self._error = error
if file_error is None:
file_error = lambda msg, **kw: error(msg)
self._file_error = file_error
def _configTuples(self, cfg, filename):
config = []
if self._config_section in cfg.sections():
for name, value in cfg.items(self._config_section):
config.append((name, value, filename))
return config
def _readFromFilenames(self, filenames):
config = []
for filename in filenames:
cfg = ConfigParser.RawConfigParser()
try:
cfg.read(filename)
except ConfigParser.Error, exc:
raise ConfigError("Error reading config file %r: %s" %
(filename, str(exc)))
config.extend(self._configTuples(cfg, filename))
return config
def _readFromFileObject(self, fh):
cfg = ConfigParser.RawConfigParser()
try:
filename = fh.name
except AttributeError:
filename = '<???>'
try:
cfg.readfp(fh)
except ConfigParser.Error, exc:
raise ConfigError("Error reading config file %r: %s" %
(filename, str(exc)))
return self._configTuples(cfg, filename)
def _readConfiguration(self, config_files):
try:
config_files.readline
except AttributeError:
filename_or_filenames = config_files
if isinstance(filename_or_filenames, basestring):
filenames = [filename_or_filenames]
else:
filenames = filename_or_filenames
config = self._readFromFilenames(filenames)
else:
fh = config_files
config = self._readFromFileObject(fh)
return config
def _processConfigValue(self, name, value, values, parser):
opt_str = '--' + name
option = parser.get_option(opt_str)
if option is None:
raise NoSuchOptionError(name)
else:
option.process(opt_str, value, values, parser)
def _applyConfigurationToValues(self, parser, config, values):
for name, value, filename in config:
if name in option_blacklist:
continue
try:
self._processConfigValue(name, value, values, parser)
except NoSuchOptionError, exc:
self._file_error(
"Error reading config file %r: "
"no such option %r" % (filename, exc.name),
name=name, filename=filename)
except optparse.OptionValueError, exc:
msg = str(exc).replace('--' + name, repr(name), 1)
self._file_error("Error reading config file %r: "
"%s" % (filename, msg),
name=name, filename=filename)
def parseArgsAndConfigFiles(self, args, config_files):
values = self._parser.get_default_values()
try:
config = self._readConfiguration(config_files)
except ConfigError, exc:
self._error(str(exc))
else:
try:
self._applyConfigurationToValues(self._parser, config, values)
except ConfigError, exc:
self._error(str(exc))
return self._parser.parse_args(args, values)
class Config(object):
"""nose configuration.
Instances of Config are used throughout nose to configure
behavior, including plugin lists. Here are the default values for
all config keys::
self.env = env = kw.pop('env', {})
self.args = ()
self.testMatch = re.compile(r'(?:^|[\\b_\\.%s-])[Tt]est' % os.sep)
self.addPaths = not env.get('NOSE_NOPATH', False)
self.configSection = 'nosetests'
self.debug = env.get('NOSE_DEBUG')
self.debugLog = env.get('NOSE_DEBUG_LOG')
self.exclude = None
self.getTestCaseNamesCompat = False
self.includeExe = env.get('NOSE_INCLUDE_EXE',
sys.platform in exe_allowed_platforms)
self.ignoreFiles = (re.compile(r'^\.'),
re.compile(r'^_'),
re.compile(r'^setup\.py$')
)
self.include = None
self.loggingConfig = None
self.logStream = sys.stderr
self.options = NoOptions()
self.parser = None
self.plugins = NoPlugins()
self.srcDirs = ('lib', 'src')
self.runOnInit = True
self.stopOnError = env.get('NOSE_STOP', False)
self.stream = sys.stderr
self.testNames = ()
self.verbosity = int(env.get('NOSE_VERBOSE', 1))
self.where = ()
self.py3where = ()
self.workingDir = None
"""
def __init__(self, **kw):
self.env = env = kw.pop('env', {})
self.args = ()
self.testMatchPat = env.get('NOSE_TESTMATCH',
r'(?:^|[\b_\.%s-])[Tt]est' % os.sep)
self.testMatch = re.compile(self.testMatchPat)
self.addPaths = not env.get('NOSE_NOPATH', False)
self.configSection = 'nosetests'
self.debug = env.get('NOSE_DEBUG')
self.debugLog = env.get('NOSE_DEBUG_LOG')
self.exclude = None
self.getTestCaseNamesCompat = False
self.includeExe = env.get('NOSE_INCLUDE_EXE',
sys.platform in exe_allowed_platforms)
self.ignoreFilesDefaultStrings = [r'^\.',
r'^_',
r'^setup\.py$',
]
self.ignoreFiles = map(re.compile, self.ignoreFilesDefaultStrings)
self.include = None
self.loggingConfig = None
self.logStream = sys.stderr
self.options = NoOptions()
self.parser = None
self.plugins = NoPlugins()
self.srcDirs = ('lib', 'src')
self.runOnInit = True
self.stopOnError = env.get('NOSE_STOP', False)
self.stream = sys.stderr
self.testNames = []
self.verbosity = int(env.get('NOSE_VERBOSE', 1))
self.where = ()
self.py3where = ()
self.workingDir = os.getcwd()
self.traverseNamespace = False
self.firstPackageWins = False
self.parserClass = OptionParser
self.worker = False
self._default = self.__dict__.copy()
self.update(kw)
self._orig = self.__dict__.copy()
def __getstate__(self):
state = self.__dict__.copy()
del state['stream']
del state['_orig']
del state['_default']
del state['env']
del state['logStream']
# FIXME remove plugins, have only plugin manager class
state['plugins'] = self.plugins.__class__
return state
def __setstate__(self, state):
plugincls = state.pop('plugins')
self.update(state)
self.worker = True
# FIXME won't work for static plugin lists
self.plugins = plugincls()
self.plugins.loadPlugins()
# needed so .can_configure gets set appropriately
dummy_parser = self.parserClass()
self.plugins.addOptions(dummy_parser, {})
self.plugins.configure(self.options, self)
def __repr__(self):
d = self.__dict__.copy()
# don't expose env, could include sensitive info
d['env'] = {}
keys = [ k for k in d.keys()
if not k.startswith('_') ]
keys.sort()
return "Config(%s)" % ', '.join([ '%s=%r' % (k, d[k])
for k in keys ])
__str__ = __repr__
def _parseArgs(self, argv, cfg_files):
def warn_sometimes(msg, name=None, filename=None):
if (hasattr(self.plugins, 'excludedOption') and
self.plugins.excludedOption(name)):
msg = ("Option %r in config file %r ignored: "
"excluded by runtime environment" %
(name, filename))
warn(msg, RuntimeWarning)
else:
raise ConfigError(msg)
parser = ConfiguredDefaultsOptionParser(
self.getParser(), self.configSection, file_error=warn_sometimes)
return parser.parseArgsAndConfigFiles(argv[1:], cfg_files)
def configure(self, argv=None, doc=None):
"""Configure the nose running environment. Execute configure before
collecting tests with nose.TestCollector to enable output capture and
other features.
"""
env = self.env
if argv is None:
argv = sys.argv
cfg_files = getattr(self, 'files', [])
options, args = self._parseArgs(argv, cfg_files)
# If -c --config has been specified on command line,
# load those config files and reparse
if getattr(options, 'files', []):
options, args = self._parseArgs(argv, options.files)
self.options = options
if args:
self.testNames = args
if options.testNames is not None:
self.testNames.extend(tolist(options.testNames))
if options.py3where is not None:
if sys.version_info >= (3,):
options.where = options.py3where
# `where` is an append action, so it can't have a default value
# in the parser, or that default will always be in the list
if not options.where:
options.where = env.get('NOSE_WHERE', None)
# include and exclude also
if not options.ignoreFiles:
options.ignoreFiles = env.get('NOSE_IGNORE_FILES', [])
if not options.include:
options.include = env.get('NOSE_INCLUDE', [])
if not options.exclude:
options.exclude = env.get('NOSE_EXCLUDE', [])
self.addPaths = options.addPaths
self.stopOnError = options.stopOnError
self.verbosity = options.verbosity
self.includeExe = options.includeExe
self.traverseNamespace = options.traverseNamespace
self.debug = options.debug
self.debugLog = options.debugLog
self.loggingConfig = options.loggingConfig
self.firstPackageWins = options.firstPackageWins
self.configureLogging()
if not options.byteCompile:
sys.dont_write_bytecode = True
if options.where is not None:
self.configureWhere(options.where)
if options.testMatch:
self.testMatch = re.compile(options.testMatch)
if options.ignoreFiles:
self.ignoreFiles = map(re.compile, tolist(options.ignoreFiles))
log.info("Ignoring files matching %s", options.ignoreFiles)
else:
log.info("Ignoring files matching %s", self.ignoreFilesDefaultStrings)
if options.include:
self.include = map(re.compile, tolist(options.include))
log.info("Including tests matching %s", options.include)
if options.exclude:
self.exclude = map(re.compile, tolist(options.exclude))
log.info("Excluding tests matching %s", options.exclude)
# When listing plugins we don't want to run them
if not options.showPlugins:
self.plugins.configure(options, self)
self.plugins.begin()
def configureLogging(self):
"""Configure logging for nose, or optionally other packages. Any logger
name may be set with the debug option, and that logger will be set to
debug level and be assigned the same handler as the nose loggers, unless
it already has a handler.
"""
if self.loggingConfig:
from logging.config import fileConfig
fileConfig(self.loggingConfig)
return
format = logging.Formatter('%(name)s: %(levelname)s: %(message)s')
if self.debugLog:
handler = logging.FileHandler(self.debugLog)
else:
handler = logging.StreamHandler(self.logStream)
handler.setFormatter(format)
logger = logging.getLogger('nose')
logger.propagate = 0
# only add our default handler if there isn't already one there
# this avoids annoying duplicate log messages.
found = False
if self.debugLog:
debugLogAbsPath = os.path.abspath(self.debugLog)
for h in logger.handlers:
if type(h) == logging.FileHandler and \
h.baseFilename == debugLogAbsPath:
found = True
else:
for h in logger.handlers:
if type(h) == logging.StreamHandler and \
h.stream == self.logStream:
found = True
if not found:
logger.addHandler(handler)
# default level
lvl = logging.WARNING
if self.verbosity >= 5:
lvl = 0
elif self.verbosity >= 4:
lvl = logging.DEBUG
elif self.verbosity >= 3:
lvl = logging.INFO
logger.setLevel(lvl)
# individual overrides
if self.debug:
# no blanks
debug_loggers = [ name for name in self.debug.split(',')
if name ]
for logger_name in debug_loggers:
l = logging.getLogger(logger_name)
l.setLevel(logging.DEBUG)
if not l.handlers and not logger_name.startswith('nose'):
l.addHandler(handler)
def configureWhere(self, where):
"""Configure the working directory or directories for the test run.
"""
from nose.importer import add_path
self.workingDir = None
where = tolist(where)
warned = False
for path in where:
if not self.workingDir:
abs_path = absdir(path)
if abs_path is None:
raise ValueError("Working directory '%s' not found, or "
"not a directory" % path)
log.info("Set working dir to %s", abs_path)
self.workingDir = abs_path
if self.addPaths and \
os.path.exists(os.path.join(abs_path, '__init__.py')):
log.info("Working directory %s is a package; "
"adding to sys.path" % abs_path)
add_path(abs_path)
continue
if not warned:
warn("Use of multiple -w arguments is deprecated and "
"support may be removed in a future release. You can "
"get the same behavior by passing directories without "
"the -w argument on the command line, or by using the "
"--tests argument in a configuration file.",
DeprecationWarning)
warned = True
self.testNames.append(path)
def default(self):
"""Reset all config values to defaults.
"""
self.__dict__.update(self._default)
def getParser(self, doc=None):
"""Get the command line option parser.
"""
if self.parser:
return self.parser
env = self.env
parser = self.parserClass(doc)
parser.add_option(
"-V","--version", action="store_true",
dest="version", default=False,
help="Output nose version and exit")
parser.add_option(
"-p", "--plugins", action="store_true",
dest="showPlugins", default=False,
help="Output list of available plugins and exit. Combine with "
"higher verbosity for greater detail")
parser.add_option(
"-v", "--verbose",
action="count", dest="verbosity",
default=self.verbosity,
help="Be more verbose. [NOSE_VERBOSE]")
parser.add_option(
"--verbosity", action="store", dest="verbosity",
metavar='VERBOSITY',
type="int", help="Set verbosity; --verbosity=2 is "
"the same as -v")
parser.add_option(
"-q", "--quiet", action="store_const", const=0, dest="verbosity",
help="Be less verbose")
parser.add_option(
"-c", "--config", action="append", dest="files",
metavar="FILES",
help="Load configuration from config file(s). May be specified "
"multiple times; in that case, all config files will be "
"loaded and combined")
parser.add_option(
"-w", "--where", action="append", dest="where",
metavar="WHERE",
help="Look for tests in this directory. "
"May be specified multiple times. The first directory passed "
"will be used as the working directory, in place of the current "
"working directory, which is the default. Others will be added "
"to the list of tests to execute. [NOSE_WHERE]"
)
parser.add_option(
"--py3where", action="append", dest="py3where",
metavar="PY3WHERE",
help="Look for tests in this directory under Python 3.x. "
"Functions the same as 'where', but only applies if running under "
"Python 3.x or above. Note that, if present under 3.x, this "
"option completely replaces any directories specified with "
"'where', so the 'where' option becomes ineffective. "
"[NOSE_PY3WHERE]"
)
parser.add_option(
"-m", "--match", "--testmatch", action="store",
dest="testMatch", metavar="REGEX",
help="Files, directories, function names, and class names "
"that match this regular expression are considered tests. "
"Default: %s [NOSE_TESTMATCH]" % self.testMatchPat,
default=self.testMatchPat)
parser.add_option(
"--tests", action="store", dest="testNames", default=None,
metavar='NAMES',
help="Run these tests (comma-separated list). This argument is "
"useful mainly from configuration files; on the command line, "
"just pass the tests to run as additional arguments with no "
"switch.")
parser.add_option(
"-l", "--debug", action="store",
dest="debug", default=self.debug,
help="Activate debug logging for one or more systems. "
"Available debug loggers: nose, nose.importer, "
"nose.inspector, nose.plugins, nose.result and "
"nose.selector. Separate multiple names with a comma.")
parser.add_option(
"--debug-log", dest="debugLog", action="store",
default=self.debugLog, metavar="FILE",
help="Log debug messages to this file "
"(default: sys.stderr)")
parser.add_option(
"--logging-config", "--log-config",
dest="loggingConfig", action="store",
default=self.loggingConfig, metavar="FILE",
help="Load logging config from this file -- bypasses all other"
" logging config settings.")
parser.add_option(
"-I", "--ignore-files", action="append", dest="ignoreFiles",
metavar="REGEX",
help="Completely ignore any file that matches this regular "
"expression. Takes precedence over any other settings or "
"plugins. "
"Specifying this option will replace the default setting. "
"Specify this option multiple times "
"to add more regular expressions [NOSE_IGNORE_FILES]")
parser.add_option(
"-e", "--exclude", action="append", dest="exclude",
metavar="REGEX",
help="Don't run tests that match regular "
"expression [NOSE_EXCLUDE]")
parser.add_option(
"-i", "--include", action="append", dest="include",
metavar="REGEX",
help="This regular expression will be applied to files, "
"directories, function names, and class names for a chance "
"to include additional tests that do not match TESTMATCH. "
"Specify this option multiple times "
"to add more regular expressions [NOSE_INCLUDE]")
parser.add_option(
"-x", "--stop", action="store_true", dest="stopOnError",
default=self.stopOnError,
help="Stop running tests after the first error or failure")
parser.add_option(
"-P", "--no-path-adjustment", action="store_false",
dest="addPaths",
default=self.addPaths,
help="Don't make any changes to sys.path when "
"loading tests [NOSE_NOPATH]")
parser.add_option(
"--exe", action="store_true", dest="includeExe",
default=self.includeExe,
help="Look for tests in python modules that are "
"executable. Normal behavior is to exclude executable "
"modules, since they may not be import-safe "
"[NOSE_INCLUDE_EXE]")
parser.add_option(
"--noexe", action="store_false", dest="includeExe",
help="DO NOT look for tests in python modules that are "
"executable. (The default on the windows platform is to "
"do so.)")
parser.add_option(
"--traverse-namespace", action="store_true",
default=self.traverseNamespace, dest="traverseNamespace",
help="Traverse through all path entries of a namespace package")
parser.add_option(
"--first-package-wins", "--first-pkg-wins", "--1st-pkg-wins",
action="store_true", default=False, dest="firstPackageWins",
help="nose's importer will normally evict a package from sys."
"modules if it sees a package with the same name in a different "
"location. Set this option to disable that behavior.")
parser.add_option(
"--no-byte-compile",
action="store_false", default=True, dest="byteCompile",
help="Prevent nose from byte-compiling the source into .pyc files "
"while nose is scanning for and running tests.")
self.plugins.loadPlugins()
self.pluginOpts(parser)
self.parser = parser
return parser
def help(self, doc=None):
"""Return the generated help message
"""
return self.getParser(doc).format_help()
def pluginOpts(self, parser):
self.plugins.addOptions(parser, self.env)
def reset(self):
self.__dict__.update(self._orig)
def todict(self):
return self.__dict__.copy()
def update(self, d):
self.__dict__.update(d)
class NoOptions(object):
"""Options container that returns None for all options.
"""
def __getstate__(self):
return {}
def __setstate__(self, state):
pass
def __getnewargs__(self):
return ()
def __nonzero__(self):
return False
def user_config_files():
"""Return path to any existing user config files
"""
return filter(os.path.exists,
map(os.path.expanduser, config_files))
def all_config_files():
"""Return path to any existing user config files, plus any setup.cfg
in the current working directory.
"""
user = user_config_files()
if os.path.exists('setup.cfg'):
return user + ['setup.cfg']
return user
# used when parsing config files
def flag(val):
"""Does the value look like an on/off flag?"""
if val == 1:
return True
elif val == 0:
return False
val = str(val)
if len(val) > 5:
return False
return val.upper() in ('1', '0', 'F', 'T', 'TRUE', 'FALSE', 'ON', 'OFF')
def _bool(val):
return str(val).upper() in ('1', 'T', 'TRUE', 'ON')

lib/spack/external/nose/core.py (vendored, new file)
@@ -0,0 +1,341 @@
"""Implements nose test program and collector.
"""
from __future__ import generators
import logging
import os
import sys
import time
import unittest
from nose.config import Config, all_config_files
from nose.loader import defaultTestLoader
from nose.plugins.manager import PluginManager, DefaultPluginManager, \
RestrictedPluginManager
from nose.result import TextTestResult
from nose.suite import FinalizingSuiteWrapper
from nose.util import isclass, tolist
log = logging.getLogger('nose.core')
compat_24 = sys.version_info >= (2, 4)
__all__ = ['TestProgram', 'main', 'run', 'run_exit', 'runmodule', 'collector',
'TextTestRunner']
class TextTestRunner(unittest.TextTestRunner):
"""Test runner that uses nose's TextTestResult to enable errorClasses,
as well as providing hooks for plugins to override or replace the test
output stream, results, and the test case itself.
"""
def __init__(self, stream=sys.stderr, descriptions=1, verbosity=1,
config=None):
if config is None:
config = Config()
self.config = config
unittest.TextTestRunner.__init__(self, stream, descriptions, verbosity)
def _makeResult(self):
return TextTestResult(self.stream,
self.descriptions,
self.verbosity,
self.config)
def run(self, test):
"""Overrides to provide plugin hooks and defer all output to
the test result class.
"""
wrapper = self.config.plugins.prepareTest(test)
if wrapper is not None:
test = wrapper
# plugins can decorate or capture the output stream
wrapped = self.config.plugins.setOutputStream(self.stream)
if wrapped is not None:
self.stream = wrapped
result = self._makeResult()
start = time.time()
try:
test(result)
except KeyboardInterrupt:
pass
stop = time.time()
result.printErrors()
result.printSummary(start, stop)
self.config.plugins.finalize(result)
return result
class TestProgram(unittest.TestProgram):
"""Collect and run tests, returning success or failure.
The arguments to TestProgram() are the same as to
:func:`main()` and :func:`run()`:
* module: All tests are in this module (default: None)
* defaultTest: Tests to load (default: '.')
* argv: Command line arguments (default: None; sys.argv is read)
* testRunner: Test runner instance (default: None)
* testLoader: Test loader instance (default: None)
* env: Environment; ignored if config is provided (default: None;
os.environ is read)
* config: :class:`nose.config.Config` instance (default: None)
* suite: Suite or list of tests to run (default: None). Passing a
suite or lists of tests will bypass all test discovery and
loading. *ALSO NOTE* that if you pass a unittest.TestSuite
instance as the suite, context fixtures at the class, module and
package level will not be used, and many plugin hooks will not
be called. If you want normal nose behavior, either pass a list
of tests, or a fully-configured :class:`nose.suite.ContextSuite`.
* exit: Exit after running tests and printing report (default: True)
* plugins: List of plugins to use; ignored if config is provided
(default: load plugins with DefaultPluginManager)
* addplugins: List of **extra** plugins to use. Pass a list of plugin
instances in this argument to make custom plugins available while
still using the DefaultPluginManager.
"""
verbosity = 1
def __init__(self, module=None, defaultTest='.', argv=None,
testRunner=None, testLoader=None, env=None, config=None,
suite=None, exit=True, plugins=None, addplugins=None):
if env is None:
env = os.environ
if config is None:
config = self.makeConfig(env, plugins)
if addplugins:
config.plugins.addPlugins(extraplugins=addplugins)
self.config = config
self.suite = suite
self.exit = exit
extra_args = {}
version = sys.version_info[0:2]
if version >= (2,7) and version != (3,0):
extra_args['exit'] = exit
unittest.TestProgram.__init__(
self, module=module, defaultTest=defaultTest,
argv=argv, testRunner=testRunner, testLoader=testLoader,
**extra_args)
def getAllConfigFiles(self, env=None):
env = env or {}
if env.get('NOSE_IGNORE_CONFIG_FILES', False):
return []
else:
return all_config_files()
def makeConfig(self, env, plugins=None):
"""Load a Config, pre-filled with user config files if any are
found.
"""
cfg_files = self.getAllConfigFiles(env)
if plugins:
manager = PluginManager(plugins=plugins)
else:
manager = DefaultPluginManager()
return Config(
env=env, files=cfg_files, plugins=manager)
def parseArgs(self, argv):
"""Parse argv and env and configure running environment.
"""
self.config.configure(argv, doc=self.usage())
log.debug("configured %s", self.config)
# quick outs: version, plugins (optparse would have already
# caught and exited on help)
if self.config.options.version:
from nose import __version__
sys.stdout = sys.__stdout__
print "%s version %s" % (os.path.basename(sys.argv[0]), __version__)
sys.exit(0)
if self.config.options.showPlugins:
self.showPlugins()
sys.exit(0)
if self.testLoader is None:
self.testLoader = defaultTestLoader(config=self.config)
elif isclass(self.testLoader):
self.testLoader = self.testLoader(config=self.config)
plug_loader = self.config.plugins.prepareTestLoader(self.testLoader)
if plug_loader is not None:
self.testLoader = plug_loader
log.debug("test loader is %s", self.testLoader)
# FIXME if self.module is a string, add it to self.testNames? not sure
if self.config.testNames:
self.testNames = self.config.testNames
else:
self.testNames = tolist(self.defaultTest)
log.debug('defaultTest %s', self.defaultTest)
log.debug('Test names are %s', self.testNames)
if self.config.workingDir is not None:
os.chdir(self.config.workingDir)
self.createTests()
def createTests(self):
"""Create the tests to run. If a self.suite
is set, then that suite will be used. Otherwise, tests will be
loaded from the given test names (self.testNames) using the
test loader.
"""
log.debug("createTests called with %s", self.suite)
if self.suite is not None:
# We were given an explicit suite to run. Make sure it's
# loaded and wrapped correctly.
self.test = self.testLoader.suiteClass(self.suite)
else:
self.test = self.testLoader.loadTestsFromNames(self.testNames)
def runTests(self):
"""Run Tests. Returns true on success, false on failure, and sets
self.success to the same value.
"""
log.debug("runTests called")
if self.testRunner is None:
self.testRunner = TextTestRunner(stream=self.config.stream,
verbosity=self.config.verbosity,
config=self.config)
plug_runner = self.config.plugins.prepareTestRunner(self.testRunner)
if plug_runner is not None:
self.testRunner = plug_runner
result = self.testRunner.run(self.test)
self.success = result.wasSuccessful()
if self.exit:
sys.exit(not self.success)
return self.success
def showPlugins(self):
"""Print list of available plugins.
"""
import textwrap
class DummyParser:
def __init__(self):
self.options = []
def add_option(self, *arg, **kw):
self.options.append((arg, kw.pop('help', '')))
v = self.config.verbosity
self.config.plugins.sort()
for p in self.config.plugins:
print "Plugin %s" % p.name
if v >= 2:
print " score: %s" % p.score
print '\n'.join(textwrap.wrap(p.help().strip(),
initial_indent=' ',
subsequent_indent=' '))
if v >= 3:
parser = DummyParser()
p.addOptions(parser)
if len(parser.options):
print
print " Options:"
for opts, help in parser.options:
print ' %s' % (', '.join(opts))
if help:
print '\n'.join(
textwrap.wrap(help.strip(),
initial_indent=' ',
subsequent_indent=' '))
print
def usage(cls):
import nose
try:
ld = nose.__loader__
text = ld.get_data(os.path.join(
os.path.dirname(__file__), 'usage.txt'))
except AttributeError:
f = open(os.path.join(
os.path.dirname(__file__), 'usage.txt'), 'r')
try:
text = f.read()
finally:
f.close()
# Ensure that we return str, not bytes.
if not isinstance(text, str):
text = text.decode('utf-8')
return text
usage = classmethod(usage)
# backwards compatibility
run_exit = main = TestProgram
def run(*arg, **kw):
"""Collect and run tests, returning success or failure.
The arguments to `run()` are the same as to `main()`:
* module: All tests are in this module (default: None)
* defaultTest: Tests to load (default: '.')
* argv: Command line arguments (default: None; sys.argv is read)
* testRunner: Test runner instance (default: None)
* testLoader: Test loader instance (default: None)
* env: Environment; ignored if config is provided (default: None;
os.environ is read)
* config: :class:`nose.config.Config` instance (default: None)
* suite: Suite or list of tests to run (default: None). Passing a
suite or lists of tests will bypass all test discovery and
loading. *ALSO NOTE* that if you pass a unittest.TestSuite
instance as the suite, context fixtures at the class, module and
package level will not be used, and many plugin hooks will not
be called. If you want normal nose behavior, either pass a list
of tests, or a fully-configured :class:`nose.suite.ContextSuite`.
* plugins: List of plugins to use; ignored if config is provided
(default: load plugins with DefaultPluginManager)
* addplugins: List of **extra** plugins to use. Pass a list of plugin
instances in this argument to make custom plugins available while
still using the DefaultPluginManager.
With the exception that the ``exit`` argument is always set
to False.
"""
kw['exit'] = False
return TestProgram(*arg, **kw).success
def runmodule(name='__main__', **kw):
"""Collect and run tests in a single module only. Defaults to running
tests in __main__. Additional arguments to TestProgram may be passed
as keyword arguments.
"""
main(defaultTest=name, **kw)
def collector():
"""TestSuite replacement entry point. Use anywhere you might use a
unittest.TestSuite. The collector will, by default, load options from
all config files and execute loader.loadTestsFromNames() on the
configured testNames, or '.' if no testNames are configured.
"""
# plugins that implement any of these methods are disabled, since
# we don't control the test runner and won't be able to run them
# finalize() is also not called, but plugins that use it aren't disabled,
# because capture needs it.
setuptools_incompat = ('report', 'prepareTest',
'prepareTestLoader', 'prepareTestRunner',
'setOutputStream')
plugins = RestrictedPluginManager(exclude=setuptools_incompat)
conf = Config(files=all_config_files(),
plugins=plugins)
conf.configure(argv=['collector'])
loader = defaultTestLoader(conf)
if conf.testNames:
suite = loader.loadTestsFromNames(conf.testNames)
else:
suite = loader.loadTestsFromNames(('.',))
return FinalizingSuiteWrapper(suite, plugins.finalize)
if __name__ == '__main__':
main()
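(Note, not part of the vendored file: the `run()` wrapper above returns success as a boolean instead of calling `sys.exit()`, by forcing `exit=False` on `TestProgram`. The same in-process pattern can be sketched with the stdlib runner alone; the `_Smoke` test case below is a hypothetical placeholder.)

```python
import os
import unittest

class _Smoke(unittest.TestCase):
    # Trivial test case standing in for real collected tests.
    def test_truth(self):
        self.assertTrue(True)

# Mirror nose.run(): run a suite in-process, suppress output, and
# report success as a bool rather than exiting the interpreter.
suite = unittest.TestLoader().loadTestsFromTestCase(_Smoke)
result = unittest.TextTestRunner(stream=open(os.devnull, 'w')).run(suite)
success = result.wasSuccessful()
```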

9
lib/spack/external/nose/exc.py vendored Normal file

@@ -0,0 +1,9 @@
"""Exceptions for marking tests as skipped or deprecated.
This module exists to provide backwards compatibility with previous
versions of nose where skipped and deprecated tests were core
functionality, rather than being provided by plugins. It may be
removed in a future release.
"""
from nose.plugins.skip import SkipTest
from nose.plugins.deprecated import DeprecatedTest


@@ -0,0 +1,3 @@
"""
External or vendor files
"""

2272
lib/spack/external/nose/ext/dtcompat.py vendored Normal file

File diff suppressed because it is too large

42
lib/spack/external/nose/failure.py vendored Normal file

@@ -0,0 +1,42 @@
import logging
import unittest
from traceback import format_tb
from nose.pyversion import is_base_exception
log = logging.getLogger(__name__)
__all__ = ['Failure']
class Failure(unittest.TestCase):
"""Unloadable or unexecutable test.
A Failure case is placed in a test suite to indicate the presence of a
test that could not be loaded or executed. A common example is a test
module that fails to import.
"""
__test__ = False # do not collect
def __init__(self, exc_class, exc_val, tb=None, address=None):
log.debug("A failure! %s %s %s", exc_class, exc_val, format_tb(tb))
self.exc_class = exc_class
self.exc_val = exc_val
self.tb = tb
self._address = address
unittest.TestCase.__init__(self)
def __str__(self):
return "Failure: %s (%s)" % (
getattr(self.exc_class, '__name__', self.exc_class), self.exc_val)
def address(self):
return self._address
def runTest(self):
if self.tb is not None:
if is_base_exception(self.exc_val):
raise self.exc_val, None, self.tb
raise self.exc_class, self.exc_val, self.tb
else:
raise self.exc_class(self.exc_val)

167
lib/spack/external/nose/importer.py vendored Normal file

@@ -0,0 +1,167 @@
"""Implements an importer that looks only in specific path (ignoring
sys.path), and uses a per-path cache in addition to sys.modules. This is
necessary because test modules in different directories frequently have the
same names, which means that the first loaded would mask the rest when using
the builtin importer.
"""
import logging
import os
import sys
from nose.config import Config
from imp import find_module, load_module, acquire_lock, release_lock
log = logging.getLogger(__name__)
try:
_samefile = os.path.samefile
except AttributeError:
def _samefile(src, dst):
return (os.path.normcase(os.path.realpath(src)) ==
os.path.normcase(os.path.realpath(dst)))
class Importer(object):
"""An importer class that does only path-specific imports. That
is, the given module is not searched for on sys.path, but only at
the path or in the directory specified.
"""
def __init__(self, config=None):
if config is None:
config = Config()
self.config = config
def importFromPath(self, path, fqname):
"""Import a dotted-name package whose tail is at path. In other words,
given foo.bar and path/to/foo/bar.py, import foo from path/to/foo then
bar from path/to/foo/bar, returning bar.
"""
# find the base dir of the package
path_parts = os.path.normpath(os.path.abspath(path)).split(os.sep)
name_parts = fqname.split('.')
if path_parts[-1] == '__init__.py':
path_parts.pop()
path_parts = path_parts[:-(len(name_parts))]
dir_path = os.sep.join(path_parts)
# then import fqname starting from that dir
return self.importFromDir(dir_path, fqname)
def importFromDir(self, dir, fqname):
"""Import a module *only* from path, ignoring sys.path and
reloading if the version in sys.modules is not the one we want.
"""
dir = os.path.normpath(os.path.abspath(dir))
log.debug("Import %s from %s", fqname, dir)
# FIXME reimplement local per-dir cache?
# special case for __main__
if fqname == '__main__':
return sys.modules[fqname]
if self.config.addPaths:
add_path(dir, self.config)
path = [dir]
parts = fqname.split('.')
part_fqname = ''
mod = parent = fh = None
for part in parts:
if part_fqname == '':
part_fqname = part
else:
part_fqname = "%s.%s" % (part_fqname, part)
try:
acquire_lock()
log.debug("find module part %s (%s) in %s",
part, part_fqname, path)
fh, filename, desc = find_module(part, path)
old = sys.modules.get(part_fqname)
if old is not None:
# test modules frequently have name overlap; make sure
# we get a fresh copy of anything we are trying to load
# from a new path
log.debug("sys.modules has %s as %s", part_fqname, old)
if (self.sameModule(old, filename)
or (self.config.firstPackageWins and
getattr(old, '__path__', None))):
mod = old
else:
del sys.modules[part_fqname]
mod = load_module(part_fqname, fh, filename, desc)
else:
mod = load_module(part_fqname, fh, filename, desc)
finally:
if fh:
fh.close()
release_lock()
if parent:
setattr(parent, part, mod)
if hasattr(mod, '__path__'):
path = mod.__path__
parent = mod
return mod
def _dirname_if_file(self, filename):
# We only take the dirname if we have a path to a non-dir,
# because taking the dirname of a symlink to a directory does not
# give the actual directory parent.
if os.path.isdir(filename):
return filename
else:
return os.path.dirname(filename)
def sameModule(self, mod, filename):
mod_paths = []
if hasattr(mod, '__path__'):
for path in mod.__path__:
mod_paths.append(self._dirname_if_file(path))
elif hasattr(mod, '__file__'):
mod_paths.append(self._dirname_if_file(mod.__file__))
else:
# builtin or other module-like object that
# doesn't have __file__; must be new
return False
new_path = self._dirname_if_file(filename)
for mod_path in mod_paths:
log.debug(
"module already loaded? mod: %s new: %s",
mod_path, new_path)
if _samefile(mod_path, new_path):
return True
return False
def add_path(path, config=None):
"""Ensure that the path, or the root of the current package (if
path is in a package), is in sys.path.
"""
# FIXME add any src-looking dirs seen too... need to get config for that
log.debug('Add path %s' % path)
if not path:
return []
added = []
parent = os.path.dirname(path)
if (parent
and os.path.exists(os.path.join(path, '__init__.py'))):
added.extend(add_path(parent, config))
elif not path in sys.path:
log.debug("insert %s into sys.path", path)
sys.path.insert(0, path)
added.append(path)
if config and config.srcDirs:
for dirname in config.srcDirs:
dirpath = os.path.join(path, dirname)
if os.path.isdir(dirpath):
sys.path.insert(0, dirpath)
added.append(dirpath)
return added
def remove_path(path):
log.debug('Remove path %s' % path)
if path in sys.path:
sys.path.remove(path)

207
lib/spack/external/nose/inspector.py vendored Normal file

@@ -0,0 +1,207 @@
"""Simple traceback introspection. Used to add additional information to
AssertionErrors in tests, so that failure messages may be more informative.
"""
import inspect
import logging
import re
import sys
import textwrap
import tokenize
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
log = logging.getLogger(__name__)
def inspect_traceback(tb):
"""Inspect a traceback and its frame, returning source for the expression
where the exception was raised, with simple variable replacement performed
and the line on which the exception was raised marked with '>>'
"""
log.debug('inspect traceback %s', tb)
# we only want the innermost frame, where the exception was raised
while tb.tb_next:
tb = tb.tb_next
frame = tb.tb_frame
lines, exc_line = tbsource(tb)
# figure out the set of lines to grab.
inspect_lines, mark_line = find_inspectable_lines(lines, exc_line)
src = StringIO(textwrap.dedent(''.join(inspect_lines)))
exp = Expander(frame.f_locals, frame.f_globals)
while inspect_lines:
try:
for tok in tokenize.generate_tokens(src.readline):
exp(*tok)
except tokenize.TokenError, e:
# this can happen if our inspectable region happens to butt up
# against the end of a construct like a docstring with the closing
# """ on separate line
log.debug("Tokenizer error: %s", e)
inspect_lines.pop(0)
mark_line -= 1
src = StringIO(textwrap.dedent(''.join(inspect_lines)))
exp = Expander(frame.f_locals, frame.f_globals)
continue
break
padded = []
if exp.expanded_source:
exp_lines = exp.expanded_source.split('\n')
ep = 0
for line in exp_lines:
if ep == mark_line:
padded.append('>> ' + line)
else:
padded.append(' ' + line)
ep += 1
return '\n'.join(padded)
def tbsource(tb, context=6):
"""Get source from a traceback object.
A tuple of two things is returned: a list of lines of context from
the source code, and the index of the current line within that list.
The optional second argument specifies the number of lines of context
to return, which are centered around the current line.
.. Note ::
This is adapted from inspect.py in the python 2.4 standard library,
since a bug in the 2.3 version of inspect prevents it from correctly
locating source lines in a traceback frame.
"""
lineno = tb.tb_lineno
frame = tb.tb_frame
if context > 0:
start = lineno - 1 - context//2
log.debug("lineno: %s start: %s", lineno, start)
try:
lines, dummy = inspect.findsource(frame)
except IOError:
lines, index = [''], 0
else:
all_lines = lines
start = max(start, 1)
start = max(0, min(start, len(lines) - context))
lines = lines[start:start+context]
index = lineno - 1 - start
# python 2.5 compat: if previous line ends in a continuation,
# decrement start by 1 to match 2.4 behavior
if sys.version_info >= (2, 5) and index > 0:
while lines[index-1].strip().endswith('\\'):
start -= 1
lines = all_lines[start:start+context]
else:
lines, index = [''], 0
log.debug("tbsource lines '''%s''' around index %s", lines, index)
return (lines, index)
def find_inspectable_lines(lines, pos):
"""Find lines in home that are inspectable.
Walk back from the err line up to 3 lines, but don't walk back over
changes in indent level.
Walk forward up to 3 lines, counting \ separated lines as 1. Don't walk
over changes in indent level (unless part of an extended line)
"""
cnt = re.compile(r'\\[\s\n]*$')
df = re.compile(r':[\s\n]*$')
ind = re.compile(r'^(\s*)')
toinspect = []
home = lines[pos]
home_indent = ind.match(home).groups()[0]
before = lines[max(pos-3, 0):pos]
before.reverse()
after = lines[pos+1:min(pos+4, len(lines))]
for line in before:
if ind.match(line).groups()[0] == home_indent:
toinspect.append(line)
else:
break
toinspect.reverse()
toinspect.append(home)
home_pos = len(toinspect)-1
continued = cnt.search(home)
for line in after:
if ((continued or ind.match(line).groups()[0] == home_indent)
and not df.search(line)):
toinspect.append(line)
continued = cnt.search(line)
else:
break
log.debug("Inspecting lines '''%s''' around %s", toinspect, home_pos)
return toinspect, home_pos
class Expander:
"""Simple expression expander. Uses tokenize to find the names and
expands any that can be looked up in the frame.
"""
def __init__(self, locals, globals):
self.locals = locals
self.globals = globals
self.lpos = None
self.expanded_source = ''
def __call__(self, ttype, tok, start, end, line):
# TODO
# deal with unicode properly
# TODO
# Dealing with instance members
# always keep the last thing seen
# if the current token is a dot,
# get ready to getattr(lastthing, this thing) on the
# next call.
if self.lpos is not None:
if start[1] >= self.lpos:
self.expanded_source += ' ' * (start[1]-self.lpos)
elif start[1] < self.lpos:
# newline, indent correctly
self.expanded_source += ' ' * start[1]
self.lpos = end[1]
if ttype == tokenize.INDENT:
pass
elif ttype == tokenize.NAME:
# Clean this junk up
try:
val = self.locals[tok]
if callable(val):
val = tok
else:
val = repr(val)
except KeyError:
try:
val = self.globals[tok]
if callable(val):
val = tok
else:
val = repr(val)
except KeyError:
val = tok
# FIXME... not sure how to handle things like funcs, classes
# FIXME this is broken for some unicode strings
self.expanded_source += val
else:
self.expanded_source += tok
# if this is the end of the line and the line ends with
# \, then tack a \ and newline onto the output
# print line[end[1]:]
if re.match(r'\s+\\\n', line[end[1]:]):
self.expanded_source += ' \\\n'
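(Note, not part of the vendored file: the `Expander` above tokenizes the failing source line and substitutes each resolvable name with the `repr()` of its value from the frame's locals/globals. A much-simplified Python 3 sketch of that substitution step; the function name `expand_names` is hypothetical and it ignores the column-tracking and callable handling the real class does.)

```python
import io
import tokenize

def expand_names(source, namespace):
    """Replace NAME tokens found in `namespace` with the repr() of
    their value, leaving all other tokens as-is."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string in namespace:
            out.append(repr(namespace[tok.string]))
        else:
            out.append(tok.string)
    return ' '.join(out).strip()
```

Applied to a failing assertion expression like `x == y`, this turns it into the concrete comparison that failed.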

623
lib/spack/external/nose/loader.py vendored Normal file

@@ -0,0 +1,623 @@
"""
Test Loader
-----------
nose's test loader implements the same basic functionality as its
superclass, unittest.TestLoader, but extends it by more liberal
interpretations of what may be a test and how a test may be named.
"""
from __future__ import generators
import logging
import os
import sys
import unittest
import types
from inspect import isfunction
from nose.pyversion import unbound_method, ismethod
from nose.case import FunctionTestCase, MethodTestCase
from nose.failure import Failure
from nose.config import Config
from nose.importer import Importer, add_path, remove_path
from nose.selector import defaultSelector, TestAddress
from nose.util import func_lineno, getpackage, isclass, isgenerator, \
ispackage, regex_last_key, resolve_name, transplant_func, \
transplant_class, test_address
from nose.suite import ContextSuiteFactory, ContextList, LazySuite
from nose.pyversion import sort_list, cmp_to_key
log = logging.getLogger(__name__)
#log.setLevel(logging.DEBUG)
# for efficiency and easier mocking
op_normpath = os.path.normpath
op_abspath = os.path.abspath
op_join = os.path.join
op_isdir = os.path.isdir
op_isfile = os.path.isfile
__all__ = ['TestLoader', 'defaultTestLoader']
class TestLoader(unittest.TestLoader):
"""Test loader that extends unittest.TestLoader to:
* Load tests from test-like functions and classes that are not
unittest.TestCase subclasses
* Find and load test modules in a directory
* Support tests that are generators
* Support easy extensions of or changes to that behavior through plugins
"""
config = None
importer = None
workingDir = None
selector = None
suiteClass = None
def __init__(self, config=None, importer=None, workingDir=None,
selector=None):
"""Initialize a test loader.
Parameters (all optional):
* config: provide a `nose.config.Config`_ or other config class
instance; if not provided a `nose.config.Config`_ with
default values is used.
* importer: provide an importer instance that implements
`importFromPath`. If not provided, a
`nose.importer.Importer`_ is used.
* workingDir: the directory to which file and module names are
relative. If not provided, assumed to be the current working
directory.
* selector: a selector class or instance. If a class is
provided, it will be instantiated with one argument, the
current config. If not provided, a `nose.selector.Selector`_
is used.
"""
if config is None:
config = Config()
if importer is None:
importer = Importer(config=config)
if workingDir is None:
workingDir = config.workingDir
if selector is None:
selector = defaultSelector(config)
elif isclass(selector):
selector = selector(config)
self.config = config
self.importer = importer
self.workingDir = op_normpath(op_abspath(workingDir))
self.selector = selector
if config.addPaths:
add_path(workingDir, config)
self.suiteClass = ContextSuiteFactory(config=config)
self._visitedPaths = set([])
unittest.TestLoader.__init__(self)
def getTestCaseNames(self, testCaseClass):
"""Override to select with selector, unless
config.getTestCaseNamesCompat is True
"""
if self.config.getTestCaseNamesCompat:
return unittest.TestLoader.getTestCaseNames(self, testCaseClass)
def wanted(attr, cls=testCaseClass, sel=self.selector):
item = getattr(cls, attr, None)
if isfunction(item):
item = unbound_method(cls, item)
elif not ismethod(item):
return False
return sel.wantMethod(item)
cases = filter(wanted, dir(testCaseClass))
# add runTest if nothing else picked
if not cases and hasattr(testCaseClass, 'runTest'):
cases = ['runTest']
if self.sortTestMethodsUsing:
sort_list(cases, cmp_to_key(self.sortTestMethodsUsing))
return cases
def _haveVisited(self, path):
# For cases where path is None, we always pretend we haven't visited
# them.
if path is None:
return False
return path in self._visitedPaths
def _addVisitedPath(self, path):
if path is not None:
self._visitedPaths.add(path)
def loadTestsFromDir(self, path):
"""Load tests from the directory at path. This is a generator
-- each suite of tests from a module or other file is yielded
and is expected to be executed before the next file is
examined.
"""
log.debug("load from dir %s", path)
plugins = self.config.plugins
plugins.beforeDirectory(path)
if self.config.addPaths:
paths_added = add_path(path, self.config)
entries = os.listdir(path)
sort_list(entries, regex_last_key(self.config.testMatch))
for entry in entries:
# this hard-coded initial-dot test will be removed:
# http://code.google.com/p/python-nose/issues/detail?id=82
if entry.startswith('.'):
continue
entry_path = op_abspath(op_join(path, entry))
is_file = op_isfile(entry_path)
wanted = False
if is_file:
is_dir = False
wanted = self.selector.wantFile(entry_path)
else:
is_dir = op_isdir(entry_path)
if is_dir:
# this hard-coded initial-underscore test will be removed:
# http://code.google.com/p/python-nose/issues/detail?id=82
if entry.startswith('_'):
continue
wanted = self.selector.wantDirectory(entry_path)
is_package = ispackage(entry_path)
# Python 3.3 now implements PEP 420: Implicit Namespace Packages.
# As a result, it's now possible that parent paths that have a
# segment with the same basename as our package ends up
# in module.__path__. So we have to keep track of what we've
# visited, and not-revisit them again.
if wanted and not self._haveVisited(entry_path):
self._addVisitedPath(entry_path)
if is_file:
plugins.beforeContext()
if entry.endswith('.py'):
yield self.loadTestsFromName(
entry_path, discovered=True)
else:
yield self.loadTestsFromFile(entry_path)
plugins.afterContext()
elif is_package:
# Load the entry as a package: given the full path,
# loadTestsFromName() will figure it out
yield self.loadTestsFromName(
entry_path, discovered=True)
else:
# Another test dir in this one: recurse lazily
yield self.suiteClass(
lambda: self.loadTestsFromDir(entry_path))
tests = []
for test in plugins.loadTestsFromDir(path):
tests.append(test)
# TODO: is this try/except needed?
try:
if tests:
yield self.suiteClass(tests)
except (KeyboardInterrupt, SystemExit):
raise
except:
yield self.suiteClass([Failure(*sys.exc_info())])
# pop paths
if self.config.addPaths:
for p in paths_added:
remove_path(p)
plugins.afterDirectory(path)
def loadTestsFromFile(self, filename):
"""Load tests from a non-module file. Default is to raise a
ValueError; plugins may implement `loadTestsFromFile` to
provide a list of tests loaded from the file.
"""
log.debug("Load from non-module file %s", filename)
try:
tests = [test for test in
self.config.plugins.loadTestsFromFile(filename)]
if tests:
# Plugins can yield False to indicate that they were
# unable to load tests from a file, but it was not an
# error -- the file just had no tests to load.
tests = filter(None, tests)
return self.suiteClass(tests)
else:
# Nothing was able to even try to load from this file
open(filename, 'r').close() # trigger os error
raise ValueError("Unable to load tests from file %s"
% filename)
except (KeyboardInterrupt, SystemExit):
raise
except:
exc = sys.exc_info()
return self.suiteClass(
[Failure(exc[0], exc[1], exc[2],
address=(filename, None, None))])
def loadTestsFromGenerator(self, generator, module):
"""Lazy-load tests from a generator function. The generator function
may yield either:
* a callable, or
* a function name resolvable within the same module
"""
def generate(g=generator, m=module):
try:
for test in g():
test_func, arg = self.parseGeneratedTest(test)
if not callable(test_func):
test_func = getattr(m, test_func)
yield FunctionTestCase(test_func, arg=arg, descriptor=g)
except KeyboardInterrupt:
raise
except:
exc = sys.exc_info()
yield Failure(exc[0], exc[1], exc[2],
address=test_address(generator))
return self.suiteClass(generate, context=generator, can_split=False)
def loadTestsFromGeneratorMethod(self, generator, cls):
"""Lazy-load tests from a generator method.
This is more complicated than loading from a generator function,
since a generator method may yield:
* a function
* a bound or unbound method, or
* a method name
"""
# convert the unbound generator method
# into a bound method so it can be called below
if hasattr(generator, 'im_class'):
cls = generator.im_class
inst = cls()
method = generator.__name__
generator = getattr(inst, method)
def generate(g=generator, c=cls):
try:
for test in g():
test_func, arg = self.parseGeneratedTest(test)
if not callable(test_func):
test_func = unbound_method(c, getattr(c, test_func))
if ismethod(test_func):
yield MethodTestCase(test_func, arg=arg, descriptor=g)
elif callable(test_func):
# In this case we're forcing the 'MethodTestCase'
# to run the inline function as its test call,
# but using the generator method as the 'method of
# record' (so no need to pass it as the descriptor)
yield MethodTestCase(g, test=test_func, arg=arg)
else:
yield Failure(
TypeError,
"%s is not a callable or method" % test_func)
except KeyboardInterrupt:
raise
except:
exc = sys.exc_info()
yield Failure(exc[0], exc[1], exc[2],
address=test_address(generator))
return self.suiteClass(generate, context=generator, can_split=False)
def loadTestsFromModule(self, module, path=None, discovered=False):
"""Load all tests from module and return a suite containing
them. If the module has been discovered and is not test-like,
the suite will be empty by default, though plugins may add
their own tests.
"""
log.debug("Load from module %s", module)
tests = []
test_classes = []
test_funcs = []
# For *discovered* modules, we only load tests when the module looks
# testlike. For modules we've been directed to load, we always
# look for tests. (discovered is set to True by loadTestsFromDir)
if not discovered or self.selector.wantModule(module):
for item in dir(module):
test = getattr(module, item, None)
# print "Check %s (%s) in %s" % (item, test, module.__name__)
if isclass(test):
if self.selector.wantClass(test):
test_classes.append(test)
elif isfunction(test) and self.selector.wantFunction(test):
test_funcs.append(test)
sort_list(test_classes, lambda x: x.__name__)
sort_list(test_funcs, func_lineno)
tests = map(lambda t: self.makeTest(t, parent=module),
test_classes + test_funcs)
# Now, descend into packages
# FIXME can or should this be lazy?
# is this syntax 2.2 compatible?
module_paths = getattr(module, '__path__', [])
if path:
path = os.path.normcase(os.path.realpath(path))
for module_path in module_paths:
log.debug("Load tests from module path %s?", module_path)
log.debug("path: %s os.path.realpath(%s): %s",
path, os.path.normcase(module_path),
os.path.realpath(os.path.normcase(module_path)))
if (self.config.traverseNamespace or not path) or \
os.path.realpath(
os.path.normcase(module_path)).startswith(path):
# Egg files can be on sys.path, so make sure the path is a
# directory before trying to load from it.
if os.path.isdir(module_path):
tests.extend(self.loadTestsFromDir(module_path))
for test in self.config.plugins.loadTestsFromModule(module, path):
tests.append(test)
return self.suiteClass(ContextList(tests, context=module))
def loadTestsFromName(self, name, module=None, discovered=False):
"""Load tests from the entity with the given name.
The name may indicate a file, directory, module, or any object
within a module. See `nose.util.split_test_name` for details on
test name parsing.
"""
# FIXME refactor this method into little bites?
log.debug("load from %s (%s)", name, module)
suite = self.suiteClass
# give plugins first crack
plug_tests = self.config.plugins.loadTestsFromName(name, module)
if plug_tests:
return suite(plug_tests)
addr = TestAddress(name, workingDir=self.workingDir)
if module:
# Two cases:
# name is class.foo
# The addr will be incorrect, since it thinks class.foo is
# a dotted module name. It's actually a dotted attribute
# name. In this case we want to use the full submitted
# name as the name to load from the module.
# name is module:class.foo
# The addr will be correct. The part we want is the part after
# the :, which is in addr.call.
if addr.call:
name = addr.call
parent, obj = self.resolve(name, module)
if (isclass(parent)
and getattr(parent, '__module__', None) != module.__name__
and not isinstance(obj, Failure)):
parent = transplant_class(parent, module.__name__)
obj = getattr(parent, obj.__name__)
log.debug("parent %s obj %s module %s", parent, obj, module)
if isinstance(obj, Failure):
return suite([obj])
else:
return suite(ContextList([self.makeTest(obj, parent)],
context=parent))
else:
if addr.module:
try:
if addr.filename is None:
module = resolve_name(addr.module)
else:
self.config.plugins.beforeImport(
addr.filename, addr.module)
# FIXME: to support module.name names,
# do what resolve-name does and keep trying to
# import, popping tail of module into addr.call,
# until we either get an import or run out of
# module parts
try:
module = self.importer.importFromPath(
addr.filename, addr.module)
finally:
self.config.plugins.afterImport(
addr.filename, addr.module)
except (KeyboardInterrupt, SystemExit):
raise
except:
exc = sys.exc_info()
return suite([Failure(exc[0], exc[1], exc[2],
address=addr.totuple())])
if addr.call:
return self.loadTestsFromName(addr.call, module)
else:
return self.loadTestsFromModule(
module, addr.filename,
discovered=discovered)
elif addr.filename:
path = addr.filename
if addr.call:
package = getpackage(path)
if package is None:
return suite([
Failure(ValueError,
"Can't find callable %s in file %s: "
"file is not a python module" %
(addr.call, path),
address=addr.totuple())])
return self.loadTestsFromName(addr.call, module=package)
else:
if op_isdir(path):
# In this case we *can* be lazy since we know
# that each module in the dir will be fully
# loaded before its tests are executed; we
# also know that we're not going to be asked
# to load from . and ./some_module.py *as part
# of this named test load*
return LazySuite(
lambda: self.loadTestsFromDir(path))
elif op_isfile(path):
return self.loadTestsFromFile(path)
else:
return suite([
Failure(OSError, "No such file %s" % path,
address=addr.totuple())])
else:
# just a function? what to do? I think it can only be
# handled when module is not None
return suite([
Failure(ValueError, "Unresolvable test name %s" % name,
address=addr.totuple())])
def loadTestsFromNames(self, names, module=None):
"""Load tests from all names, returning a suite containing all
tests.
"""
plug_res = self.config.plugins.loadTestsFromNames(names, module)
if plug_res:
suite, names = plug_res
if suite:
return self.suiteClass([
self.suiteClass(suite),
unittest.TestLoader.loadTestsFromNames(self, names, module)
])
return unittest.TestLoader.loadTestsFromNames(self, names, module)
def loadTestsFromTestCase(self, testCaseClass):
"""Load tests from a unittest.TestCase subclass.
"""
cases = []
plugins = self.config.plugins
for case in plugins.loadTestsFromTestCase(testCaseClass):
cases.append(case)
# For efficiency in the most common case, just call and return from
# super. This avoids having to extract cases and rebuild a context
# suite when there are no plugin-contributed cases.
if not cases:
return super(TestLoader, self).loadTestsFromTestCase(testCaseClass)
cases.extend(
[case for case in
super(TestLoader, self).loadTestsFromTestCase(testCaseClass)])
return self.suiteClass(cases)
def loadTestsFromTestClass(self, cls):
"""Load tests from a test class that is *not* a unittest.TestCase
subclass.
In this case, we can't depend on the class's `__init__` taking method
name arguments, so we have to compose a MethodTestCase for each
method in the class that looks testlike.
"""
def wanted(attr, cls=cls, sel=self.selector):
item = getattr(cls, attr, None)
if isfunction(item):
item = unbound_method(cls, item)
elif not ismethod(item):
return False
return sel.wantMethod(item)
cases = [self.makeTest(getattr(cls, case), cls)
for case in filter(wanted, dir(cls))]
for test in self.config.plugins.loadTestsFromTestClass(cls):
cases.append(test)
return self.suiteClass(ContextList(cases, context=cls))
def makeTest(self, obj, parent=None):
try:
return self._makeTest(obj, parent)
except (KeyboardInterrupt, SystemExit):
raise
except:
exc = sys.exc_info()
try:
addr = test_address(obj)
except KeyboardInterrupt:
raise
except:
addr = None
return Failure(exc[0], exc[1], exc[2], address=addr)
def _makeTest(self, obj, parent=None):
"""Given a test object and its parent, return a test case
or test suite.
"""
plug_tests = []
try:
addr = test_address(obj)
except KeyboardInterrupt:
raise
except:
addr = None
for test in self.config.plugins.makeTest(obj, parent):
plug_tests.append(test)
# TODO: is this try/except needed?
try:
if plug_tests:
return self.suiteClass(plug_tests)
except (KeyboardInterrupt, SystemExit):
raise
except:
exc = sys.exc_info()
return Failure(exc[0], exc[1], exc[2], address=addr)
if isfunction(obj) and parent and not isinstance(parent, types.ModuleType):
# This is a Python 3.x 'unbound method'. Wrap it with its
# associated class..
obj = unbound_method(parent, obj)
if isinstance(obj, unittest.TestCase):
return obj
elif isclass(obj):
if parent and obj.__module__ != parent.__name__:
obj = transplant_class(obj, parent.__name__)
if issubclass(obj, unittest.TestCase):
return self.loadTestsFromTestCase(obj)
else:
return self.loadTestsFromTestClass(obj)
elif ismethod(obj):
if parent is None:
parent = obj.__class__
if issubclass(parent, unittest.TestCase):
return parent(obj.__name__)
else:
if isgenerator(obj):
return self.loadTestsFromGeneratorMethod(obj, parent)
else:
return MethodTestCase(obj)
elif isfunction(obj):
if parent and obj.__module__ != parent.__name__:
obj = transplant_func(obj, parent.__name__)
if isgenerator(obj):
return self.loadTestsFromGenerator(obj, parent)
else:
return FunctionTestCase(obj)
else:
return Failure(TypeError,
"Can't make a test from %s" % obj,
address=addr)
def resolve(self, name, module):
"""Resolve name within module
"""
obj = module
parts = name.split('.')
for part in parts:
parent, obj = obj, getattr(obj, part, None)
if obj is None:
# no such test
obj = Failure(ValueError, "No such test %s" % name)
return parent, obj
def parseGeneratedTest(self, test):
"""Given the yield value of a test generator, return a func and args.
This is used in the two loadTestsFromGenerator* methods.
"""
if not isinstance(test, tuple): # yield test
test_func, arg = (test, tuple())
elif len(test) == 1: # yield (test,)
test_func, arg = (test[0], tuple())
else: # yield test, foo, bar, ...
assert len(test) > 1 # sanity check
test_func, arg = (test[0], test[1:])
return test_func, arg
defaultTestLoader = TestLoader


@ -0,0 +1,190 @@
"""
Writing Plugins
---------------
nose supports plugins for test collection, selection, observation and
reporting. There are two basic rules for plugins:
* Plugin classes should subclass :class:`nose.plugins.Plugin`.
* Plugins may implement any of the methods described in the class
:doc:`IPluginInterface <interface>` in nose.plugins.base. Please note that
this class is for documentary purposes only; plugins may not subclass
IPluginInterface.
Hello World
===========
Here's a basic plugin. It doesn't do much so read on for more ideas or dive
into the :doc:`IPluginInterface <interface>` to see all available hooks.
.. code-block:: python
import logging
import os
from nose.plugins import Plugin
log = logging.getLogger('nose.plugins.helloworld')
class HelloWorld(Plugin):
name = 'helloworld'
def options(self, parser, env=os.environ):
super(HelloWorld, self).options(parser, env=env)
def configure(self, options, conf):
super(HelloWorld, self).configure(options, conf)
if not self.enabled:
return
def finalize(self, result):
log.info('Hello pluginized world!')
Registering
===========
.. Note::
Important note: the following applies only to the default
plugin manager. Other plugin managers may use different means to
locate and load plugins.
For nose to find a plugin, it must be part of a package that uses
setuptools_, and the plugin must be included in the entry points defined
in the setup.py for the package:
.. code-block:: python
setup(name='Some plugin',
# ...
entry_points = {
'nose.plugins.0.10': [
'someplugin = someplugin:SomePlugin'
]
},
# ...
)
Once the package is installed with install or develop, nose will be able
to load the plugin.
.. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools
Registering a plugin without setuptools
=======================================
It is currently possible to register a plugin programmatically by
creating a custom nose runner like this:
.. code-block:: python
import nose
from yourplugin import YourPlugin
if __name__ == '__main__':
nose.main(addplugins=[YourPlugin()])
Defining options
================
All plugins must implement the methods ``options(self, parser, env)``
and ``configure(self, options, conf)``. Subclasses of nose.plugins.Plugin
that want the standard options should call the superclass methods.
nose uses optparse.OptionParser from the standard library to parse
arguments. A plugin's ``options()`` method receives a parser
instance. It's good form for a plugin to use that instance only to add
additional arguments that take only long arguments (--like-this). Most
of nose's built-in arguments get their default value from an environment
variable.
A plugin's ``configure()`` method receives the parsed ``OptionParser`` options
object, as well as the current config object. Plugins should configure their
behavior based on the user-selected settings, and may raise exceptions
if the configured behavior is nonsensical.
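The option-registration machinery described above can be sketched with the
standard library alone; the option and environment-variable names below are
illustrative, not part of nose:

```python
# Stdlib-only sketch of the optparse mechanics nose plugins build on;
# the option and environment-variable names here are illustrative.
from optparse import OptionParser

env = {'NOSE_WITH_EXAMPLE': '1'}  # stand-in for os.environ
parser = OptionParser()
parser.add_option('--with-example',
                  action='store_true',
                  dest='enable_plugin_example',
                  default=env.get('NOSE_WITH_EXAMPLE'),
                  help='Enable plugin Example [NOSE_WITH_EXAMPLE]')

# With no command-line flag, the environment variable supplies the default.
options, args = parser.parse_args([])
print(options.enable_plugin_example)
```

This mirrors how ``Plugin.options()`` wires ``--with-$name`` to a
``NOSE_WITH_$NAME`` environment default.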
Logging
=======
nose uses the logging classes from the standard library. To enable users
to view debug messages easily, plugins should use ``logging.getLogger()`` to
acquire a logger in the ``nose.plugins`` namespace.
Recipes
=======
* Writing a plugin that monitors or controls test result output
Implement any or all of ``addError``, ``addFailure``, etc., to monitor test
results. If you also want to monitor output, implement
``setOutputStream`` and keep a reference to the output stream. If you
want to prevent the builtin ``TextTestResult`` output, implement
``setOutputStream`` and *return a dummy stream*. The default output will go
to the dummy stream, while you send your desired output to the real stream.
Example: `examples/html_plugin/htmlplug.py`_
* Writing a plugin that handles exceptions
Subclass :doc:`ErrorClassPlugin <errorclasses>`.
Examples: :doc:`nose.plugins.deprecated <deprecated>`,
:doc:`nose.plugins.skip <skip>`
* Writing a plugin that adds detail to error reports
Implement ``formatError`` and/or ``formatFailure``. The error tuple
you return (error class, error message, traceback) will replace the
original error tuple.
Examples: :doc:`nose.plugins.capture <capture>`,
:doc:`nose.plugins.failuredetail <failuredetail>`
* Writing a plugin that loads tests from files other than python modules
Implement ``wantFile`` and ``loadTestsFromFile``. In ``wantFile``,
return True for files that you want to examine for tests. In
``loadTestsFromFile``, for those files, return an iterable
containing TestCases (or yield them as you find them;
``loadTestsFromFile`` may also be a generator).
Example: :doc:`nose.plugins.doctests <doctests>`
* Writing a plugin that prints a report
Implement ``begin`` if you need to perform setup before testing
begins. Implement ``report`` and output your report to the provided stream.
Examples: :doc:`nose.plugins.cover <cover>`, :doc:`nose.plugins.prof <prof>`
* Writing a plugin that selects or rejects tests
Implement any or all ``want*`` methods. Return False to reject the test
candidate, True to accept it -- which means that the test candidate
will pass through the rest of the system, so you must be prepared to
load tests from it if tests can't be loaded by the core loader or
another plugin -- and None if you don't care.
Examples: :doc:`nose.plugins.attrib <attrib>`,
:doc:`nose.plugins.doctests <doctests>`, :doc:`nose.plugins.testid <testid>`
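The three-valued ``want*`` protocol in the last recipe can be sketched
without nose at all; the filename patterns here are purely illustrative:

```python
# Sketch of a want*-style predicate: True accepts the candidate,
# False rejects it, None expresses no opinion (defer to nose or to
# other plugins). The filename patterns are purely illustrative.
def wantFile(path):
    if path.endswith('_probe.py'):
        return True    # accept; be prepared to load tests from it
    if path.endswith('.cfg'):
        return False   # reject outright
    return None        # no opinion

results = [wantFile(p) for p in ('net_probe.py', 'setup.cfg', 'util.py')]
print(results)  # [True, False, None]
```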
More Examples
=============
See any builtin plugin or example plugin in the examples_ directory in
the nose source distribution. There is a list of third-party plugins
`on jottit`_.
.. _examples/html_plugin/htmlplug.py: http://python-nose.googlecode.com/svn/trunk/examples/html_plugin/htmlplug.py
.. _examples: http://python-nose.googlecode.com/svn/trunk/examples
.. _on jottit: http://nose-plugins.jottit.com/
"""
from nose.plugins.base import Plugin
from nose.plugins.manager import *
from nose.plugins.plugintest import PluginTester
if __name__ == '__main__':
import doctest
doctest.testmod()


@ -0,0 +1,45 @@
"""Use the AllModules plugin by passing ``--all-modules`` or setting the
NOSE_ALL_MODULES environment variable to enable collection and execution of
tests in all python modules. Normal nose behavior is to look for tests only in
modules that match testMatch.
More information: :doc:`../doc_tests/test_allmodules/test_allmodules`
.. warning ::
This plugin can have surprising interactions with plugins that load tests
from what nose normally considers non-test modules, such as
the :doc:`doctest plugin <doctests>`. This is because any given
object in a module can't be loaded both by a plugin and the normal nose
:class:`test loader <nose.loader.TestLoader>`. Also, if you have functions
or classes in non-test modules that look like tests but aren't, you will
likely see errors as nose attempts to run them as tests.
"""
import os
from nose.plugins.base import Plugin
class AllModules(Plugin):
"""Collect tests from all python modules.
"""
def options(self, parser, env):
"""Register commandline options.
"""
env_opt = 'NOSE_ALL_MODULES'
parser.add_option('--all-modules',
action="store_true",
dest=self.enableOpt,
default=env.get(env_opt),
help="Enable plugin %s: %s [%s]" %
(self.__class__.__name__, self.help(), env_opt))
def wantFile(self, file):
"""Override to return True for all files ending with .py"""
# always want .py files
if file.endswith('.py'):
return True
def wantModule(self, module):
"""Override to return True for all modules."""
return True


@ -0,0 +1,286 @@
"""Attribute selector plugin.
Oftentimes when testing you will want to select tests based on
criteria rather than simply by filename. For example, you might want
to run all tests except for the slow ones. You can do this with the
Attribute selector plugin by setting attributes on your test methods.
Here is an example:
.. code-block:: python
def test_big_download():
import urllib
# commence slowness...
test_big_download.slow = 1
Once you've assigned an attribute ``slow = 1`` you can exclude that
test and all other tests having the slow attribute by running ::
$ nosetests -a '!slow'
There is also a decorator available for you that will set attributes.
Here's how to set ``slow=1`` like above with the decorator:
.. code-block:: python
from nose.plugins.attrib import attr
@attr('slow')
def test_big_download():
import urllib
# commence slowness...
And here's how to set an attribute with a specific value:
.. code-block:: python
from nose.plugins.attrib import attr
@attr(speed='slow')
def test_big_download():
import urllib
# commence slowness...
This test could be run with ::
$ nosetests -a speed=slow
In Python 2.6 and higher, ``@attr`` can be used on a class to set attributes
on all its test methods at once. For example:
.. code-block:: python
from nose.plugins.attrib import attr
@attr(speed='slow')
class MyTestCase:
def test_long_integration(self):
pass
def test_end_to_end_something(self):
pass
Below is a reference to the different syntaxes available.
Simple syntax
-------------
Examples of using the ``-a`` and ``--attr`` options:
* ``nosetests -a status=stable``
Only runs tests with attribute "status" having value "stable"
* ``nosetests -a priority=2,status=stable``
Runs tests having both attributes and values
* ``nosetests -a priority=2 -a slow``
Runs tests that match either attribute
* ``nosetests -a tags=http``
If a test's ``tags`` attribute was a list and it contained the value
``http`` then it would be run
* ``nosetests -a slow``
Runs tests with the attribute ``slow`` if its value does not equal False
(False, [], "", etc...)
* ``nosetests -a '!slow'``
Runs tests that do NOT have the attribute ``slow`` or have a ``slow``
attribute that is equal to False
**NOTE**:
if your shell (like bash) interprets '!' as a special character make sure to
put single quotes around it.
Expression Evaluation
---------------------
Examples using the ``-A`` and ``--eval-attr`` options:
* ``nosetests -A "not slow"``
Evaluates the Python expression "not slow" and runs the test if True
* ``nosetests -A "(priority > 5) and not slow"``
Evaluates a complex Python expression and runs the test if True
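Mechanically, the ``-A`` expression is evaluated with the test's attributes
visible as names, and missing attributes read as False. A stdlib-only sketch
of that lookup (the attribute values are illustrative), mirroring the
``ContextHelper`` class this module uses with ``eval``:

```python
# Stdlib-only sketch of eval-based attribute selection: the expression
# sees the test's attributes as names; missing names read as False.
def test_big_download():
    pass
test_big_download.slow = True
test_big_download.priority = 7

class AttrContext(object):
    """Map names to attributes of a test function for eval()."""
    def __init__(self, func):
        self.func = func
    def __getitem__(self, name):
        return getattr(self.func, name, False)

selected = eval("(priority > 5) and slow", None, AttrContext(test_big_download))
print(selected)  # True
```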
"""
import inspect
import logging
import os
import sys
from inspect import isfunction
from nose.plugins.base import Plugin
from nose.util import tolist
log = logging.getLogger('nose.plugins.attrib')
compat_24 = sys.version_info >= (2, 4)
def attr(*args, **kwargs):
"""Decorator that adds attributes to classes or functions
for use with the Attribute (-a) plugin.
"""
def wrap_ob(ob):
for name in args:
setattr(ob, name, True)
for name, value in kwargs.iteritems():
setattr(ob, name, value)
return ob
return wrap_ob
def get_method_attr(method, cls, attr_name, default = False):
"""Look up an attribute on a method/function.
If the attribute isn't found there, it is looked up on the
method's class, if any.
"""
Missing = object()
value = getattr(method, attr_name, Missing)
if value is Missing and cls is not None:
value = getattr(cls, attr_name, Missing)
if value is Missing:
return default
return value
class ContextHelper:
"""Object that can act as context dictionary for eval and looks up
names as attributes on a method/function and its class.
"""
def __init__(self, method, cls):
self.method = method
self.cls = cls
def __getitem__(self, name):
return get_method_attr(self.method, self.cls, name)
class AttributeSelector(Plugin):
"""Selects test cases to be run based on their attributes.
"""
def __init__(self):
Plugin.__init__(self)
self.attribs = []
def options(self, parser, env):
"""Register command line options"""
parser.add_option("-a", "--attr",
dest="attr", action="append",
default=env.get('NOSE_ATTR'),
metavar="ATTR",
help="Run only tests that have attributes "
"specified by ATTR [NOSE_ATTR]")
# disable in < 2.4: eval can't take needed args
if compat_24:
parser.add_option("-A", "--eval-attr",
dest="eval_attr", metavar="EXPR", action="append",
default=env.get('NOSE_EVAL_ATTR'),
help="Run only tests for whose attributes "
"the Python expression EXPR evaluates "
"to True [NOSE_EVAL_ATTR]")
def configure(self, options, config):
"""Configure the plugin and system, based on selected options.
attr and eval_attr may each be lists.
self.attribs will be a list of lists of tuples. In that list, each
list is a group of attributes, all of which must match for the rule to
match.
"""
self.attribs = []
# handle python eval-expression parameter
if compat_24 and options.eval_attr:
eval_attr = tolist(options.eval_attr)
for attr in eval_attr:
# "<python expression>"
# -> eval(expr) in attribute context must be True
def eval_in_context(expr, obj, cls):
return eval(expr, None, ContextHelper(obj, cls))
self.attribs.append([(attr, eval_in_context)])
# attribute requirements are a comma separated list of
# 'key=value' pairs
if options.attr:
std_attr = tolist(options.attr)
for attr in std_attr:
# all attributes within an attribute group must match
attr_group = []
for attrib in attr.strip().split(","):
# don't die on trailing comma
if not attrib:
continue
items = attrib.split("=", 1)
if len(items) > 1:
# "name=value"
# -> 'str(obj.name) == value' must be True
key, value = items
else:
key = items[0]
if key[0] == "!":
# "!name"
# 'bool(obj.name)' must be False
key = key[1:]
value = False
else:
# "name"
# -> 'bool(obj.name)' must be True
value = True
attr_group.append((key, value))
self.attribs.append(attr_group)
if self.attribs:
self.enabled = True
def validateAttrib(self, method, cls = None):
"""Verify whether a method has the required attributes
The method is considered a match if it matches all attributes
for any attribute group.
"""
# TODO: is there a need for case-sensitive value comparison?
any = False
for group in self.attribs:
match = True
for key, value in group:
attr = get_method_attr(method, cls, key)
if callable(value):
if not value(key, method, cls):
match = False
break
elif value is True:
# value must exist and be True
if not bool(attr):
match = False
break
elif value is False:
# value must not exist or be False
if bool(attr):
match = False
break
elif type(attr) in (list, tuple):
# value must be found in the list attribute
if not str(value).lower() in [str(x).lower()
for x in attr]:
match = False
break
else:
# value must match, convert to string and compare
if (value != attr
and str(value).lower() != str(attr).lower()):
match = False
break
any = any or match
if any:
# not True because we don't want to FORCE the selection of the
# item, only say that it is acceptable
return None
return False
def wantFunction(self, function):
"""Accept the function if its attributes match.
"""
return self.validateAttrib(function)
def wantMethod(self, method):
"""Accept the method if its attributes match.
"""
try:
cls = method.im_class
except AttributeError:
return False
return self.validateAttrib(method, cls)

lib/spack/external/nose/plugins/base.py vendored Normal file

@ -0,0 +1,725 @@
import os
import textwrap
from optparse import OptionConflictError
from warnings import warn
from nose.util import tolist
class Plugin(object):
"""Base class for nose plugins. It's recommended but not *necessary* to
subclass this class to create a plugin, but all plugins *must* implement
`options(self, parser, env)` and `configure(self, options, conf)`, and
must have the attributes `enabled`, `name` and `score`. The `name`
attribute may contain hyphens ('-').
Plugins should not be enabled by default.
Subclassing Plugin (and calling the superclass methods in
__init__, configure, and options, if you override them) will give
your plugin some friendly default behavior:
* A --with-$name option will be added to the command line interface
to enable the plugin, and a corresponding environment variable
will be used as the default value. The plugin class's docstring
will be used as the help for this option.
* The plugin will not be enabled unless this option is selected by
the user.
"""
can_configure = False
enabled = False
enableOpt = None
name = None
score = 100
def __init__(self):
if self.name is None:
self.name = self.__class__.__name__.lower()
if self.enableOpt is None:
self.enableOpt = "enable_plugin_%s" % self.name.replace('-', '_')
def addOptions(self, parser, env=None):
"""Add command-line options for this plugin.
The base plugin class adds --with-$name by default, used to enable the
plugin.
.. warning :: Don't implement addOptions unless you want to override
all default option handling behavior, including
warnings for conflicting options. Implement
:meth:`options
<nose.plugins.base.IPluginInterface.options>`
instead.
"""
self.add_options(parser, env)
def add_options(self, parser, env=None):
"""Non-camel-case version of func name for backwards compatibility.
.. warning ::
DEPRECATED: Do not use this method,
use :meth:`options <nose.plugins.base.IPluginInterface.options>`
instead.
"""
# FIXME raise deprecation warning if wasn't called by wrapper
if env is None:
env = os.environ
try:
self.options(parser, env)
self.can_configure = True
except OptionConflictError, e:
warn("Plugin %s has conflicting option string: %s and will "
"be disabled" % (self, e), RuntimeWarning)
self.enabled = False
self.can_configure = False
def options(self, parser, env):
"""Register commandline options.
Implement this method for normal options behavior with protection from
OptionConflictErrors. If you override this method and want the default
--with-$name option to be registered, be sure to call super().
"""
env_opt = 'NOSE_WITH_%s' % self.name.upper()
env_opt = env_opt.replace('-', '_')
parser.add_option("--with-%s" % self.name,
action="store_true",
dest=self.enableOpt,
default=env.get(env_opt),
help="Enable plugin %s: %s [%s]" %
(self.__class__.__name__, self.help(), env_opt))
def configure(self, options, conf):
"""Configure the plugin and system, based on selected options.
The base plugin class sets the plugin to enabled if the enable option
for the plugin (self.enableOpt) is true.
"""
if not self.can_configure:
return
self.conf = conf
if hasattr(options, self.enableOpt):
self.enabled = getattr(options, self.enableOpt)
def help(self):
"""Return help for this plugin. This will be output as the help
section of the --with-$name option that enables the plugin.
"""
if self.__class__.__doc__:
# doc sections are often indented; compress the spaces
return textwrap.dedent(self.__class__.__doc__)
return "(no help available)"
# Compatibility shim
def tolist(self, val):
warn("Plugin.tolist is deprecated. Use nose.util.tolist instead",
DeprecationWarning)
return tolist(val)
class IPluginInterface(object):
"""
IPluginInterface describes the plugin API. Do not subclass or use this
class directly.
"""
def __new__(cls, *arg, **kw):
raise TypeError("IPluginInterface class is for documentation only")
def addOptions(self, parser, env):
"""Called to allow plugin to register command-line options with the
parser. DO NOT return a value from this method unless you want to stop
all other plugins from setting their options.
.. warning ::
DEPRECATED -- implement
:meth:`options <nose.plugins.base.IPluginInterface.options>` instead.
"""
pass
add_options = addOptions
add_options.deprecated = True
def addDeprecated(self, test):
"""Called when a deprecated test is seen. DO NOT return a value
unless you want to stop other plugins from seeing the deprecated
test.
.. warning :: DEPRECATED -- check error class in addError instead
"""
pass
addDeprecated.deprecated = True
def addError(self, test, err):
"""Called when a test raises an uncaught exception. DO NOT return a
value unless you want to stop other plugins from seeing that the
test has raised an error.
:param test: the test case
:type test: :class:`nose.case.Test`
:param err: sys.exc_info() tuple
:type err: 3-tuple
"""
pass
addError.changed = True
def addFailure(self, test, err):
"""Called when a test fails. DO NOT return a value unless you
want to stop other plugins from seeing that the test has failed.
:param test: the test case
:type test: :class:`nose.case.Test`
:param err: sys.exc_info() tuple
:type err: 3-tuple
"""
pass
addFailure.changed = True
def addSkip(self, test):
"""Called when a test is skipped. DO NOT return a value unless
you want to stop other plugins from seeing the skipped test.
.. warning:: DEPRECATED -- check error class in addError instead
"""
pass
addSkip.deprecated = True
def addSuccess(self, test):
"""Called when a test passes. DO NOT return a value unless you
want to stop other plugins from seeing the passing test.
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
addSuccess.changed = True
def afterContext(self):
"""Called after a context (generally a module) has been
lazy-loaded, imported, setup, had its tests loaded and
executed, and torn down.
"""
pass
afterContext._new = True
def afterDirectory(self, path):
"""Called after all tests have been loaded from directory at path
and run.
:param path: the directory that has finished processing
:type path: string
"""
pass
afterDirectory._new = True
def afterImport(self, filename, module):
"""Called after module is imported from filename. afterImport
is called even if the import failed.
:param filename: The file that was loaded
:type filename: string
:param module: The name of the module
:type module: string
"""
pass
afterImport._new = True
def afterTest(self, test):
"""Called after the test has been run and the result recorded
(after stopTest).
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
afterTest._new = True
def beforeContext(self):
"""Called before a context (generally a module) is
examined. Because the context is not yet loaded, plugins don't
get to know what the context is; so any context operations
should use a stack that is pushed in `beforeContext` and popped
in `afterContext` to ensure they operate symmetrically.
`beforeContext` and `afterContext` are mainly useful for tracking
and restoring global state around possible changes from within a
context, whatever the context may be. If you need to operate on
contexts themselves, see `startContext` and `stopContext`, which
are passed the context in question, but are called after
it has been loaded (imported in the module case).
"""
pass
beforeContext._new = True
def beforeDirectory(self, path):
"""Called before tests are loaded from directory at path.
:param path: the directory that is about to be processed
"""
pass
beforeDirectory._new = True
def beforeImport(self, filename, module):
"""Called before module is imported from filename.
:param filename: The file that will be loaded
:param module: The name of the module found in file
:type module: string
"""
beforeImport._new = True
def beforeTest(self, test):
"""Called before the test is run (before startTest).
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
beforeTest._new = True
def begin(self):
"""Called before any tests are collected or run. Use this to
perform any setup needed before testing begins.
"""
pass
def configure(self, options, conf):
"""Called after the command line has been parsed, with the
parsed options and the config container. Here, implement any
config storage or changes to state or operation that are set
by command line options.
DO NOT return a value from this method unless you want to
stop all other plugins from being configured.
"""
pass
def finalize(self, result):
"""Called after all report output, including output from all
plugins, has been sent to the stream. Use this to print final
test results or perform final cleanup. Return None to allow
other plugins to continue printing, or any other value to stop
them.
:param result: test result object
.. Note:: When tests are run under a test runner other than
:class:`nose.core.TextTestRunner`, such as
via ``python setup.py test``, this method may be called
**before** the default report output is sent.
"""
pass
def describeTest(self, test):
"""Return a test description.
Called by :meth:`nose.case.Test.shortDescription`.
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
describeTest._new = True
def formatError(self, test, err):
"""Called in result.addError, before plugin.addError. If you
want to replace or modify the error tuple, return a new error
tuple, otherwise return err, the original error tuple.
:param test: the test case
:type test: :class:`nose.case.Test`
:param err: sys.exc_info() tuple
:type err: 3-tuple
"""
pass
formatError._new = True
formatError.chainable = True
# test arg is not chainable
formatError.static_args = (True, False)
def formatFailure(self, test, err):
"""Called in result.addFailure, before plugin.addFailure. If you
want to replace or modify the error tuple, return a new error
tuple, otherwise return err, the original error tuple.
:param test: the test case
:type test: :class:`nose.case.Test`
:param err: sys.exc_info() tuple
:type err: 3-tuple
"""
pass
formatFailure._new = True
formatFailure.chainable = True
# test arg is not chainable
formatFailure.static_args = (True, False)
def handleError(self, test, err):
"""Called on addError. To handle the error yourself and prevent normal
error processing, return a true value.
:param test: the test case
:type test: :class:`nose.case.Test`
:param err: sys.exc_info() tuple
:type err: 3-tuple
"""
pass
handleError._new = True
def handleFailure(self, test, err):
"""Called on addFailure. To handle the failure yourself and
prevent normal failure processing, return a true value.
:param test: the test case
:type test: :class:`nose.case.Test`
:param err: sys.exc_info() tuple
:type err: 3-tuple
"""
pass
handleFailure._new = True
def loadTestsFromDir(self, path):
"""Return iterable of tests from a directory. May be a
generator. Each item returned must be a runnable
unittest.TestCase (or subclass) instance or suite instance.
Return None if your plugin cannot collect any tests from
directory.
:param path: The path to the directory.
"""
pass
loadTestsFromDir.generative = True
loadTestsFromDir._new = True
def loadTestsFromModule(self, module, path=None):
"""Return iterable of tests in a module. May be a
generator. Each item returned must be a runnable
unittest.TestCase (or subclass) instance.
Return None if your plugin cannot
collect any tests from module.
:param module: The module object
:type module: python module
:param path: the path of the module to search, to distinguish from
namespace package modules
.. note::
NEW. The ``path`` parameter will only be passed by nose 0.11
or above.
"""
pass
loadTestsFromModule.generative = True
def loadTestsFromName(self, name, module=None, importPath=None):
"""Return tests in this file or module. Return None if you are not able
to load any tests, or an iterable if you are. May be a
generator.
:param name: The test name. May be a file or module name plus a test
callable. Use split_test_name to split into parts. Or it might
be some crazy name of your own devising, in which case, do
whatever you want.
:param module: Module from which the name is to be loaded
:param importPath: Path from which file (must be a python module) was
found
.. warning:: DEPRECATED: this argument will NOT be passed.
"""
pass
loadTestsFromName.generative = True
def loadTestsFromNames(self, names, module=None):
"""Return a tuple of (tests loaded, remaining names). Return
None if you are not able to load any tests. Multiple plugins
may implement loadTestsFromNames; the remaining name list from
each will be passed to the next as input.
:param names: List of test names.
:type names: iterable
:param module: Module from which the names are to be loaded
"""
pass
loadTestsFromNames._new = True
loadTestsFromNames.chainable = True
def loadTestsFromFile(self, filename):
"""Return tests in this file. Return None if you are not
interested in loading any tests, or an iterable if you are and
can load some. May be a generator. *If you are interested in
loading tests from the file and encounter no errors, but find
no tests, yield False or return [False].*
.. Note:: This method replaces loadTestsFromPath from the 0.9
API.
:param filename: The full path to the file or directory.
"""
pass
loadTestsFromFile.generative = True
loadTestsFromFile._new = True
def loadTestsFromPath(self, path):
"""
.. warning:: DEPRECATED -- use loadTestsFromFile instead
"""
pass
loadTestsFromPath.deprecated = True
def loadTestsFromTestCase(self, cls):
"""Return tests in this test case class. Return None if you are
not able to load any tests, or an iterable if you are. May be a
generator.
:param cls: The test case class. Must be subclass of
:class:`unittest.TestCase`.
"""
pass
loadTestsFromTestCase.generative = True
def loadTestsFromTestClass(self, cls):
"""Return tests in this test class. Class will *not* be a
unittest.TestCase subclass. Return None if you are not able to
load any tests, or an iterable if you are. May be a generator.
:param cls: The test case class. Must **not** be a subclass of
:class:`unittest.TestCase`.
"""
pass
loadTestsFromTestClass._new = True
loadTestsFromTestClass.generative = True
def makeTest(self, obj, parent):
"""Given an object and its parent, return or yield one or more
test cases. Each test must be a unittest.TestCase (or subclass)
instance. This is called before default test loading to allow
plugins to load an alternate test case or cases for an
object. May be a generator.
:param obj: The object to be made into a test
:param parent: The parent of obj (eg, for a method, the class)
"""
pass
makeTest._new = True
makeTest.generative = True
def options(self, parser, env):
"""Called to allow plugin to register command line
options with the parser.
DO NOT return a value from this method unless you want to stop
all other plugins from setting their options.
:param parser: options parser instance
:type parser: :class:`optparse.OptionParser`
:param env: environment, default is os.environ
"""
pass
options._new = True
def prepareTest(self, test):
"""Called before the test is run by the test runner. Please
note the article *the* in the previous sentence: prepareTest
is called *only once*, and is passed the test case or test
suite that the test runner will execute. It is *not* called
for each individual test case. If you return a non-None value,
that return value will be run as the test. Use this hook to
wrap or decorate the test with another function. If you need
to modify or wrap individual test cases, use `prepareTestCase`
instead.
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
def prepareTestCase(self, test):
"""Prepare or wrap an individual test case. Called before
execution of the test. The test passed here is a
nose.case.Test instance; the case to be executed is in the
test attribute of the passed case. To modify the test to be
run, you should return a callable that takes one argument (the
test result object) -- it is recommended that you *do not*
side-effect the nose.case.Test instance you have been passed.
Keep in mind that when you replace the test callable you are
replacing the run() method of the test case -- including the
exception handling and result calls, etc.
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
prepareTestCase._new = True
def prepareTestLoader(self, loader):
"""Called before tests are loaded. To replace the test loader,
return a test loader. To allow other plugins to process the
test loader, return None. Only one plugin may replace the test
loader. Only valid when using nose.TestProgram.
:param loader: :class:`nose.loader.TestLoader`
(or other loader) instance
"""
pass
prepareTestLoader._new = True
def prepareTestResult(self, result):
"""Called before the first test is run. To use a different
test result handler for all tests than the given result,
return a test result handler. NOTE however that this handler
will only be seen by tests, that is, inside of the result
proxy system. The TestRunner and TestProgram -- whether nose's
or other -- will continue to see the original result
handler. For this reason, it is usually better to monkeypatch
the result (for instance, if you want to handle some
exceptions in a unique way). Only one plugin may replace the
result, but many may monkeypatch it. If you want to
monkeypatch and stop other plugins from doing so, monkeypatch
and return the patched result.
:param result: :class:`nose.result.TextTestResult`
(or other result) instance
"""
pass
prepareTestResult._new = True
def prepareTestRunner(self, runner):
"""Called before tests are run. To replace the test runner,
return a test runner. To allow other plugins to process the
test runner, return None. Only valid when using nose.TestProgram.
:param runner: :class:`nose.core.TextTestRunner`
(or other runner) instance
"""
pass
prepareTestRunner._new = True
def report(self, stream):
"""Called after all error output has been printed. Print your
plugin's report to the provided stream. Return None to allow
other plugins to print reports, any other value to stop them.
:param stream: stream object; send your output here
:type stream: file-like object
"""
pass
def setOutputStream(self, stream):
"""Called before test output begins. To direct test output to a
new stream, return a stream object, which must implement a
`write(msg)` method. If you only want to note the stream, not
capture or redirect it, then return None.
:param stream: stream object; send your output here
:type stream: file-like object
"""
def startContext(self, context):
"""Called before context setup and the running of tests in the
context. Note that tests have already been *loaded* from the
context before this call.
:param context: the context about to be setup. May be a module or
class, or any other object that contains tests.
"""
pass
startContext._new = True
def startTest(self, test):
"""Called before each test is run. DO NOT return a value unless
you want to stop other plugins from seeing the test start.
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
def stopContext(self, context):
"""Called after the tests in a context have run and the
context has been torn down.
:param context: the context that has been torn down. May be a module or
class, or any other object that contains tests.
"""
pass
stopContext._new = True
def stopTest(self, test):
"""Called after each test is run. DO NOT return a value unless
you want to stop other plugins from seeing that the test has stopped.
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
def testName(self, test):
"""Return a short test name. Called by `nose.case.Test.__str__`.
:param test: the test case
:type test: :class:`nose.case.Test`
"""
pass
testName._new = True
def wantClass(self, cls):
"""Return true if you want the main test selector to collect
tests from this class, false if you don't, and None if you don't
care.
:param cls: The class being examined by the selector
"""
pass
def wantDirectory(self, dirname):
"""Return true if you want test collection to descend into this
directory, false if you do not, and None if you don't care.
:param dirname: Full path to directory being examined by the selector
"""
pass
def wantFile(self, file):
"""Return true if you want to collect tests from this file,
false if you do not and None if you don't care.
Change from 0.9: The optional package parameter is no longer passed.
:param file: Full path to file being examined by the selector
"""
pass
def wantFunction(self, function):
"""Return true to collect this function as a test, false to
prevent it from being collected, and None if you don't care.
:param function: The function object being examined by the selector
"""
pass
def wantMethod(self, method):
"""Return true to collect this method as a test, false to
prevent it from being collected, and None if you don't care.
:param method: The method object being examined by the selector
:type method: unbound method
"""
pass
def wantModule(self, module):
"""Return true if you want to collection to descend into this
module, false to prevent the collector from descending into the
module, and None if you don't care.
:param module: The module object being examined by the selector
:type module: python module
"""
pass
def wantModuleTests(self, module):
"""
.. warning:: DEPRECATED -- this method will not be called, it has
been folded into wantModule.
"""
pass
wantModuleTests.deprecated = True
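The chainable hooks above (such as ``loadTestsFromNames``) pass the names they could not handle on to the next plugin. A minimal self-contained sketch of that chaining contract, using toy callables in place of real plugins (this is an illustration, not nose's actual dispatch code):

```python
def chain_load(plugins, names):
    """Each 'plugin' may load some tests and return a (loaded, remaining)
    pair; the remaining names are fed to the next plugin in the chain.
    A plugin that returns None loaded nothing and leaves names untouched."""
    tests = []
    for plugin in plugins:
        result = plugin(names)
        if result is None:
            continue
        loaded, names = result
        tests.extend(loaded)
        if not names:
            break
    return tests, names

# Two toy "plugins": one handles names starting with "a", one with "b".
plug_a = lambda ns: ([n for n in ns if n.startswith("a")],
                     [n for n in ns if not n.startswith("a")])
plug_b = lambda ns: ([n for n in ns if n.startswith("b")],
                     [n for n in ns if not n.startswith("b")])

tests, left = chain_load([plug_a, plug_b], ["a1", "b1", "c1"])
```

Here ``tests`` collects what each plugin claimed and ``left`` holds names no plugin could load.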


@@ -0,0 +1,34 @@
"""
Lists builtin plugins.
"""
plugins = []
builtins = (
('nose.plugins.attrib', 'AttributeSelector'),
('nose.plugins.capture', 'Capture'),
('nose.plugins.logcapture', 'LogCapture'),
('nose.plugins.cover', 'Coverage'),
('nose.plugins.debug', 'Pdb'),
('nose.plugins.deprecated', 'Deprecated'),
('nose.plugins.doctests', 'Doctest'),
('nose.plugins.isolate', 'IsolationPlugin'),
('nose.plugins.failuredetail', 'FailureDetail'),
('nose.plugins.prof', 'Profile'),
('nose.plugins.skip', 'Skip'),
('nose.plugins.testid', 'TestId'),
('nose.plugins.multiprocess', 'MultiProcess'),
('nose.plugins.xunit', 'Xunit'),
('nose.plugins.allmodules', 'AllModules'),
('nose.plugins.collect', 'CollectOnly'),
)
for module, cls in builtins:
try:
plugmod = __import__(module, globals(), locals(), [cls])
except KeyboardInterrupt:
raise
except:
continue
plug = getattr(plugmod, cls)
plugins.append(plug)
globals()[cls] = plug
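The registration loop above imports each builtin plugin by dotted module name and skips any that fail to import. A hedged, self-contained sketch of that pattern (using the stdlib ``json.decoder`` module as a stand-in target, since nose itself may not be importable here):

```python
def load_plugin(module, cls):
    """Import `module` by dotted name and return its `cls` attribute.
    Mirrors the builtins loop: KeyboardInterrupt propagates, any other
    import failure just skips the plugin (returns None)."""
    try:
        # The fromlist argument makes __import__ return the leaf module
        # (json.decoder) rather than the top-level package (json).
        plugmod = __import__(module, globals(), locals(), [cls])
    except KeyboardInterrupt:
        raise
    except Exception:
        return None
    return getattr(plugmod, cls, None)

plug = load_plugin("json.decoder", "JSONDecoder")
```

A bare ``except:`` as in the original is deliberately broad: a broken optional plugin should never stop the rest from registering.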


@@ -0,0 +1,115 @@
"""
This plugin captures stdout during test execution. If the test fails
or raises an error, the captured output will be appended to the error
or failure output. It is enabled by default but can be disabled with
the options ``-s`` or ``--nocapture``.
:Options:
``--nocapture``
Don't capture stdout (any stdout output will be printed immediately)
"""
import logging
import os
import sys
from nose.plugins.base import Plugin
from nose.pyversion import exc_to_unicode, force_unicode
from nose.util import ln
from StringIO import StringIO
log = logging.getLogger(__name__)
class Capture(Plugin):
"""
Output capture plugin. Enabled by default. Disable with ``-s`` or
``--nocapture``. This plugin captures stdout during test execution,
appending any output captured to the error or failure output,
should the test fail or raise an error.
"""
enabled = True
env_opt = 'NOSE_NOCAPTURE'
name = 'capture'
score = 1600
def __init__(self):
self.stdout = []
self._buf = None
def options(self, parser, env):
"""Register commandline options
"""
parser.add_option(
"-s", "--nocapture", action="store_false",
default=not env.get(self.env_opt), dest="capture",
help="Don't capture stdout (any stdout output "
"will be printed immediately) [NOSE_NOCAPTURE]")
def configure(self, options, conf):
"""Configure plugin. Plugin is enabled by default.
"""
self.conf = conf
if not options.capture:
self.enabled = False
def afterTest(self, test):
"""Clear capture buffer.
"""
self.end()
self._buf = None
def begin(self):
"""Replace sys.stdout with capture buffer.
"""
self.start() # get an early handle on sys.stdout
def beforeTest(self, test):
"""Flush capture buffer.
"""
self.start()
def formatError(self, test, err):
"""Add captured output to error report.
"""
test.capturedOutput = output = self.buffer
self._buf = None
if not output:
# Don't return None as that will prevent other
# formatters from formatting and remove earlier formatters
# formats, instead return the err we got
return err
ec, ev, tb = err
return (ec, self.addCaptureToErr(ev, output), tb)
def formatFailure(self, test, err):
"""Add captured output to failure report.
"""
return self.formatError(test, err)
def addCaptureToErr(self, ev, output):
ev = exc_to_unicode(ev)
output = force_unicode(output)
return u'\n'.join([ev, ln(u'>> begin captured stdout <<'),
output, ln(u'>> end captured stdout <<')])
def start(self):
self.stdout.append(sys.stdout)
self._buf = StringIO()
sys.stdout = self._buf
def end(self):
if self.stdout:
sys.stdout = self.stdout.pop()
def finalize(self, result):
"""Restore stdout.
"""
while self.stdout:
self.end()
def _get_buffer(self):
if self._buf is not None:
return self._buf.getvalue()
buffer = property(_get_buffer, None, None,
"""Captured stdout output.""")


@@ -0,0 +1,94 @@
"""
This plugin bypasses the actual execution of tests, and instead just collects
test names. Fixtures are also bypassed, so running nosetests with the
collection plugin enabled should be very quick.
This plugin is useful in combination with the testid plugin (``--with-id``).
Run both together to get an indexed list of all tests, which will enable you to
run individual tests by index number.
This plugin is also useful for counting tests in a test suite, and making
people watching your demo think all of your tests pass.
"""
from nose.plugins.base import Plugin
from nose.case import Test
import logging
import unittest
log = logging.getLogger(__name__)
class CollectOnly(Plugin):
"""
Collect and output test names only, don't run any tests.
"""
name = "collect-only"
enableOpt = 'collect_only'
def options(self, parser, env):
"""Register commandline options.
"""
parser.add_option('--collect-only',
action='store_true',
dest=self.enableOpt,
default=env.get('NOSE_COLLECT_ONLY'),
help="Enable collect-only: %s [COLLECT_ONLY]" %
(self.help()))
def prepareTestLoader(self, loader):
"""Install collect-only suite class in TestLoader.
"""
# Disable context awareness
log.debug("Preparing test loader")
loader.suiteClass = TestSuiteFactory(self.conf)
def prepareTestCase(self, test):
"""Replace actual test with dummy that always passes.
"""
# Return something that always passes
log.debug("Preparing test case %s", test)
if not isinstance(test, Test):
return
def run(result):
# We need to make these plugin calls because there won't be
# a result proxy, due to using a stripped-down test suite
self.conf.plugins.startTest(test)
result.startTest(test)
self.conf.plugins.addSuccess(test)
result.addSuccess(test)
self.conf.plugins.stopTest(test)
result.stopTest(test)
return run
class TestSuiteFactory:
"""
Factory for producing configured test suites.
"""
def __init__(self, conf):
self.conf = conf
def __call__(self, tests=(), **kw):
return TestSuite(tests, conf=self.conf)
class TestSuite(unittest.TestSuite):
"""
Basic test suite that bypasses most proxy and plugin calls, but does
wrap tests in a nose.case.Test so prepareTestCase will be called.
"""
def __init__(self, tests=(), conf=None):
self.conf = conf
# Exec lazy suites: makes discovery depth-first
if callable(tests):
tests = tests()
log.debug("TestSuite(%r)", tests)
unittest.TestSuite.__init__(self, tests)
def addTest(self, test):
log.debug("Add test %s", test)
if isinstance(test, unittest.TestSuite):
self._tests.append(test)
else:
self._tests.append(Test(test, config=self.conf))
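``prepareTestCase`` above replaces each test's ``run()`` with a callable that reports success without executing anything. A self-contained sketch of that replacement, with a toy result recorder in place of nose's result proxy (names here are illustrative, not nose API):

```python
class Recorder:
    """Toy result object that just records the calls it receives."""
    def __init__(self):
        self.events = []
    def startTest(self, t):
        self.events.append(("start", t))
    def addSuccess(self, t):
        self.events.append(("success", t))
    def stopTest(self, t):
        self.events.append(("stop", t))

def make_collect_only_run(test):
    """Sketch of CollectOnly.prepareTestCase: return a run() replacement
    that walks the start/success/stop sequence without running the test."""
    def run(result):
        result.startTest(test)
        result.addSuccess(test)
        result.stopTest(test)
    return run

r = Recorder()
make_collect_only_run("test_widget")(r)
```

Because the replacement callable *is* the test's ``run()``, it must perform the full result-call sequence itself; skipping ``stopTest`` would leave the result in a bad state.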

lib/spack/external/nose/plugins/cover.py (vendored)

@@ -0,0 +1,271 @@
"""If you have Ned Batchelder's coverage_ module installed, you may activate a
coverage report with the ``--with-coverage`` switch or NOSE_WITH_COVERAGE
environment variable. The coverage report will cover any python source module
imported after the start of the test run, excluding modules that match
testMatch. If you want to include those modules too, use the ``--cover-tests``
switch, or set the NOSE_COVER_TESTS environment variable to a true value. To
restrict the coverage report to modules from a particular package or packages,
use the ``--cover-package`` switch or the NOSE_COVER_PACKAGE environment
variable.
.. _coverage: http://www.nedbatchelder.com/code/modules/coverage.html
"""
import logging
import re
import sys
import StringIO
from nose.plugins.base import Plugin
from nose.util import src, tolist
log = logging.getLogger(__name__)
class Coverage(Plugin):
"""
Activate a coverage report using Ned Batchelder's coverage module.
"""
coverTests = False
coverPackages = None
coverInstance = None
coverErase = False
coverMinPercentage = None
score = 200
status = {}
def options(self, parser, env):
"""
Add options to command line.
"""
super(Coverage, self).options(parser, env)
parser.add_option("--cover-package", action="append",
default=env.get('NOSE_COVER_PACKAGE'),
metavar="PACKAGE",
dest="cover_packages",
help="Restrict coverage output to selected packages "
"[NOSE_COVER_PACKAGE]")
parser.add_option("--cover-erase", action="store_true",
default=env.get('NOSE_COVER_ERASE'),
dest="cover_erase",
help="Erase previously collected coverage "
"statistics before run")
parser.add_option("--cover-tests", action="store_true",
dest="cover_tests",
default=env.get('NOSE_COVER_TESTS'),
help="Include test modules in coverage report "
"[NOSE_COVER_TESTS]")
parser.add_option("--cover-min-percentage", action="store",
dest="cover_min_percentage",
default=env.get('NOSE_COVER_MIN_PERCENTAGE'),
help="Minimum percentage of coverage for tests "
"to pass [NOSE_COVER_MIN_PERCENTAGE]")
parser.add_option("--cover-inclusive", action="store_true",
dest="cover_inclusive",
default=env.get('NOSE_COVER_INCLUSIVE'),
help="Include all python files under working "
"directory in coverage report. Useful for "
"discovering holes in test coverage if not all "
"files are imported by the test suite. "
"[NOSE_COVER_INCLUSIVE]")
parser.add_option("--cover-html", action="store_true",
default=env.get('NOSE_COVER_HTML'),
dest='cover_html',
help="Produce HTML coverage information")
parser.add_option('--cover-html-dir', action='store',
default=env.get('NOSE_COVER_HTML_DIR', 'cover'),
dest='cover_html_dir',
metavar='DIR',
help='Produce HTML coverage information in dir')
parser.add_option("--cover-branches", action="store_true",
default=env.get('NOSE_COVER_BRANCHES'),
dest="cover_branches",
help="Include branch coverage in coverage report "
"[NOSE_COVER_BRANCHES]")
parser.add_option("--cover-xml", action="store_true",
default=env.get('NOSE_COVER_XML'),
dest="cover_xml",
help="Produce XML coverage information")
parser.add_option("--cover-xml-file", action="store",
default=env.get('NOSE_COVER_XML_FILE', 'coverage.xml'),
dest="cover_xml_file",
metavar="FILE",
help="Produce XML coverage information in file")
def configure(self, options, conf):
"""
Configure plugin.
"""
try:
self.status.pop('active')
except KeyError:
pass
super(Coverage, self).configure(options, conf)
if self.enabled:
try:
import coverage
if not hasattr(coverage, 'coverage'):
raise ImportError("Unable to import coverage module")
except ImportError:
log.error("Coverage not available: "
"unable to import coverage module")
self.enabled = False
return
self.conf = conf
self.coverErase = options.cover_erase
self.coverTests = options.cover_tests
self.coverPackages = []
if options.cover_packages:
if isinstance(options.cover_packages, (list, tuple)):
cover_packages = options.cover_packages
else:
cover_packages = [options.cover_packages]
for pkgs in [tolist(x) for x in cover_packages]:
self.coverPackages.extend(pkgs)
self.coverInclusive = options.cover_inclusive
if self.coverPackages:
log.info("Coverage report will include only packages: %s",
self.coverPackages)
self.coverHtmlDir = None
if options.cover_html:
self.coverHtmlDir = options.cover_html_dir
log.debug('Will put HTML coverage report in %s', self.coverHtmlDir)
self.coverBranches = options.cover_branches
self.coverXmlFile = None
if options.cover_min_percentage:
self.coverMinPercentage = int(options.cover_min_percentage.rstrip('%'))
if options.cover_xml:
self.coverXmlFile = options.cover_xml_file
log.debug('Will put XML coverage report in %s', self.coverXmlFile)
if self.enabled:
self.status['active'] = True
self.coverInstance = coverage.coverage(auto_data=False,
branch=self.coverBranches, data_suffix=conf.worker,
source=self.coverPackages)
self.coverInstance._warn_no_data = False
self.coverInstance.is_worker = conf.worker
self.coverInstance.exclude('#pragma[: ]+[nN][oO] [cC][oO][vV][eE][rR]')
log.debug("Coverage begin")
self.skipModules = sys.modules.keys()[:]
if self.coverErase:
log.debug("Clearing previously collected coverage statistics")
self.coverInstance.combine()
self.coverInstance.erase()
if not self.coverInstance.is_worker:
self.coverInstance.load()
self.coverInstance.start()
def beforeTest(self, *args, **kwargs):
"""
Begin recording coverage information.
"""
if self.coverInstance.is_worker:
self.coverInstance.load()
self.coverInstance.start()
def afterTest(self, *args, **kwargs):
"""
Stop recording coverage information.
"""
if self.coverInstance.is_worker:
self.coverInstance.stop()
self.coverInstance.save()
def report(self, stream):
"""
Output code coverage report.
"""
log.debug("Coverage report")
self.coverInstance.stop()
self.coverInstance.combine()
self.coverInstance.save()
modules = [module
for name, module in sys.modules.items()
if self.wantModuleCoverage(name, module)]
log.debug("Coverage report will cover modules: %s", modules)
self.coverInstance.report(modules, file=stream)
import coverage
if self.coverHtmlDir:
log.debug("Generating HTML coverage report")
try:
self.coverInstance.html_report(modules, self.coverHtmlDir)
except coverage.misc.CoverageException, e:
log.warning("Failed to generate HTML report: %s" % str(e))
if self.coverXmlFile:
log.debug("Generating XML coverage report")
try:
self.coverInstance.xml_report(modules, self.coverXmlFile)
except coverage.misc.CoverageException, e:
log.warning("Failed to generate XML report: %s" % str(e))
# make sure we have minimum required coverage
if self.coverMinPercentage:
f = StringIO.StringIO()
self.coverInstance.report(modules, file=f)
multiPackageRe = (r'-------\s\w+\s+\d+\s+\d+(?:\s+\d+\s+\d+)?'
r'\s+(\d+)%\s+\d*\s{0,1}$')
singlePackageRe = (r'-------\s[\w./]+\s+\d+\s+\d+(?:\s+\d+\s+\d+)?'
r'\s+(\d+)%(?:\s+[-\d, ]+)\s{0,1}$')
m = re.search(multiPackageRe, f.getvalue())
if m is None:
m = re.search(singlePackageRe, f.getvalue())
if m:
percentage = int(m.groups()[0])
if percentage < self.coverMinPercentage:
log.error('TOTAL Coverage did not reach minimum '
'required: %d%%' % self.coverMinPercentage)
sys.exit(1)
else:
log.error("No total percentage was found in coverage output, "
"something went wrong.")
def wantModuleCoverage(self, name, module):
if not hasattr(module, '__file__'):
log.debug("no coverage of %s: no __file__", name)
return False
module_file = src(module.__file__)
if not module_file or not module_file.endswith('.py'):
log.debug("no coverage of %s: not a python file", name)
return False
if self.coverPackages:
for package in self.coverPackages:
if (re.findall(r'^%s\b' % re.escape(package), name)
and (self.coverTests
or not self.conf.testMatch.search(name))):
log.debug("coverage for %s", name)
return True
if name in self.skipModules:
log.debug("no coverage for %s: loaded before coverage start",
name)
return False
if self.conf.testMatch.search(name) and not self.coverTests:
log.debug("no coverage for %s: is a test", name)
return False
# accept any package that passed the previous tests, unless
# coverPackages is on -- in that case, if we wanted this
# module, we would have already returned True
return not self.coverPackages
def wantFile(self, file, package=None):
"""If inclusive coverage enabled, return true for all source files
in wanted packages.
"""
if self.coverInclusive:
if file.endswith(".py"):
if package and self.coverPackages:
for want in self.coverPackages:
if package.startswith(want):
return True
else:
return True
return None
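The package filter in ``wantModuleCoverage`` anchors each wanted package name at a word boundary, so ``nose`` matches ``nose.util`` but not ``nosey``. A small sketch of just that test (illustrative helper name, not part of the plugin):

```python
import re

def matches_package(name, packages):
    """Sketch of the cover plugin's package check: a module counts if its
    dotted name starts with a wanted package followed by a word boundary
    ('.' or end of string), so prefixes like 'nosey' are excluded."""
    return any(re.findall(r'^%s\b' % re.escape(pkg), name)
               for pkg in packages)
```

``re.escape`` matters here: package names contain ``.``, which would otherwise match any character.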


@@ -0,0 +1,67 @@
"""
This plugin provides ``--pdb`` and ``--pdb-failures`` options. The ``--pdb``
option will drop the test runner into pdb when it encounters an error. To
drop into pdb on failure, use ``--pdb-failures``.
"""
import pdb
from nose.plugins.base import Plugin
class Pdb(Plugin):
"""
Provides --pdb and --pdb-failures options that cause the test runner to
drop into pdb if it encounters an error or failure, respectively.
"""
enabled_for_errors = False
enabled_for_failures = False
score = 5 # run last, among builtins
def options(self, parser, env):
"""Register commandline options.
"""
parser.add_option(
"--pdb", action="store_true", dest="debugBoth",
default=env.get('NOSE_PDB', False),
help="Drop into debugger on failures or errors")
parser.add_option(
"--pdb-failures", action="store_true",
dest="debugFailures",
default=env.get('NOSE_PDB_FAILURES', False),
help="Drop into debugger on failures")
parser.add_option(
"--pdb-errors", action="store_true",
dest="debugErrors",
default=env.get('NOSE_PDB_ERRORS', False),
help="Drop into debugger on errors")
def configure(self, options, conf):
"""Configure which kinds of exceptions trigger plugin.
"""
self.conf = conf
self.enabled_for_errors = options.debugErrors or options.debugBoth
self.enabled_for_failures = options.debugFailures or options.debugBoth
self.enabled = self.enabled_for_failures or self.enabled_for_errors
def addError(self, test, err):
"""Enter pdb if configured to debug errors.
"""
if not self.enabled_for_errors:
return
self.debug(err)
def addFailure(self, test, err):
"""Enter pdb if configured to debug failures.
"""
if not self.enabled_for_failures:
return
self.debug(err)
def debug(self, err):
import sys # FIXME why is this import here?
ec, ev, tb = err
stdout = sys.stdout
sys.stdout = sys.__stdout__
try:
pdb.post_mortem(tb)
finally:
sys.stdout = stdout


@@ -0,0 +1,45 @@
"""
This plugin installs a DEPRECATED error class for the :class:`DeprecatedTest`
exception. When :class:`DeprecatedTest` is raised, the exception will be logged
in the deprecated attribute of the result, ``D`` or ``DEPRECATED`` (verbose)
will be output, and the exception will not be counted as an error or failure.
It is enabled by default, but can be turned off by using ``--no-deprecated``.
"""
from nose.plugins.errorclass import ErrorClass, ErrorClassPlugin
class DeprecatedTest(Exception):
"""Raise this exception to mark a test as deprecated.
"""
pass
class Deprecated(ErrorClassPlugin):
"""
Installs a DEPRECATED error class for the DeprecatedTest exception. Enabled
by default.
"""
enabled = True
deprecated = ErrorClass(DeprecatedTest,
label='DEPRECATED',
isfailure=False)
def options(self, parser, env):
"""Register commandline options.
"""
env_opt = 'NOSE_WITHOUT_DEPRECATED'
parser.add_option('--no-deprecated', action='store_true',
dest='noDeprecated', default=env.get(env_opt, False),
help="Disable special handling of DeprecatedTest "
"exceptions.")
def configure(self, options, conf):
"""Configure plugin.
"""
if not self.can_configure:
return
self.conf = conf
disable = getattr(options, 'noDeprecated', False)
if disable:
self.enabled = False


@@ -0,0 +1,455 @@
"""Use the Doctest plugin with ``--with-doctest`` or the NOSE_WITH_DOCTEST
environment variable to enable collection and execution of :mod:`doctests
<doctest>`. Because doctests are usually included in the tested package
(instead of being grouped into packages or modules of their own), nose only
looks for them in the non-test packages it discovers in the working directory.
Doctests may also be placed into files other than python modules, in which
case they can be collected and executed by using the ``--doctest-extension``
switch or NOSE_DOCTEST_EXTENSION environment variable to indicate which file
extension(s) to load.
When loading doctests from non-module files, use the ``--doctest-fixtures``
switch to specify how to find modules containing fixtures for the tests. A
module name will be produced by appending the value of that switch to the base
name of each doctest file loaded. For example, a doctest file "widgets.rst"
with the switch ``--doctest-fixtures=_fixt`` will load fixtures from the module
``widgets_fixt.py``.
A fixtures module may define any or all of the following functions:
* setup([module]) or setup_module([module])
Called before the test runs. You may raise SkipTest to skip all tests.
* teardown([module]) or teardown_module([module])
Called after the test runs, if setup/setup_module did not raise an
unhandled exception.
* setup_test(test)
Called before the test. NOTE: the argument passed is a
doctest.DocTest instance, *not* a unittest.TestCase.
* teardown_test(test)
Called after the test, if setup_test did not raise an exception. NOTE: the
argument passed is a doctest.DocTest instance, *not* a unittest.TestCase.
Doctests are run like any other test, with the exception that output
capture does not work; doctest does its own output capture while running a
test.
.. note ::
See :doc:`../doc_tests/test_doctest_fixtures/doctest_fixtures` for
additional documentation and examples.
"""
from __future__ import generators
import logging
import os
import sys
import unittest
from inspect import getmodule
from nose.plugins.base import Plugin
from nose.suite import ContextList
from nose.util import anyp, getpackage, test_address, resolve_name, \
src, tolist, isproperty
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
import sys
import __builtin__ as builtin_mod
log = logging.getLogger(__name__)
try:
import doctest
doctest.DocTestCase
# system version of doctest is acceptable, but needs a monkeypatch
except (ImportError, AttributeError):
# system version is too old
import nose.ext.dtcompat as doctest
#
# Doctest and coverage don't get along, so we need to create
# a monkeypatch that will replace the part of doctest that
# interferes with coverage reports.
#
# The monkeypatch is based on this zope patch:
# http://svn.zope.org/Zope3/trunk/src/zope/testing/doctest.py?rev=28679&r1=28703&r2=28705
#
_orp = doctest._OutputRedirectingPdb
class NoseOutputRedirectingPdb(_orp):
def __init__(self, out):
self.__debugger_used = False
_orp.__init__(self, out)
def set_trace(self):
self.__debugger_used = True
_orp.set_trace(self, sys._getframe().f_back)
def set_continue(self):
# Calling set_continue unconditionally would break unit test
# coverage reporting, as Bdb.set_continue calls sys.settrace(None).
if self.__debugger_used:
_orp.set_continue(self)
doctest._OutputRedirectingPdb = NoseOutputRedirectingPdb
class DoctestSuite(unittest.TestSuite):
"""
Doctest suites are parallelizable at the module or file level only,
since they may be attached to objects that are not individually
addressable (like properties). This suite subclass is used when
loading doctests from a module to ensure that behavior.
This class is used only if the plugin is not fully prepared;
in normal use, the loader's suiteClass is used.
"""
can_split = False
def __init__(self, tests=(), context=None, can_split=False):
self.context = context
self.can_split = can_split
unittest.TestSuite.__init__(self, tests=tests)
def address(self):
return test_address(self.context)
def __iter__(self):
# 2.3 compat
return iter(self._tests)
def __str__(self):
return str(self._tests)
class Doctest(Plugin):
"""
Activate doctest plugin to find and run doctests in non-test modules.
"""
extension = None
suiteClass = DoctestSuite
def options(self, parser, env):
"""Register commmandline options.
"""
Plugin.options(self, parser, env)
parser.add_option('--doctest-tests', action='store_true',
dest='doctest_tests',
default=env.get('NOSE_DOCTEST_TESTS'),
help="Also look for doctests in test modules. "
"Note that classes, methods and functions should "
"have either doctests or non-doctest tests, "
"not both. [NOSE_DOCTEST_TESTS]")
parser.add_option('--doctest-extension', action="append",
dest="doctestExtension",
metavar="EXT",
help="Also look for doctests in files with "
"this extension [NOSE_DOCTEST_EXTENSION]")
parser.add_option('--doctest-result-variable',
dest='doctest_result_var',
default=env.get('NOSE_DOCTEST_RESULT_VAR'),
metavar="VAR",
help="Change the variable name set to the result of "
"the last interpreter command from the default '_'. "
"Can be used to avoid conflicts with the _() "
"function used for text translation. "
"[NOSE_DOCTEST_RESULT_VAR]")
parser.add_option('--doctest-fixtures', action="store",
dest="doctestFixtures",
metavar="SUFFIX",
help="Find fixtures for a doctest file in module "
"with this name appended to the base name "
"of the doctest file")
parser.add_option('--doctest-options', action="append",
dest="doctestOptions",
metavar="OPTIONS",
help="Specify options to pass to doctest. " +
"Eg. '+ELLIPSIS,+NORMALIZE_WHITESPACE'")
# Set the default as a list, if given in env; otherwise
# an additional value set on the command line will cause
# an error.
env_setting = env.get('NOSE_DOCTEST_EXTENSION')
if env_setting is not None:
parser.set_defaults(doctestExtension=tolist(env_setting))
def configure(self, options, config):
"""Configure plugin.
"""
Plugin.configure(self, options, config)
self.doctest_result_var = options.doctest_result_var
self.doctest_tests = options.doctest_tests
self.extension = tolist(options.doctestExtension)
self.fixtures = options.doctestFixtures
self.finder = doctest.DocTestFinder()
self.optionflags = 0
if options.doctestOptions:
flags = ",".join(options.doctestOptions).split(',')
for flag in flags:
if not flag or flag[0] not in '+-':
raise ValueError(
"Must specify doctest options with starting " +
"'+' or '-'. Got %s" % (flag,))
mode, option_name = flag[0], flag[1:]
option_flag = doctest.OPTIONFLAGS_BY_NAME.get(option_name)
if not option_flag:
raise ValueError("Unknown doctest option %s" %
(option_name,))
if mode == '+':
self.optionflags |= option_flag
elif mode == '-':
self.optionflags &= ~option_flag
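The flag-parsing loop in ``configure`` above maps names like ``+ELLIPSIS`` onto the stdlib's ``doctest.OPTIONFLAGS_BY_NAME`` bitmask table. A standalone sketch of the same parsing (function name is illustrative; the stdlib table and flag constants are real):

```python
import doctest

def parse_doctest_options(options):
    """Sketch of Doctest.configure's flag parsing: each comma-separated
    entry is '+NAME' to set a doctest option bit or '-NAME' to clear it."""
    flags = 0
    for flag in ",".join(options).split(","):
        if not flag or flag[0] not in "+-":
            raise ValueError("options must start with '+' or '-': %r" % flag)
        mode, name = flag[0], flag[1:]
        bit = doctest.OPTIONFLAGS_BY_NAME.get(name)
        if not bit:
            raise ValueError("unknown doctest option %r" % name)
        flags = flags | bit if mode == "+" else flags & ~bit
    return flags

# Later '-' entries can undo earlier '+' entries, as in the plugin.
f = parse_doctest_options(["+ELLIPSIS,+NORMALIZE_WHITESPACE", "-ELLIPSIS"])
```

Joining the option values with ``","`` first, then splitting, is what lets users pass the flags either as repeated switches or as one comma-separated string.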
def prepareTestLoader(self, loader):
"""Capture loader's suiteClass.
This is used to create test suites from doctest files.
"""
self.suiteClass = loader.suiteClass
def loadTestsFromModule(self, module):
"""Load doctests from the module.
"""
log.debug("loading from %s", module)
if not self.matches(module.__name__):
log.debug("Doctest doesn't want module %s", module)
return
try:
tests = self.finder.find(module)
except AttributeError:
log.exception("Attribute error loading from %s", module)
# nose allows module.__test__ = False; doctest does not and throws
# AttributeError
return
if not tests:
log.debug("No tests found in %s", module)
return
tests.sort()
module_file = src(module.__file__)
# FIXME this breaks the id plugin somehow (tests probably don't
# get wrapped in result proxy or something)
cases = []
for test in tests:
if not test.examples:
continue
if not test.filename:
test.filename = module_file
cases.append(DocTestCase(test,
optionflags=self.optionflags,
result_var=self.doctest_result_var))
if cases:
yield self.suiteClass(cases, context=module, can_split=False)
def loadTestsFromFile(self, filename):
"""Load doctests from the file.
Tests are loaded only if filename's extension matches
configured doctest extension.
"""
if self.extension and anyp(filename.endswith, self.extension):
name = os.path.basename(filename)
dh = open(filename)
try:
doc = dh.read()
finally:
dh.close()
fixture_context = None
globs = {'__file__': filename}
if self.fixtures:
base, ext = os.path.splitext(name)
dirname = os.path.dirname(filename)
sys.path.append(dirname)
fixt_mod = base + self.fixtures
try:
fixture_context = __import__(
fixt_mod, globals(), locals(), ["nop"])
except ImportError, e:
log.debug(
"Could not import %s: %s (%s)", fixt_mod, e, sys.path)
log.debug("Fixture module %s resolved to %s",
fixt_mod, fixture_context)
if hasattr(fixture_context, 'globs'):
globs = fixture_context.globs(globs)
parser = doctest.DocTestParser()
test = parser.get_doctest(
doc, globs=globs, name=name,
filename=filename, lineno=0)
if test.examples:
case = DocFileCase(
test,
optionflags=self.optionflags,
setUp=getattr(fixture_context, 'setup_test', None),
tearDown=getattr(fixture_context, 'teardown_test', None),
result_var=self.doctest_result_var)
if fixture_context:
yield ContextList((case,), context=fixture_context)
else:
yield case
else:
yield False # no tests to load
def makeTest(self, obj, parent):
"""Look for doctests in the given object, which will be a
function, method or class.
"""
        name = getattr(obj, '__name__', 'Unnamed %s' % type(obj))
doctests = self.finder.find(obj, module=getmodule(parent), name=name)
if doctests:
for test in doctests:
if len(test.examples) == 0:
continue
yield DocTestCase(test, obj=obj, optionflags=self.optionflags,
result_var=self.doctest_result_var)
def matches(self, name):
# FIXME this seems wrong -- nothing is ever going to
# fail this test, since we're given a module NAME not FILE
if name == '__init__.py':
return False
# FIXME don't think we need include/exclude checks here?
return ((self.doctest_tests or not self.conf.testMatch.search(name)
or (self.conf.include
and filter(None,
[inc.search(name)
for inc in self.conf.include])))
and (not self.conf.exclude
or not filter(None,
[exc.search(name)
for exc in self.conf.exclude])))
def wantFile(self, file):
"""Override to select all modules and any file ending with
configured doctest extension.
"""
# always want .py files
if file.endswith('.py'):
return True
# also want files that match my extension
if (self.extension
and anyp(file.endswith, self.extension)
and (not self.conf.exclude
or not filter(None,
[exc.search(file)
for exc in self.conf.exclude]))):
return True
return None
class DocTestCase(doctest.DocTestCase):
"""Overrides DocTestCase to
provide an address() method that returns the correct address for
the doctest case. To provide hints for address(), an obj may also
be passed -- this will be used as the test object for purposes of
determining the test address, if it is provided.
"""
def __init__(self, test, optionflags=0, setUp=None, tearDown=None,
checker=None, obj=None, result_var='_'):
self._result_var = result_var
self._nose_obj = obj
super(DocTestCase, self).__init__(
test, optionflags=optionflags, setUp=setUp, tearDown=tearDown,
checker=checker)
def address(self):
if self._nose_obj is not None:
return test_address(self._nose_obj)
obj = resolve_name(self._dt_test.name)
if isproperty(obj):
# properties have no connection to the class they are in
# so we can't just look 'em up, we have to first look up
# the class, then stick the prop on the end
parts = self._dt_test.name.split('.')
class_name = '.'.join(parts[:-1])
cls = resolve_name(class_name)
base_addr = test_address(cls)
return (base_addr[0], base_addr[1],
'.'.join([base_addr[2], parts[-1]]))
else:
return test_address(obj)
# doctests loaded via find(obj) omit the module name
# so we need to override id, __repr__ and shortDescription
    # bonus: this will squash a 2.3 vs 2.4 incompatibility
def id(self):
name = self._dt_test.name
filename = self._dt_test.filename
if filename is not None:
pk = getpackage(filename)
if pk is None:
return name
if not name.startswith(pk):
name = "%s.%s" % (pk, name)
return name
def __repr__(self):
name = self.id()
name = name.split('.')
return "%s (%s)" % (name[-1], '.'.join(name[:-1]))
__str__ = __repr__
def shortDescription(self):
return 'Doctest: %s' % self.id()
def setUp(self):
if self._result_var is not None:
self._old_displayhook = sys.displayhook
sys.displayhook = self._displayhook
super(DocTestCase, self).setUp()
def _displayhook(self, value):
if value is None:
return
setattr(builtin_mod, self._result_var, value)
print repr(value)
def tearDown(self):
super(DocTestCase, self).tearDown()
if self._result_var is not None:
sys.displayhook = self._old_displayhook
delattr(builtin_mod, self._result_var)
class DocFileCase(doctest.DocFileCase):
"""Overrides to provide address() method that returns the correct
address for the doc file case.
"""
def __init__(self, test, optionflags=0, setUp=None, tearDown=None,
checker=None, result_var='_'):
self._result_var = result_var
super(DocFileCase, self).__init__(
test, optionflags=optionflags, setUp=setUp, tearDown=tearDown,
checker=None)
def address(self):
return (self._dt_test.filename, None, None)
def setUp(self):
if self._result_var is not None:
self._old_displayhook = sys.displayhook
sys.displayhook = self._displayhook
super(DocFileCase, self).setUp()
def _displayhook(self, value):
if value is None:
return
setattr(builtin_mod, self._result_var, value)
print repr(value)
def tearDown(self):
super(DocFileCase, self).tearDown()
if self._result_var is not None:
sys.displayhook = self._old_displayhook
delattr(builtin_mod, self._result_var)
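The `setUp`/`tearDown` pairs above swap `sys.displayhook` so that interactive-style results land in a configurable builtins name instead of the default `_` (avoiding clashes with gettext's `_()`). A minimal Python 3 sketch of that pattern as a context manager; `ResultVarCapture` is an illustrative name, not nose API:

```python
import builtins
import sys

class ResultVarCapture:
    """Store the last displayed non-None value under a builtins name."""
    def __init__(self, result_var="result"):
        self._result_var = result_var

    def __enter__(self):
        self._old_displayhook = sys.displayhook
        sys.displayhook = self._displayhook
        return self

    def _displayhook(self, value):
        if value is None:
            return
        setattr(builtins, self._result_var, value)
        print(repr(value))

    def __exit__(self, *exc):
        # restore the hook and remove the captured name, as tearDown does
        sys.displayhook = self._old_displayhook
        if hasattr(builtins, self._result_var):
            delattr(builtins, self._result_var)

with ResultVarCapture("last"):
    sys.displayhook(41 + 1)   # as the interactive loop would call it
    assert builtins.last == 42
assert not hasattr(builtins, "last")
```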


@ -0,0 +1,210 @@
"""
ErrorClass Plugins
------------------
ErrorClass plugins provide an easy way to add support for custom
handling of particular classes of exceptions.
An ErrorClass plugin defines one or more ErrorClasses and how each is
handled and reported on. Each error class is stored in a different
attribute on the result, and reported separately. Each error class must
indicate the exceptions that fall under that class, the label to use
for reporting, and whether exceptions of the class should be
considered as failures for the whole test run.
ErrorClasses use a declarative syntax. Assign an ErrorClass to the
attribute you wish to add to the result object, defining the
exceptions, label and isfailure attributes. For example, to declare an
ErrorClassPlugin that defines TodoErrors (and subclasses of TodoError)
as an error class with the label 'TODO' that is considered a failure,
do this:
>>> class Todo(Exception):
... pass
>>> class TodoError(ErrorClassPlugin):
... todo = ErrorClass(Todo, label='TODO', isfailure=True)
The MetaErrorClass metaclass translates the ErrorClass declarations
into the tuples used by the error handling and reporting functions in
the result. This is an internal format and subject to change; you
should always use the declarative syntax for attaching ErrorClasses to
an ErrorClass plugin.
>>> TodoError.errorClasses # doctest: +ELLIPSIS
((<class ...Todo...>, ('todo', 'TODO', True)),)
Let's see the plugin in action. First some boilerplate.
>>> import sys
>>> import unittest
>>> try:
... # 2.7+
... from unittest.runner import _WritelnDecorator
... except ImportError:
... from unittest import _WritelnDecorator
...
>>> buf = _WritelnDecorator(sys.stdout)
Now define a test case that raises a Todo.
>>> class TestTodo(unittest.TestCase):
... def runTest(self):
... raise Todo("I need to test something")
>>> case = TestTodo()
Prepare the result using our plugin. Normally this happens during the
course of test execution within nose -- you won't be doing this
yourself. For the purposes of this testing document, I'm stepping
through the internal process of nose so you can see what happens at
each step.
>>> plugin = TodoError()
>>> from nose.result import _TextTestResult
>>> result = _TextTestResult(stream=buf, descriptions=0, verbosity=2)
>>> plugin.prepareTestResult(result)
Now run the test. TODO is printed.
>>> _ = case(result) # doctest: +ELLIPSIS
runTest (....TestTodo) ... TODO: I need to test something
Errors and failures are empty, but todo has our test:
>>> result.errors
[]
>>> result.failures
[]
>>> result.todo # doctest: +ELLIPSIS
[(<....TestTodo testMethod=runTest>, '...Todo: I need to test something\\n')]
>>> result.printErrors() # doctest: +ELLIPSIS
<BLANKLINE>
======================================================================
TODO: runTest (....TestTodo)
----------------------------------------------------------------------
Traceback (most recent call last):
...
...Todo: I need to test something
<BLANKLINE>
Since we defined a Todo as a failure, the run was not successful.
>>> result.wasSuccessful()
False
"""
from nose.pyversion import make_instancemethod
from nose.plugins.base import Plugin
from nose.result import TextTestResult
from nose.util import isclass
class MetaErrorClass(type):
"""Metaclass for ErrorClassPlugins that allows error classes to be
set up in a declarative manner.
"""
def __init__(self, name, bases, attr):
errorClasses = []
for name, detail in attr.items():
if isinstance(detail, ErrorClass):
attr.pop(name)
for cls in detail:
errorClasses.append(
(cls, (name, detail.label, detail.isfailure)))
super(MetaErrorClass, self).__init__(name, bases, attr)
self.errorClasses = tuple(errorClasses)
class ErrorClass(object):
def __init__(self, *errorClasses, **kw):
self.errorClasses = errorClasses
try:
for key in ('label', 'isfailure'):
setattr(self, key, kw.pop(key))
except KeyError:
raise TypeError("%r is a required named argument for ErrorClass"
% key)
def __iter__(self):
return iter(self.errorClasses)
class ErrorClassPlugin(Plugin):
"""
Base class for ErrorClass plugins. Subclass this class and declare the
exceptions that you wish to handle as attributes of the subclass.
"""
__metaclass__ = MetaErrorClass
score = 1000
errorClasses = ()
def addError(self, test, err):
err_cls, a, b = err
if not isclass(err_cls):
return
classes = [e[0] for e in self.errorClasses]
if filter(lambda c: issubclass(err_cls, c), classes):
return True
def prepareTestResult(self, result):
if not hasattr(result, 'errorClasses'):
self.patchResult(result)
for cls, (storage_attr, label, isfail) in self.errorClasses:
if cls not in result.errorClasses:
storage = getattr(result, storage_attr, [])
setattr(result, storage_attr, storage)
result.errorClasses[cls] = (storage, label, isfail)
def patchResult(self, result):
result.printLabel = print_label_patch(result)
result._orig_addError, result.addError = \
result.addError, add_error_patch(result)
result._orig_wasSuccessful, result.wasSuccessful = \
result.wasSuccessful, wassuccessful_patch(result)
if hasattr(result, 'printErrors'):
result._orig_printErrors, result.printErrors = \
result.printErrors, print_errors_patch(result)
if hasattr(result, 'addSkip'):
result._orig_addSkip, result.addSkip = \
result.addSkip, add_skip_patch(result)
result.errorClasses = {}
def add_error_patch(result):
"""Create a new addError method to patch into a result instance
that recognizes the errorClasses attribute and deals with
errorclasses correctly.
"""
return make_instancemethod(TextTestResult.addError, result)
def print_errors_patch(result):
"""Create a new printErrors method that prints errorClasses items
as well.
"""
return make_instancemethod(TextTestResult.printErrors, result)
def print_label_patch(result):
"""Create a new printLabel method that prints errorClasses items
as well.
"""
return make_instancemethod(TextTestResult.printLabel, result)
def wassuccessful_patch(result):
"""Create a new wasSuccessful method that checks errorClasses for
exceptions that were put into other slots than error or failure
but that still count as not success.
"""
return make_instancemethod(TextTestResult.wasSuccessful, result)
def add_skip_patch(result):
"""Create a new addSkip method to patch into a result instance
that delegates to addError.
"""
return make_instancemethod(TextTestResult.addSkip, result)
if __name__ == '__main__':
import doctest
doctest.testmod()
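The declarative mechanism used by `MetaErrorClass` above can be sketched independently of nose. This Python 3 re-rendering (using the `metaclass=` keyword rather than `__metaclass__`, and keeping the marker attribute in place instead of popping it) shows how attribute declarations become the `errorClasses` tuple:

```python
class ErrorClass:
    """Marker attribute: exception classes plus reporting metadata."""
    def __init__(self, *exc_classes, label, isfailure):
        self.exc_classes = exc_classes
        self.label = label
        self.isfailure = isfailure

class MetaErrorClass(type):
    """Collect ErrorClass attributes into an errorClasses tuple."""
    def __init__(cls, name, bases, attrs):
        collected = []
        for attr_name, detail in list(attrs.items()):
            if isinstance(detail, ErrorClass):
                for exc in detail.exc_classes:
                    collected.append(
                        (exc, (attr_name, detail.label, detail.isfailure)))
        super().__init__(name, bases, attrs)
        cls.errorClasses = tuple(collected)

class Todo(Exception):
    pass

class TodoPlugin(metaclass=MetaErrorClass):
    todo = ErrorClass(Todo, label="TODO", isfailure=True)

assert TodoPlugin.errorClasses == ((Todo, ("todo", "TODO", True)),)
```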


@ -0,0 +1,49 @@
"""
This plugin provides assert introspection. When the plugin is enabled
and a test failure occurs, the traceback is displayed with extra context
around the line in which the exception was raised. Simple variable
substitution is also performed in the context output to provide more
debugging information.
"""
from nose.plugins import Plugin
from nose.pyversion import exc_to_unicode, force_unicode
from nose.inspector import inspect_traceback
class FailureDetail(Plugin):
"""
Plugin that provides extra information in tracebacks of test failures.
"""
score = 1600 # before capture
def options(self, parser, env):
        """Register commandline options.
"""
parser.add_option(
"-d", "--detailed-errors", "--failure-detail",
action="store_true",
default=env.get('NOSE_DETAILED_ERRORS'),
dest="detailedErrors", help="Add detail to error"
" output by attempting to evaluate failed"
" asserts [NOSE_DETAILED_ERRORS]")
def configure(self, options, conf):
"""Configure plugin.
"""
if not self.can_configure:
return
self.enabled = options.detailedErrors
self.conf = conf
def formatFailure(self, test, err):
"""Add detail from traceback inspection to error message of a failure.
"""
ec, ev, tb = err
tbinfo, str_ev = None, exc_to_unicode(ev)
if tb:
tbinfo = force_unicode(inspect_traceback(tb))
str_ev = '\n'.join([str_ev, tbinfo])
test.tbinfo = tbinfo
return (ec, str_ev, tb)


@ -0,0 +1,103 @@
"""The isolation plugin resets the contents of sys.modules after running
each test module or package. Use it by setting ``--with-isolation`` or the
NOSE_WITH_ISOLATION environment variable.
The effects are similar to wrapping the following functions around the
import and execution of each test module::
def setup(module):
module._mods = sys.modules.copy()
def teardown(module):
to_del = [ m for m in sys.modules.keys() if m not in
module._mods ]
for mod in to_del:
del sys.modules[mod]
sys.modules.update(module._mods)
Isolation works only during lazy loading. In normal use, this is only
during discovery of modules within a directory, where the process of
importing, loading tests and running tests from each module is
encapsulated in a single loadTestsFromName call. This plugin
implements loadTestsFromNames to force the same lazy-loading there,
which allows isolation to work in directed mode as well as discovery,
at the cost of some efficiency: lazy-loading names forces full context
setup and teardown to run for each name, defeating the grouping that
is normally used to ensure that context setup and teardown are run the
fewest possible times for a given set of names.
.. warning ::
This plugin should not be used in conjunction with other plugins
that assume that modules, once imported, will stay imported; for
instance, it may cause very odd results when used with the coverage
plugin.
"""
import logging
import sys
from nose.plugins import Plugin
log = logging.getLogger('nose.plugins.isolation')
class IsolationPlugin(Plugin):
"""
Activate the isolation plugin to isolate changes to external
modules to a single test module or package. The isolation plugin
resets the contents of sys.modules after each test module or
package runs to its state before the test. PLEASE NOTE that this
plugin should not be used with the coverage plugin, or in any other case
where module reloading may produce undesirable side-effects.
"""
score = 10 # I want to be last
name = 'isolation'
def configure(self, options, conf):
"""Configure plugin.
"""
Plugin.configure(self, options, conf)
self._mod_stack = []
def beforeContext(self):
"""Copy sys.modules onto my mod stack
"""
mods = sys.modules.copy()
self._mod_stack.append(mods)
def afterContext(self):
"""Pop my mod stack and restore sys.modules to the state
it was in when mod stack was pushed.
"""
mods = self._mod_stack.pop()
to_del = [ m for m in sys.modules.keys() if m not in mods ]
if to_del:
log.debug('removing sys modules entries: %s', to_del)
for mod in to_del:
del sys.modules[mod]
sys.modules.update(mods)
def loadTestsFromNames(self, names, module=None):
"""Create a lazy suite that calls beforeContext and afterContext
around each name. The side-effect of this is that full context
fixtures will be set up and torn down around each test named.
"""
# Fast path for when we don't care
if not names or len(names) == 1:
return
loader = self.loader
plugins = self.conf.plugins
def lazy():
for name in names:
plugins.beforeContext()
yield loader.loadTestsFromName(name, module=module)
plugins.afterContext()
return (loader.suiteClass(lazy), [])
def prepareTestLoader(self, loader):
"""Get handle on test loader so we can use it in loadTestsFromNames.
"""
self.loader = loader


@ -0,0 +1,245 @@
"""
This plugin captures logging statements issued during test execution. When an
error or failure occurs, the captured log messages are attached to the running
test in the test.capturedLogging attribute, and displayed with the error or failure
output. It is enabled by default but can be turned off with the option
``--nologcapture``.
You can filter captured logging statements with the ``--logging-filter`` option.
If set, it specifies which logger(s) will be captured; loggers that do not match
will be passed. Example: specifying ``--logging-filter=sqlalchemy,myapp``
will ensure that only statements logged via sqlalchemy.engine, myapp
or myapp.foo.bar logger will be logged.
You can remove other installed logging handlers with the
``--logging-clear-handlers`` option.
"""
import logging
from logging import Handler
import threading
from nose.plugins.base import Plugin
from nose.util import anyp, ln, safe_str
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
log = logging.getLogger(__name__)
class FilterSet(object):
def __init__(self, filter_components):
self.inclusive, self.exclusive = self._partition(filter_components)
# @staticmethod
def _partition(components):
inclusive, exclusive = [], []
for component in components:
if component.startswith('-'):
exclusive.append(component[1:])
else:
inclusive.append(component)
return inclusive, exclusive
_partition = staticmethod(_partition)
def allow(self, record):
"""returns whether this record should be printed"""
if not self:
# nothing to filter
return True
return self._allow(record) and not self._deny(record)
# @staticmethod
def _any_match(matchers, record):
"""return the bool of whether `record` starts with
any item in `matchers`"""
def record_matches_key(key):
return record == key or record.startswith(key + '.')
return anyp(bool, map(record_matches_key, matchers))
_any_match = staticmethod(_any_match)
def _allow(self, record):
if not self.inclusive:
return True
return self._any_match(self.inclusive, record)
def _deny(self, record):
if not self.exclusive:
return False
return self._any_match(self.exclusive, record)
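The matching rules implemented by `FilterSet` above — plain names include a logger and its children, a leading `-` excludes — can be re-sketched compactly in Python 3 (this condensed version drops the Python 2.3 `staticmethod` workaround used in the original):

```python
class FilterSet:
    """Hierarchical logger-name filter: plain names include, '-name' excludes."""
    def __init__(self, components):
        self.inclusive = [c for c in components if not c.startswith("-")]
        self.exclusive = [c[1:] for c in components if c.startswith("-")]

    @staticmethod
    def _matches(keys, name):
        # a key matches the logger itself or any descendant (dotted child)
        return any(name == k or name.startswith(k + ".") for k in keys)

    def allow(self, name):
        if not (self.inclusive or self.exclusive):
            return True  # nothing to filter
        ok = not self.inclusive or self._matches(self.inclusive, name)
        return ok and not self._matches(self.exclusive, name)

fs = FilterSet(["myapp", "-myapp.noisy"])
assert fs.allow("myapp.db")
assert not fs.allow("myapp.noisy.sub")
assert not fs.allow("other")
assert not fs.allow("myapplication")  # prefix alone is not enough
```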
class MyMemoryHandler(Handler):
def __init__(self, logformat, logdatefmt, filters):
Handler.__init__(self)
fmt = logging.Formatter(logformat, logdatefmt)
self.setFormatter(fmt)
self.filterset = FilterSet(filters)
self.buffer = []
def emit(self, record):
self.buffer.append(self.format(record))
def flush(self):
pass # do nothing
def truncate(self):
self.buffer = []
def filter(self, record):
if self.filterset.allow(record.name):
return Handler.filter(self, record)
def __getstate__(self):
state = self.__dict__.copy()
del state['lock']
return state
def __setstate__(self, state):
self.__dict__.update(state)
self.lock = threading.RLock()
class LogCapture(Plugin):
"""
Log capture plugin. Enabled by default. Disable with --nologcapture.
This plugin captures logging statements issued during test execution,
appending any output captured to the error or failure output,
should the test fail or raise an error.
"""
enabled = True
env_opt = 'NOSE_NOLOGCAPTURE'
name = 'logcapture'
score = 500
logformat = '%(name)s: %(levelname)s: %(message)s'
logdatefmt = None
clear = False
filters = ['-nose']
def options(self, parser, env):
"""Register commandline options.
"""
parser.add_option(
"--nologcapture", action="store_false",
default=not env.get(self.env_opt), dest="logcapture",
help="Disable logging capture plugin. "
"Logging configuration will be left intact."
" [NOSE_NOLOGCAPTURE]")
parser.add_option(
"--logging-format", action="store", dest="logcapture_format",
default=env.get('NOSE_LOGFORMAT') or self.logformat,
metavar="FORMAT",
help="Specify custom format to print statements. "
"Uses the same format as used by standard logging handlers."
" [NOSE_LOGFORMAT]")
parser.add_option(
"--logging-datefmt", action="store", dest="logcapture_datefmt",
default=env.get('NOSE_LOGDATEFMT') or self.logdatefmt,
metavar="FORMAT",
help="Specify custom date/time format to print statements. "
"Uses the same format as used by standard logging handlers."
" [NOSE_LOGDATEFMT]")
parser.add_option(
"--logging-filter", action="store", dest="logcapture_filters",
default=env.get('NOSE_LOGFILTER'),
metavar="FILTER",
help="Specify which statements to filter in/out. "
"By default, everything is captured. If the output is too"
" verbose,\nuse this option to filter out needless output.\n"
"Example: filter=foo will capture statements issued ONLY to\n"
" foo or foo.what.ever.sub but not foobar or other logger.\n"
"Specify multiple loggers with comma: filter=foo,bar,baz.\n"
"If any logger name is prefixed with a minus, eg filter=-foo,\n"
"it will be excluded rather than included. Default: "
"exclude logging messages from nose itself (-nose)."
" [NOSE_LOGFILTER]\n")
parser.add_option(
"--logging-clear-handlers", action="store_true",
default=False, dest="logcapture_clear",
help="Clear all other logging handlers")
parser.add_option(
"--logging-level", action="store",
default='NOTSET', dest="logcapture_level",
help="Set the log level to capture")
def configure(self, options, conf):
"""Configure plugin.
"""
self.conf = conf
# Disable if explicitly disabled, or if logging is
# configured via logging config file
if not options.logcapture or conf.loggingConfig:
self.enabled = False
self.logformat = options.logcapture_format
self.logdatefmt = options.logcapture_datefmt
self.clear = options.logcapture_clear
self.loglevel = options.logcapture_level
if options.logcapture_filters:
self.filters = options.logcapture_filters.split(',')
def setupLoghandler(self):
# setup our handler with root logger
root_logger = logging.getLogger()
if self.clear:
if hasattr(root_logger, "handlers"):
for handler in root_logger.handlers:
root_logger.removeHandler(handler)
for logger in logging.Logger.manager.loggerDict.values():
if hasattr(logger, "handlers"):
for handler in logger.handlers:
logger.removeHandler(handler)
# make sure there isn't one already
# you can't simply use "if self.handler not in root_logger.handlers"
# since at least in unit tests this doesn't work --
# LogCapture() is instantiated for each test case while root_logger
# is module global
# so we always add new MyMemoryHandler instance
for handler in root_logger.handlers[:]:
if isinstance(handler, MyMemoryHandler):
root_logger.handlers.remove(handler)
root_logger.addHandler(self.handler)
# to make sure everything gets captured
loglevel = getattr(self, "loglevel", "NOTSET")
root_logger.setLevel(getattr(logging, loglevel))
def begin(self):
"""Set up logging handler before test run begins.
"""
self.start()
def start(self):
self.handler = MyMemoryHandler(self.logformat, self.logdatefmt,
self.filters)
self.setupLoghandler()
def end(self):
pass
def beforeTest(self, test):
"""Clear buffers and handlers before test.
"""
self.setupLoghandler()
def afterTest(self, test):
"""Clear buffers after test.
"""
self.handler.truncate()
def formatFailure(self, test, err):
"""Add captured log messages to failure output.
"""
return self.formatError(test, err)
def formatError(self, test, err):
"""Add captured log messages to error output.
"""
# logic flow copied from Capture.formatError
test.capturedLogging = records = self.formatLogRecords()
if not records:
return err
ec, ev, tb = err
return (ec, self.addCaptureToErr(ev, records), tb)
def formatLogRecords(self):
return map(safe_str, self.handler.buffer)
def addCaptureToErr(self, ev, records):
return '\n'.join([safe_str(ev), ln('>> begin captured logging <<')] + \
records + \
[ln('>> end captured logging <<')])
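The core capture trick above — a `logging.Handler` whose `emit` appends formatted records to an in-memory list attached to the root logger — works standalone with the stdlib; a minimal sketch, without the filtering and pickling support of `MyMemoryHandler`:

```python
import logging

class MemoryHandler(logging.Handler):
    """Buffer formatted records in memory instead of emitting them."""
    def __init__(self):
        logging.Handler.__init__(self)
        self.setFormatter(logging.Formatter(
            "%(name)s: %(levelname)s: %(message)s"))
        self.buffer = []

    def emit(self, record):
        self.buffer.append(self.format(record))

handler = MemoryHandler()
root = logging.getLogger()
root.addHandler(handler)          # capture everything routed to root
logging.getLogger("myapp").warning("something odd")
root.removeHandler(handler)
assert handler.buffer == ["myapp: WARNING: something odd"]
```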


@ -0,0 +1,460 @@
"""
Plugin Manager
--------------
A plugin manager class is used to load plugins, manage the list of
loaded plugins, and proxy calls to those plugins.
The plugin managers provided with nose are:
:class:`PluginManager`
This manager doesn't implement loadPlugins, so it can only work
with a static list of plugins.
:class:`BuiltinPluginManager`
This manager loads plugins referenced in ``nose.plugins.builtin``.
:class:`EntryPointPluginManager`
This manager uses setuptools entrypoints to load plugins.
:class:`ExtraPluginsPluginManager`
This manager loads extra plugins specified with the keyword
`addplugins`.
:class:`DefaultPluginManager`
This is the manager class that will be used by default. If
setuptools is installed, it is a subclass of
:class:`EntryPointPluginManager` and :class:`BuiltinPluginManager`;
otherwise, an alias to :class:`BuiltinPluginManager`.
:class:`RestrictedPluginManager`
This manager is for use in test runs where some plugin calls are
not available, such as runs started with ``python setup.py test``,
where the test runner is the default unittest :class:`TextTestRunner`. It
is a subclass of :class:`DefaultPluginManager`.
Writing a plugin manager
========================
If you want to load plugins via some other means, you can write a
plugin manager and pass an instance of your plugin manager class when
instantiating the :class:`nose.config.Config` instance that you pass to
:class:`TestProgram` (or :func:`main` or :func:`run`).
To implement your plugin loading scheme, implement ``loadPlugins()``,
and in that method, call ``addPlugin()`` with an instance of each plugin
you wish to make available. Make sure to call
``super(YourClass, self).loadPlugins()`` as well if you have subclassed a manager
other than ``PluginManager``.
"""
import inspect
import logging
import os
import sys
from itertools import chain as iterchain
from warnings import warn
import nose.config
from nose.failure import Failure
from nose.plugins.base import IPluginInterface
from nose.pyversion import sort_list
try:
import cPickle as pickle
except:
import pickle
try:
from cStringIO import StringIO
except:
from StringIO import StringIO
__all__ = ['DefaultPluginManager', 'PluginManager', 'EntryPointPluginManager',
'BuiltinPluginManager', 'RestrictedPluginManager']
log = logging.getLogger(__name__)
class PluginProxy(object):
"""Proxy for plugin calls. Essentially a closure bound to the
given call and plugin list.
The plugin proxy also must be bound to a particular plugin
interface specification, so that it knows what calls are available
and any special handling that is required for each call.
"""
interface = IPluginInterface
def __init__(self, call, plugins):
try:
self.method = getattr(self.interface, call)
except AttributeError:
raise AttributeError("%s is not a valid %s method"
% (call, self.interface.__name__))
self.call = self.makeCall(call)
self.plugins = []
for p in plugins:
self.addPlugin(p, call)
def __call__(self, *arg, **kw):
return self.call(*arg, **kw)
def addPlugin(self, plugin, call):
"""Add plugin to my list of plugins to call, if it has the attribute
I'm bound to.
"""
meth = getattr(plugin, call, None)
if meth is not None:
if call == 'loadTestsFromModule' and \
len(inspect.getargspec(meth)[0]) == 2:
orig_meth = meth
meth = lambda module, path, **kwargs: orig_meth(module)
self.plugins.append((plugin, meth))
def makeCall(self, call):
if call == 'loadTestsFromNames':
# special case -- load tests from names behaves somewhat differently
# from other chainable calls, because plugins return a tuple, only
# part of which can be chained to the next plugin.
return self._loadTestsFromNames
meth = self.method
if getattr(meth, 'generative', False):
# call all plugins and yield a flattened iterator of their results
return lambda *arg, **kw: list(self.generate(*arg, **kw))
elif getattr(meth, 'chainable', False):
return self.chain
else:
# return a value from the first plugin that returns non-None
return self.simple
def chain(self, *arg, **kw):
"""Call plugins in a chain, where the result of each plugin call is
sent to the next plugin as input. The final output result is returned.
"""
result = None
# extract the static arguments (if any) from arg so they can
# be passed to each plugin call in the chain
static = [a for (static, a)
in zip(getattr(self.method, 'static_args', []), arg)
if static]
for p, meth in self.plugins:
result = meth(*arg, **kw)
arg = static[:]
arg.append(result)
return result
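Stripped of the static-argument bookkeeping, the chaining behavior above reduces to feeding each plugin's return value to the next; a simplified sketch of that flow (`chain` here is illustrative, not the nose API):

```python
def chain(plugins, value):
    """Pass value through each plugin; each sees the previous one's output."""
    for plugin in plugins:
        value = plugin(value)
    return value

# e.g. formatError-style transforms applied in plugin order
add_prefix = lambda s: "[pre] " + s
add_suffix = lambda s: s + " [post]"
assert chain([add_prefix, add_suffix], "msg") == "[pre] msg [post]"
```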
def generate(self, *arg, **kw):
"""Call all plugins, yielding each item in each non-None result.
"""
for p, meth in self.plugins:
result = None
try:
result = meth(*arg, **kw)
if result is not None:
for r in result:
yield r
except (KeyboardInterrupt, SystemExit):
raise
except:
exc = sys.exc_info()
yield Failure(*exc)
continue
def simple(self, *arg, **kw):
"""Call all plugins, returning the first non-None result.
"""
for p, meth in self.plugins:
result = meth(*arg, **kw)
if result is not None:
return result
def _loadTestsFromNames(self, names, module=None):
"""Chainable but not quite normal. Plugins return a tuple of
(tests, names) after processing the names. The tests are added
to a suite that is accumulated throughout the full call, while
names are input for the next plugin in the chain.
"""
suite = []
for p, meth in self.plugins:
result = meth(names, module=module)
if result is not None:
suite_part, names = result
if suite_part:
suite.extend(suite_part)
return suite, names
class NoPlugins(object):
"""Null Plugin manager that has no plugins."""
interface = IPluginInterface
def __init__(self):
self._plugins = self.plugins = ()
def __iter__(self):
        return iter(())
def _doNothing(self, *args, **kwds):
pass
def _emptyIterator(self, *args, **kwds):
return ()
def __getattr__(self, call):
method = getattr(self.interface, call)
if getattr(method, "generative", False):
return self._emptyIterator
else:
return self._doNothing
def addPlugin(self, plug):
raise NotImplementedError()
def addPlugins(self, plugins):
raise NotImplementedError()
def configure(self, options, config):
pass
def loadPlugins(self):
pass
def sort(self):
pass
class PluginManager(object):
"""Base class for plugin managers. PluginManager is intended to be
used only with a static list of plugins. The loadPlugins() implementation
only reloads plugins from _extraplugins to prevent those from being
overridden by a subclass.
The basic functionality of a plugin manager is to proxy all unknown
attributes through a ``PluginProxy`` to a list of plugins.
Note that the list of plugins *may not* be changed after the first plugin
call.
"""
proxyClass = PluginProxy
def __init__(self, plugins=(), proxyClass=None):
self._plugins = []
self._extraplugins = ()
self._proxies = {}
if plugins:
self.addPlugins(plugins)
if proxyClass is not None:
self.proxyClass = proxyClass
def __getattr__(self, call):
try:
return self._proxies[call]
except KeyError:
proxy = self.proxyClass(call, self._plugins)
self._proxies[call] = proxy
return proxy
def __iter__(self):
return iter(self.plugins)
def addPlugin(self, plug):
# allow, for instance, plugins loaded via entry points to
# supplant builtin plugins.
new_name = getattr(plug, 'name', object())
self._plugins[:] = [p for p in self._plugins
if getattr(p, 'name', None) != new_name]
self._plugins.append(plug)
def addPlugins(self, plugins=(), extraplugins=()):
"""extraplugins are maintained in a separate list and
re-added by loadPlugins() to prevent their being overwritten
by plugins added by a subclass of PluginManager
"""
self._extraplugins = extraplugins
for plug in iterchain(plugins, extraplugins):
self.addPlugin(plug)
def configure(self, options, config):
"""Configure the set of plugins with the given options
and config instance. After configuration, disabled plugins
are removed from the plugins list.
"""
log.debug("Configuring plugins")
self.config = config
cfg = PluginProxy('configure', self._plugins)
cfg(options, config)
enabled = [plug for plug in self._plugins if plug.enabled]
self.plugins = enabled
self.sort()
log.debug("Plugins enabled: %s", enabled)
def loadPlugins(self):
for plug in self._extraplugins:
self.addPlugin(plug)
def sort(self):
return sort_list(self._plugins, lambda x: getattr(x, 'score', 1), reverse=True)
def _get_plugins(self):
return self._plugins
def _set_plugins(self, plugins):
self._plugins = []
self.addPlugins(plugins)
plugins = property(_get_plugins, _set_plugins, None,
"""Access the list of plugins managed by
this plugin manager""")
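A stdlib-only sketch of the manager's `__getattr__` proxying above, with one cached proxy per hook name (all class names here are hypothetical):

```python
# Hypothetical sketch: unknown attribute lookups become cached call proxies.
class CallProxy:
    def __init__(self, call, plugins):
        self.call, self.plugins = call, plugins
    def __call__(self, *args, **kw):
        for plug in self.plugins:
            meth = getattr(plug, self.call, None)
            if meth is not None:
                meth(*args, **kw)

class Manager:
    def __init__(self, plugins):
        self._plugins = list(plugins)
        self._proxies = {}
    def __getattr__(self, call):
        # only reached for attributes not found through normal lookup
        try:
            return self._proxies[call]
        except KeyError:
            proxy = CallProxy(call, self._plugins)
            self._proxies[call] = proxy
            return proxy

seen = []
class Recorder:
    def begin(self):
        seen.append("begin")

mgr = Manager([Recorder()])
mgr.begin()   # proxied to every plugin that implements begin()
print(seen)   # ['begin']
```

Caching the proxies matters because hook attributes are looked up on every test event; building a fresh proxy per call would be wasteful.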
class ZeroNinePlugin:
"""Proxy for 0.9 plugins, adapts 0.10 calls to 0.9 standard.
"""
def __init__(self, plugin):
self.plugin = plugin
def options(self, parser, env=os.environ):
self.plugin.add_options(parser, env)
def addError(self, test, err):
if not hasattr(self.plugin, 'addError'):
return
# switch off to addSkip, addDeprecated if those types
from nose.exc import SkipTest, DeprecatedTest
ec, ev, tb = err
if issubclass(ec, SkipTest):
if not hasattr(self.plugin, 'addSkip'):
return
return self.plugin.addSkip(test.test)
elif issubclass(ec, DeprecatedTest):
if not hasattr(self.plugin, 'addDeprecated'):
return
return self.plugin.addDeprecated(test.test)
# add capt
capt = test.capturedOutput
return self.plugin.addError(test.test, err, capt)
def loadTestsFromFile(self, filename):
if hasattr(self.plugin, 'loadTestsFromPath'):
return self.plugin.loadTestsFromPath(filename)
def addFailure(self, test, err):
if not hasattr(self.plugin, 'addFailure'):
return
# add capt and tbinfo
capt = test.capturedOutput
tbinfo = test.tbinfo
return self.plugin.addFailure(test.test, err, capt, tbinfo)
def addSuccess(self, test):
if not hasattr(self.plugin, 'addSuccess'):
return
capt = test.capturedOutput
self.plugin.addSuccess(test.test, capt)
def startTest(self, test):
if not hasattr(self.plugin, 'startTest'):
return
return self.plugin.startTest(test.test)
def stopTest(self, test):
if not hasattr(self.plugin, 'stopTest'):
return
return self.plugin.stopTest(test.test)
def __getattr__(self, val):
return getattr(self.plugin, val)
class EntryPointPluginManager(PluginManager):
"""Plugin manager that loads plugins from the `nose.plugins` and
`nose.plugins.0.10` entry points.
"""
entry_points = (('nose.plugins.0.10', None),
('nose.plugins', ZeroNinePlugin))
def loadPlugins(self):
"""Load plugins by iterating the `nose.plugins` entry point.
"""
from pkg_resources import iter_entry_points
loaded = {}
for entry_point, adapt in self.entry_points:
for ep in iter_entry_points(entry_point):
if ep.name in loaded:
continue
loaded[ep.name] = True
log.debug('%s load plugin %s', self.__class__.__name__, ep)
try:
plugcls = ep.load()
except KeyboardInterrupt:
raise
except Exception, e:
# never want a plugin load to kill the test run
# but we can't log here because the logger is not yet
# configured
warn("Unable to load plugin %s: %s" % (ep, e),
RuntimeWarning)
continue
if adapt:
plug = adapt(plugcls())
else:
plug = plugcls()
self.addPlugin(plug)
super(EntryPointPluginManager, self).loadPlugins()
class BuiltinPluginManager(PluginManager):
"""Plugin manager that loads plugins from the list in
`nose.plugins.builtin`.
"""
def loadPlugins(self):
"""Load plugins in nose.plugins.builtin
"""
from nose.plugins import builtin
for plug in builtin.plugins:
self.addPlugin(plug())
super(BuiltinPluginManager, self).loadPlugins()
try:
import pkg_resources
class DefaultPluginManager(EntryPointPluginManager, BuiltinPluginManager):
pass
except ImportError:
class DefaultPluginManager(BuiltinPluginManager):
pass
class RestrictedPluginManager(DefaultPluginManager):
"""Plugin manager that restricts the plugin list to those not
excluded by a list of exclude methods. Any plugin that implements
an excluded method will be removed from the manager's plugin list
after plugins are loaded.
"""
def __init__(self, plugins=(), exclude=(), load=True):
DefaultPluginManager.__init__(self, plugins)
self.load = load
self.exclude = exclude
self.excluded = []
self._excludedOpts = None
def excludedOption(self, name):
if self._excludedOpts is None:
from optparse import OptionParser
self._excludedOpts = OptionParser(add_help_option=False)
for plugin in self.excluded:
plugin.options(self._excludedOpts, env={})
return self._excludedOpts.get_option('--' + name)
def loadPlugins(self):
if self.load:
DefaultPluginManager.loadPlugins(self)
allow = []
for plugin in self.plugins:
ok = True
for method in self.exclude:
if hasattr(plugin, method):
ok = False
self.excluded.append(plugin)
break
if ok:
allow.append(plugin)
self.plugins = allow

@@ -0,0 +1,835 @@
"""
Overview
========
The multiprocess plugin enables you to distribute your test run among a set of
worker processes that run tests in parallel. This can speed up CPU-bound test
runs (as long as the number of worker processes is around the number of
processors or cores available), but is mainly useful for IO-bound tests that
spend most of their time waiting for data to arrive from someplace else.
.. note ::
See :doc:`../doc_tests/test_multiprocess/multiprocess` for
additional documentation and examples. Use of this plugin on python
2.5 or earlier requires the multiprocessing_ module, also available
from PyPI.
.. _multiprocessing : http://code.google.com/p/python-multiprocessing/
How tests are distributed
=========================
The ideal case would be to dispatch each test to a worker process
separately. This ideal is not attainable in all cases, however, because many
test suites depend on context (class, module or package) fixtures.
The plugin can't know (unless you tell it -- see below!) if a context fixture
can be called many times concurrently (is re-entrant), or if it can be shared
among tests running in different processes. Therefore, if a context has
fixtures, the default behavior is to dispatch the entire suite to a worker as
a unit.
Controlling distribution
^^^^^^^^^^^^^^^^^^^^^^^^
There are two context-level variables that you can use to control this default
behavior.
If a context's fixtures are re-entrant, set ``_multiprocess_can_split_ = True``
in the context, and the plugin will dispatch tests in suites bound to that
context as if the context had no fixtures. This means that the fixtures will
execute concurrently and multiple times, typically once per test.
If a context's fixtures can be shared by tests running in different processes
-- such as a package-level fixture that starts an external http server or
initializes a shared database -- then set ``_multiprocess_shared_ = True`` in
the context. These fixtures will then execute in the primary nose process, and
tests in those contexts will be individually dispatched to run in parallel.
How results are collected and reported
======================================
As each test or suite executes in a worker process, results (failures, errors,
and specially handled exceptions like SkipTest) are collected in that
process. When the worker process finishes, it returns results to the main
nose process. There, any progress output is printed (dots!), and the
results from the test run are combined into a consolidated result
set. When results have been received for all dispatched tests, or all
workers have died, the result summary is output as normal.
Beware!
=======
Not all test suites will benefit from, or even operate correctly using, this
plugin. For example, CPU-bound tests will run more slowly if you don't have
multiple processors. There are also some differences in plugin
interactions and behaviors due to the way in which tests are dispatched and
loaded. In general, test loading under this plugin operates as if it were
always in directed mode instead of discovered mode. For instance, doctests
in test modules will always be found when using this plugin with the doctest
plugin.
But the biggest issue you will face is probably concurrency. Unless you
have kept your tests as religiously pure unit tests, with no side-effects, no
ordering issues, and no external dependencies, chances are you will experience
odd, intermittent and unexplainable failures and errors when using this
plugin. This doesn't necessarily mean the plugin is broken; it may mean that
your test suite is not safe for concurrency.
New Features in 1.1.0
=====================
* functions generated by test generators are now added to the worker queue,
  so they are distributed across the worker processes.
* fixed timeout functionality, now functions will be terminated with a
TimedOutException exception when they exceed their execution time. The
worker processes are not terminated.
* added ``--process-restartworker`` option to restart workers once they are
  done; this helps control memory usage, since accumulated memory leaks can
  make long runs very difficult.
* added global _instantiate_plugins to configure which plugins are started
on the worker processes.
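The ``_instantiate_plugins`` hook amounts to a module-level list of plugin classes that each worker instantiates at startup; a stdlib-only sketch of the mechanism (the plugin class here is hypothetical):

```python
# hypothetical sketch of the worker-side plugin instantiation hook
_instantiate_plugins = None

class FakePlugin:
    def __init__(self):
        self.configured = True

def worker_startup():
    # mirrors what each worker does: instantiate every configured class
    if _instantiate_plugins is None:
        return []
    return [cls() for cls in _instantiate_plugins]

_instantiate_plugins = [FakePlugin]
plugins = worker_startup()
print(len(plugins))  # 1
```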
"""
import logging
import os
import sys
import time
import traceback
import unittest
import pickle
import signal
import nose.case
from nose.core import TextTestRunner
from nose import failure
from nose import loader
from nose.plugins.base import Plugin
from nose.pyversion import bytes_
from nose.result import TextTestResult
from nose.suite import ContextSuite
from nose.util import test_address
try:
# 2.7+
from unittest.runner import _WritelnDecorator
except ImportError:
from unittest import _WritelnDecorator
from Queue import Empty
from warnings import warn
try:
from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO
# this is a list of plugin classes that will be checked for and created inside
# each worker process
_instantiate_plugins = None
log = logging.getLogger(__name__)
Process = Queue = Pool = Event = Value = Array = None
# have to inherit KeyboardInterrupt so it will interrupt the process properly
class TimedOutException(KeyboardInterrupt):
def __init__(self, value = "Timed Out"):
self.value = value
def __str__(self):
return repr(self.value)
def _import_mp():
global Process, Queue, Pool, Event, Value, Array
try:
from multiprocessing import Manager, Process
        #prevent the server process created in the manager (which holds
        #Python objects and allows other processes to manipulate them via
        #proxies) from being interrupted on SIGINT (KeyboardInterrupt), so
        #that the communication channel between subprocesses and the main
        #process is still usable after ctrl+C is received in the main process
old=signal.signal(signal.SIGINT, signal.SIG_IGN)
m = Manager()
#reset it back so main process will receive a KeyboardInterrupt
#exception on ctrl+c
signal.signal(signal.SIGINT, old)
Queue, Pool, Event, Value, Array = (
m.Queue, m.Pool, m.Event, m.Value, m.Array
)
except ImportError:
warn("multiprocessing module is not available, multiprocess plugin "
"cannot be used", RuntimeWarning)
class TestLet:
def __init__(self, case):
try:
self._id = case.id()
except AttributeError:
pass
self._short_description = case.shortDescription()
self._str = str(case)
def id(self):
return self._id
def shortDescription(self):
return self._short_description
def __str__(self):
return self._str
class MultiProcess(Plugin):
"""
Run tests in multiple processes. Requires processing module.
"""
score = 1000
status = {}
def options(self, parser, env):
"""
Register command-line options.
"""
parser.add_option("--processes", action="store",
default=env.get('NOSE_PROCESSES', 0),
dest="multiprocess_workers",
metavar="NUM",
help="Spread test run among this many processes. "
"Set a number equal to the number of processors "
"or cores in your machine for best results. "
"Pass a negative number to have the number of "
"processes automatically set to the number of "
"cores. Passing 0 means to disable parallel "
"testing. Default is 0 unless NOSE_PROCESSES is "
"set. "
"[NOSE_PROCESSES]")
parser.add_option("--process-timeout", action="store",
default=env.get('NOSE_PROCESS_TIMEOUT', 10),
dest="multiprocess_timeout",
metavar="SECONDS",
help="Set timeout for return of results from each "
"test runner process. Default is 10. "
"[NOSE_PROCESS_TIMEOUT]")
parser.add_option("--process-restartworker", action="store_true",
default=env.get('NOSE_PROCESS_RESTARTWORKER', False),
dest="multiprocess_restartworker",
                          help="If set, will restart each worker process once"
                          " its tests are done; this helps keep memory "
                          "leaks from killing the system. "
"[NOSE_PROCESS_RESTARTWORKER]")
def configure(self, options, config):
"""
Configure plugin.
"""
try:
self.status.pop('active')
except KeyError:
pass
if not hasattr(options, 'multiprocess_workers'):
self.enabled = False
return
# don't start inside of a worker process
if config.worker:
return
self.config = config
try:
workers = int(options.multiprocess_workers)
except (TypeError, ValueError):
workers = 0
if workers:
_import_mp()
if Process is None:
self.enabled = False
return
# Negative number of workers will cause multiprocessing to hang.
# Set the number of workers to the CPU count to avoid this.
if workers < 0:
try:
import multiprocessing
workers = multiprocessing.cpu_count()
except NotImplementedError:
self.enabled = False
return
self.enabled = True
self.config.multiprocess_workers = workers
t = float(options.multiprocess_timeout)
self.config.multiprocess_timeout = t
r = int(options.multiprocess_restartworker)
self.config.multiprocess_restartworker = r
self.status['active'] = True
def prepareTestLoader(self, loader):
"""Remember loader class so MultiProcessTestRunner can instantiate
the right loader.
"""
self.loaderClass = loader.__class__
def prepareTestRunner(self, runner):
"""Replace test runner with MultiProcessTestRunner.
"""
# replace with our runner class
return MultiProcessTestRunner(stream=runner.stream,
verbosity=self.config.verbosity,
config=self.config,
loaderClass=self.loaderClass)
def signalhandler(sig, frame):
raise TimedOutException()
class MultiProcessTestRunner(TextTestRunner):
waitkilltime = 5.0 # max time to wait to terminate a process that does not
# respond to SIGILL
def __init__(self, **kw):
self.loaderClass = kw.pop('loaderClass', loader.defaultTestLoader)
super(MultiProcessTestRunner, self).__init__(**kw)
def collect(self, test, testQueue, tasks, to_teardown, result):
# dispatch and collect results
# put indexes only on queue because tests aren't picklable
for case in self.nextBatch(test):
log.debug("Next batch %s (%s)", case, type(case))
if (isinstance(case, nose.case.Test) and
isinstance(case.test, failure.Failure)):
log.debug("Case is a Failure")
case(result) # run here to capture the failure
continue
# handle shared fixtures
if isinstance(case, ContextSuite) and case.context is failure.Failure:
log.debug("Case is a Failure")
case(result) # run here to capture the failure
continue
elif isinstance(case, ContextSuite) and self.sharedFixtures(case):
log.debug("%s has shared fixtures", case)
try:
case.setUp()
except (KeyboardInterrupt, SystemExit):
raise
except:
log.debug("%s setup failed", sys.exc_info())
result.addError(case, sys.exc_info())
else:
to_teardown.append(case)
if case.factory:
ancestors=case.factory.context.get(case, [])
for an in ancestors[:2]:
#log.debug('reset ancestor %s', an)
if getattr(an, '_multiprocess_shared_', False):
an._multiprocess_can_split_=True
#an._multiprocess_shared_=False
self.collect(case, testQueue, tasks, to_teardown, result)
else:
test_addr = self.addtask(testQueue,tasks,case)
log.debug("Queued test %s (%s) to %s",
len(tasks), test_addr, testQueue)
def startProcess(self, iworker, testQueue, resultQueue, shouldStop, result):
currentaddr = Value('c',bytes_(''))
currentstart = Value('d',time.time())
keyboardCaught = Event()
p = Process(target=runner,
args=(iworker, testQueue,
resultQueue,
currentaddr,
currentstart,
keyboardCaught,
shouldStop,
self.loaderClass,
result.__class__,
pickle.dumps(self.config)))
p.currentaddr = currentaddr
p.currentstart = currentstart
p.keyboardCaught = keyboardCaught
old = signal.signal(signal.SIGILL, signalhandler)
p.start()
signal.signal(signal.SIGILL, old)
return p
def run(self, test):
"""
Execute the test (which may be a test suite). If the test is a suite,
distribute it out among as many processes as have been configured, at
as fine a level as is possible given the context fixtures defined in
the suite or any sub-suites.
"""
log.debug("%s.run(%s) (%s)", self, test, os.getpid())
wrapper = self.config.plugins.prepareTest(test)
if wrapper is not None:
test = wrapper
# plugins can decorate or capture the output stream
wrapped = self.config.plugins.setOutputStream(self.stream)
if wrapped is not None:
self.stream = wrapped
testQueue = Queue()
resultQueue = Queue()
tasks = []
completed = []
workers = []
to_teardown = []
shouldStop = Event()
result = self._makeResult()
start = time.time()
self.collect(test, testQueue, tasks, to_teardown, result)
log.debug("Starting %s workers", self.config.multiprocess_workers)
for i in range(self.config.multiprocess_workers):
p = self.startProcess(i, testQueue, resultQueue, shouldStop, result)
workers.append(p)
log.debug("Started worker process %s", i+1)
total_tasks = len(tasks)
# need to keep track of the next time to check for timeouts in case
# more than one process times out at the same time.
nexttimeout=self.config.multiprocess_timeout
thrownError = None
try:
while tasks:
log.debug("Waiting for results (%s/%s tasks), next timeout=%.3fs",
len(completed), total_tasks,nexttimeout)
try:
iworker, addr, newtask_addrs, batch_result = resultQueue.get(
timeout=nexttimeout)
log.debug('Results received for worker %d, %s, new tasks: %d',
iworker,addr,len(newtask_addrs))
try:
try:
tasks.remove(addr)
except ValueError:
log.warn('worker %s failed to remove from tasks: %s',
iworker,addr)
total_tasks += len(newtask_addrs)
tasks.extend(newtask_addrs)
except KeyError:
log.debug("Got result for unknown task? %s", addr)
log.debug("current: %s",str(list(tasks)[0]))
else:
completed.append([addr,batch_result])
self.consolidate(result, batch_result)
if (self.config.stopOnError
and not result.wasSuccessful()):
# set the stop condition
shouldStop.set()
break
if self.config.multiprocess_restartworker:
log.debug('joining worker %s',iworker)
                            # wait for the worker; it is not critical if it
                            # cannot be joined -- workers that add to
                            # testQueue will not terminate until all their
                            # items are read
workers[iworker].join(timeout=1)
if not shouldStop.is_set() and not testQueue.empty():
log.debug('starting new process on worker %s',iworker)
workers[iworker] = self.startProcess(iworker, testQueue, resultQueue, shouldStop, result)
except Empty:
log.debug("Timed out with %s tasks pending "
"(empty testQueue=%r): %s",
len(tasks),testQueue.empty(),str(tasks))
any_alive = False
for iworker, w in enumerate(workers):
if w.is_alive():
worker_addr = bytes_(w.currentaddr.value,'ascii')
timeprocessing = time.time() - w.currentstart.value
if ( len(worker_addr) == 0
and timeprocessing > self.config.multiprocess_timeout-0.1):
log.debug('worker %d has finished its work item, '
'but is not exiting? do we wait for it?',
iworker)
else:
any_alive = True
if (len(worker_addr) > 0
and timeprocessing > self.config.multiprocess_timeout-0.1):
log.debug('timed out worker %s: %s',
iworker,worker_addr)
w.currentaddr.value = bytes_('')
                            # If the process is in C++ code, sending a SIGILL
                            # might not raise a Python KeyboardInterrupt
                            # exception; therefore, send repeated signals
                            # until an exception is caught. If this takes
                            # too long, terminate the process.
w.keyboardCaught.clear()
startkilltime = time.time()
while not w.keyboardCaught.is_set() and w.is_alive():
if time.time()-startkilltime > self.waitkilltime:
# have to terminate...
log.error("terminating worker %s",iworker)
w.terminate()
# there is a small probability that the
# terminated process might send a result,
# which has to be specially handled or
# else processes might get orphaned.
workers[iworker] = w = self.startProcess(iworker, testQueue, resultQueue, shouldStop, result)
break
os.kill(w.pid, signal.SIGILL)
time.sleep(0.1)
if not any_alive and testQueue.empty():
log.debug("All workers dead")
break
nexttimeout=self.config.multiprocess_timeout
for w in workers:
if w.is_alive() and len(w.currentaddr.value) > 0:
timeprocessing = time.time()-w.currentstart.value
if timeprocessing <= self.config.multiprocess_timeout:
nexttimeout = min(nexttimeout,
self.config.multiprocess_timeout-timeprocessing)
log.debug("Completed %s tasks (%s remain)", len(completed), len(tasks))
except (KeyboardInterrupt, SystemExit), e:
log.info('parent received ctrl-c when waiting for test results')
thrownError = e
#resultQueue.get(False)
result.addError(test, sys.exc_info())
try:
for case in to_teardown:
log.debug("Tearing down shared fixtures for %s", case)
try:
case.tearDown()
except (KeyboardInterrupt, SystemExit):
raise
except:
result.addError(case, sys.exc_info())
stop = time.time()
            # write results out first, since shutting down processes can freeze
result.printErrors()
result.printSummary(start, stop)
self.config.plugins.finalize(result)
if thrownError is None:
log.debug("Tell all workers to stop")
for w in workers:
if w.is_alive():
testQueue.put('STOP', block=False)
# wait for the workers to end
for iworker,worker in enumerate(workers):
if worker.is_alive():
log.debug('joining worker %s',iworker)
worker.join()
if worker.is_alive():
log.debug('failed to join worker %s',iworker)
except (KeyboardInterrupt, SystemExit):
log.info('parent received ctrl-c when shutting down: stop all processes')
for worker in workers:
if worker.is_alive():
worker.terminate()
if thrownError: raise thrownError
else: raise
return result
def addtask(testQueue,tasks,case):
arg = None
if isinstance(case,nose.case.Test) and hasattr(case.test,'arg'):
# this removes the top level descriptor and allows real function
# name to be returned
case.test.descriptor = None
arg = case.test.arg
test_addr = MultiProcessTestRunner.address(case)
testQueue.put((test_addr,arg), block=False)
if arg is not None:
test_addr += str(arg)
if tasks is not None:
tasks.append(test_addr)
return test_addr
addtask = staticmethod(addtask)
def address(case):
if hasattr(case, 'address'):
file, mod, call = case.address()
elif hasattr(case, 'context'):
file, mod, call = test_address(case.context)
else:
raise Exception("Unable to convert %s to address" % case)
parts = []
if file is None:
if mod is None:
raise Exception("Unaddressable case %s" % case)
else:
parts.append(mod)
else:
# strip __init__.py(c) from end of file part
# if present, having it there confuses loader
dirname, basename = os.path.split(file)
if basename.startswith('__init__'):
file = dirname
parts.append(file)
if call is not None:
parts.append(call)
return ':'.join(map(str, parts))
address = staticmethod(address)
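The address built above is the colon-joined `file:module:call` string that nose can later resolve back into a test; a small sketch of the joining rules (the inputs below are hypothetical):

```python
import os

def make_address(file, mod, call):
    # mirrors the rules above: prefer the file part, strip __init__.py,
    # then append the call part if present
    parts = []
    if file is None:
        if mod is None:
            raise ValueError("unaddressable case")
        parts.append(mod)
    else:
        dirname, basename = os.path.split(file)
        if basename.startswith('__init__'):
            file = dirname   # a package __init__ confuses the loader
        parts.append(file)
    if call is not None:
        parts.append(call)
    return ':'.join(map(str, parts))

print(make_address('/pkg/__init__.py', 'pkg', 'TestCase.test_it'))
# /pkg:TestCase.test_it
```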
def nextBatch(self, test):
# allows tests or suites to mark themselves as not safe
# for multiprocess execution
if hasattr(test, 'context'):
if not getattr(test.context, '_multiprocess_', True):
return
if ((isinstance(test, ContextSuite)
and test.hasFixtures(self.checkCanSplit))
or not getattr(test, 'can_split', True)
or not isinstance(test, unittest.TestSuite)):
# regular test case, or a suite with context fixtures
# special case: when run like nosetests path/to/module.py
# the top-level suite has only one item, and it shares
# the same context as that item. In that case, we want the
# item, not the top-level suite
if isinstance(test, ContextSuite):
contained = list(test)
if (len(contained) == 1
and getattr(contained[0],
'context', None) == test.context):
test = contained[0]
yield test
else:
# Suite is without fixtures at this level; but it may have
# fixtures at any deeper level, so we need to examine it all
# the way down to the case level
for case in test:
for batch in self.nextBatch(case):
yield batch
def checkCanSplit(context, fixt):
"""
Callback that we use to check whether the fixtures found in a
context or ancestor are ones we care about.
Contexts can tell us that their fixtures are reentrant by setting
_multiprocess_can_split_. So if we see that, we return False to
disregard those fixtures.
"""
if not fixt:
return False
if getattr(context, '_multiprocess_can_split_', False):
return False
return True
checkCanSplit = staticmethod(checkCanSplit)
def sharedFixtures(self, case):
context = getattr(case, 'context', None)
if not context:
return False
return getattr(context, '_multiprocess_shared_', False)
def consolidate(self, result, batch_result):
log.debug("batch result is %s" , batch_result)
try:
output, testsRun, failures, errors, errorClasses = batch_result
except ValueError:
log.debug("result in unexpected format %s", batch_result)
failure.Failure(*sys.exc_info())(result)
return
self.stream.write(output)
result.testsRun += testsRun
result.failures.extend(failures)
result.errors.extend(errors)
for key, (storage, label, isfail) in errorClasses.items():
if key not in result.errorClasses:
# Ordinarily storage is result attribute
# but it's only processed through the errorClasses
# dict, so it's ok to fake it here
result.errorClasses[key] = ([], label, isfail)
mystorage, _junk, _junk = result.errorClasses[key]
mystorage.extend(storage)
log.debug("Ran %s tests (total: %s)", testsRun, result.testsRun)
def runner(ix, testQueue, resultQueue, currentaddr, currentstart,
keyboardCaught, shouldStop, loaderClass, resultClass, config):
try:
try:
return __runner(ix, testQueue, resultQueue, currentaddr, currentstart,
keyboardCaught, shouldStop, loaderClass, resultClass, config)
except KeyboardInterrupt:
log.debug('Worker %s keyboard interrupt, stopping',ix)
except Empty:
log.debug("Worker %s timed out waiting for tasks", ix)
def __runner(ix, testQueue, resultQueue, currentaddr, currentstart,
keyboardCaught, shouldStop, loaderClass, resultClass, config):
config = pickle.loads(config)
dummy_parser = config.parserClass()
if _instantiate_plugins is not None:
for pluginclass in _instantiate_plugins:
plugin = pluginclass()
plugin.addOptions(dummy_parser,{})
config.plugins.addPlugin(plugin)
config.plugins.configure(config.options,config)
config.plugins.begin()
log.debug("Worker %s executing, pid=%d", ix,os.getpid())
loader = loaderClass(config=config)
loader.suiteClass.suiteClass = NoSharedFixtureContextSuite
def get():
return testQueue.get(timeout=config.multiprocess_timeout)
def makeResult():
stream = _WritelnDecorator(StringIO())
result = resultClass(stream, descriptions=1,
verbosity=config.verbosity,
config=config)
plug_result = config.plugins.prepareTestResult(result)
if plug_result:
return plug_result
return result
def batch(result):
failures = [(TestLet(c), err) for c, err in result.failures]
errors = [(TestLet(c), err) for c, err in result.errors]
errorClasses = {}
for key, (storage, label, isfail) in result.errorClasses.items():
errorClasses[key] = ([(TestLet(c), err) for c, err in storage],
label, isfail)
return (
result.stream.getvalue(),
result.testsRun,
failures,
errors,
errorClasses)
for test_addr, arg in iter(get, 'STOP'):
if shouldStop.is_set():
log.exception('Worker %d STOPPED',ix)
break
result = makeResult()
test = loader.loadTestsFromNames([test_addr])
test.testQueue = testQueue
test.tasks = []
test.arg = arg
log.debug("Worker %s Test is %s (%s)", ix, test_addr, test)
try:
if arg is not None:
test_addr = test_addr + str(arg)
currentaddr.value = bytes_(test_addr)
currentstart.value = time.time()
test(result)
currentaddr.value = bytes_('')
resultQueue.put((ix, test_addr, test.tasks, batch(result)))
except KeyboardInterrupt, e: #TimedOutException:
timeout = isinstance(e, TimedOutException)
if timeout:
keyboardCaught.set()
if len(currentaddr.value):
if timeout:
msg = 'Worker %s timed out, failing current test %s'
else:
msg = 'Worker %s keyboard interrupt, failing current test %s'
log.exception(msg,ix,test_addr)
currentaddr.value = bytes_('')
failure.Failure(*sys.exc_info())(result)
resultQueue.put((ix, test_addr, test.tasks, batch(result)))
else:
if timeout:
msg = 'Worker %s test %s timed out'
else:
msg = 'Worker %s test %s keyboard interrupt'
log.debug(msg,ix,test_addr)
resultQueue.put((ix, test_addr, test.tasks, batch(result)))
if not timeout:
raise
except SystemExit:
currentaddr.value = bytes_('')
log.exception('Worker %s system exit',ix)
raise
except:
currentaddr.value = bytes_('')
log.exception("Worker %s error running test or returning "
"results",ix)
failure.Failure(*sys.exc_info())(result)
resultQueue.put((ix, test_addr, test.tasks, batch(result)))
if config.multiprocess_restartworker:
break
log.debug("Worker %s ending", ix)
class NoSharedFixtureContextSuite(ContextSuite):
"""
Context suite that never fires shared fixtures.
When a context sets _multiprocess_shared_, fixtures in that context
are executed by the main process. Using this suite class prevents them
from executing in the runner process as well.
"""
testQueue = None
tasks = None
arg = None
def setupContext(self, context):
if getattr(context, '_multiprocess_shared_', False):
return
super(NoSharedFixtureContextSuite, self).setupContext(context)
def teardownContext(self, context):
if getattr(context, '_multiprocess_shared_', False):
return
super(NoSharedFixtureContextSuite, self).teardownContext(context)
def run(self, result):
"""Run tests in suite inside of suite fixtures.
"""
# proxy the result for myself
log.debug("suite %s (%s) run called, tests: %s",
id(self), self, self._tests)
if self.resultProxy:
result, orig = self.resultProxy(result, self), result
else:
result, orig = result, result
try:
#log.debug('setUp for %s', id(self));
self.setUp()
except KeyboardInterrupt:
raise
except:
self.error_context = 'setup'
result.addError(self, self._exc_info())
return
try:
for test in self._tests:
if (isinstance(test,nose.case.Test)
and self.arg is not None):
test.test.arg = self.arg
else:
test.arg = self.arg
test.testQueue = self.testQueue
test.tasks = self.tasks
if result.shouldStop:
log.debug("stopping")
break
# each nose.case.Test will create its own result proxy
# so the cases need the original result, to avoid proxy
# chains
#log.debug('running test %s in suite %s', test, self);
try:
test(orig)
except KeyboardInterrupt, e:
timeout = isinstance(e, TimedOutException)
if timeout:
msg = 'Timeout when running test %s in suite %s'
else:
msg = 'KeyboardInterrupt when running test %s in suite %s'
log.debug(msg, test, self)
err = (TimedOutException,TimedOutException(str(test)),
sys.exc_info()[2])
test.config.plugins.addError(test,err)
orig.addError(test,err)
if not timeout:
raise
finally:
self.has_run = True
try:
#log.debug('tearDown for %s', id(self));
self.tearDown()
except KeyboardInterrupt:
raise
except:
self.error_context = 'teardown'
result.addError(self, self._exc_info())

@@ -0,0 +1,416 @@
"""
Testing Plugins
===============
The plugin interface is well-tested enough that you can safely unit test
your use of its hooks with some confidence. However, there is also a
mixin for unittest.TestCase called PluginTester that is designed to
test plugins in their native runtime environment.
Here's a simple example with a do-nothing plugin and a composed suite.
>>> import unittest
>>> from nose.plugins import Plugin, PluginTester
>>> class FooPlugin(Plugin):
... pass
>>> class TestPluginFoo(PluginTester, unittest.TestCase):
... activate = '--with-foo'
... plugins = [FooPlugin()]
... def test_foo(self):
... for line in self.output:
... # i.e. check for patterns
... pass
...
... # or check for a line containing ...
... assert "ValueError" in self.output
... def makeSuite(self):
... class TC(unittest.TestCase):
... def runTest(self):
... raise ValueError("I hate foo")
... return [TC('runTest')]
...
>>> res = unittest.TestResult()
>>> case = TestPluginFoo('test_foo')
>>> _ = case(res)
>>> res.errors
[]
>>> res.failures
[]
>>> res.wasSuccessful()
True
>>> res.testsRun
1
And here is a more complex example of testing a plugin that has extra
arguments and reads environment variables.
>>> import unittest, os
>>> from nose.plugins import Plugin, PluginTester
>>> class FancyOutputter(Plugin):
... name = "fancy"
... def configure(self, options, conf):
... Plugin.configure(self, options, conf)
... if not self.enabled:
... return
... self.fanciness = 1
... if options.more_fancy:
... self.fanciness = 2
... if 'EVEN_FANCIER' in self.env:
... self.fanciness = 3
...
... def options(self, parser, env=os.environ):
... self.env = env
... parser.add_option('--more-fancy', action='store_true')
... Plugin.options(self, parser, env=env)
...
... def report(self, stream):
... stream.write("FANCY " * self.fanciness)
...
>>> class TestFancyOutputter(PluginTester, unittest.TestCase):
... activate = '--with-fancy' # enables the plugin
... plugins = [FancyOutputter()]
... args = ['--more-fancy']
... env = {'EVEN_FANCIER': '1'}
...
... def test_fancy_output(self):
... assert "FANCY FANCY FANCY" in self.output, (
... "got: %s" % self.output)
... def makeSuite(self):
... class TC(unittest.TestCase):
... def runTest(self):
... raise ValueError("I hate fancy stuff")
... return [TC('runTest')]
...
>>> res = unittest.TestResult()
>>> case = TestFancyOutputter('test_fancy_output')
>>> _ = case(res)
>>> res.errors
[]
>>> res.failures
[]
>>> res.wasSuccessful()
True
>>> res.testsRun
1
"""
import re
import sys
from warnings import warn
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
__all__ = ['PluginTester', 'run']
from os import getpid
class MultiProcessFile(object):
"""
helper for testing multiprocessing
multiprocessing poses a problem for doctests, since the strategy
of replacing sys.stdout/stderr with file-like objects then
inspecting the results won't work: the child processes will
write to the objects, but the data will not be reflected
in the parent doctest-ing process.
The solution is to create file-like objects which will interact with
multiprocessing in a more desirable way.
All processes can write to this object, but only the creator can read.
This allows the testing system to see a unified picture of I/O.
"""
def __init__(self):
# per advice at:
# http://docs.python.org/library/multiprocessing.html#all-platforms
self.__master = getpid()
self.__queue = Manager().Queue()
self.__buffer = StringIO()
self.softspace = 0
def buffer(self):
if getpid() != self.__master:
return
from Queue import Empty
from collections import defaultdict
cache = defaultdict(str)
while True:
try:
pid, data = self.__queue.get_nowait()
except Empty:
break
if pid == ():
#show parent output after children
#this is what users see, usually
pid = ( 1e100, ) # googol!
cache[pid] += data
for pid in sorted(cache):
#self.__buffer.write( '%s wrote: %r\n' % (pid, cache[pid]) ) #DEBUG
self.__buffer.write( cache[pid] )
def write(self, data):
# note that these pids are in the form of current_process()._identity
# rather than OS pids
from multiprocessing import current_process
pid = current_process()._identity
self.__queue.put((pid, data))
def __iter__(self):
"getattr doesn't work for iter()"
self.buffer()
return self.__buffer
def seek(self, offset, whence=0):
self.buffer()
return self.__buffer.seek(offset, whence)
def getvalue(self):
self.buffer()
return self.__buffer.getvalue()
def __getattr__(self, attr):
return getattr(self.__buffer, attr)
try:
from multiprocessing import Manager
Buffer = MultiProcessFile
except ImportError:
Buffer = StringIO
class PluginTester(object):
"""A mixin for testing nose plugins in their runtime environment.
Subclass this and mix in unittest.TestCase to run integration/functional
tests on your plugin. When setUp() is called, the stub test suite is
executed with your plugin so that during an actual test you can inspect the
artifacts of how your plugin interacted with the stub test suite.
- activate
- the argument to send nosetests to activate the plugin
- suitepath
- if set, this is the path of the suite to test. Otherwise, you
will need to use the hook, makeSuite()
- plugins
- the list of plugins to make available during the run. Note
that this does not mean these plugins will be *enabled* during
the run -- only the plugins enabled by the activate argument
or other settings in argv or env will be enabled.
- args
- a list of arguments to add to the nosetests command, in addition to
the activate argument
- env
- optional dict of environment variables to send nosetests
"""
activate = None
suitepath = None
args = None
env = {}
argv = None
plugins = []
ignoreFiles = None
def makeSuite(self):
"""returns a suite object of tests to run (unittest.TestSuite())
If self.suitepath is None, this must be implemented. The returned suite
object will be executed with all plugins activated. It may return
None.
Here is an example of a basic suite object you can return ::
>>> import unittest
>>> class SomeTest(unittest.TestCase):
... def runTest(self):
... raise ValueError("Now do something, plugin!")
...
>>> unittest.TestSuite([SomeTest()]) # doctest: +ELLIPSIS
<unittest...TestSuite tests=[<...SomeTest testMethod=runTest>]>
"""
raise NotImplementedError
def _execPlugin(self):
"""execute the plugin on the internal test suite.
"""
from nose.config import Config
from nose.core import TestProgram
from nose.plugins.manager import PluginManager
suite = None
stream = Buffer()
conf = Config(env=self.env,
stream=stream,
plugins=PluginManager(plugins=self.plugins))
if self.ignoreFiles is not None:
conf.ignoreFiles = self.ignoreFiles
if not self.suitepath:
suite = self.makeSuite()
self.nose = TestProgram(argv=self.argv, config=conf, suite=suite,
exit=False)
self.output = AccessDecorator(stream)
def setUp(self):
"""runs nosetests with the specified test suite, all plugins
activated.
"""
self.argv = ['nosetests', self.activate]
if self.args:
self.argv.extend(self.args)
if self.suitepath:
self.argv.append(self.suitepath)
self._execPlugin()
class AccessDecorator(object):
stream = None
_buf = None
def __init__(self, stream):
self.stream = stream
stream.seek(0)
self._buf = stream.read()
stream.seek(0)
def __contains__(self, val):
return val in self._buf
def __iter__(self):
return iter(self.stream)
def __str__(self):
return self._buf
def blankline_separated_blocks(text):
"a bunch of === characters is also considered a blank line"
block = []
for line in text.splitlines(True):
block.append(line)
line = line.strip()
if not line or line.startswith('===') and not line.strip('='):
yield "".join(block)
block = []
if block:
yield "".join(block)
def remove_stack_traces(out):
# this regexp taken from Python 2.5's doctest
traceback_re = re.compile(r"""
# Grab the traceback header. Different versions of Python have
# said different things on the first traceback line.
^(?P<hdr> Traceback\ \(
(?: most\ recent\ call\ last
| innermost\ last
) \) :
)
\s* $ # toss trailing whitespace on the header.
(?P<stack> .*?) # don't blink: absorb stuff until...
^(?=\w) # a line *starts* with alphanum.
.*?(?P<exception> \w+ ) # exception name
(?P<msg> [:\n] .*) # the rest
""", re.VERBOSE | re.MULTILINE | re.DOTALL)
blocks = []
for block in blankline_separated_blocks(out):
blocks.append(traceback_re.sub(r"\g<hdr>\n...\n\g<exception>\g<msg>", block))
return "".join(blocks)
def simplify_warnings(out):
warn_re = re.compile(r"""
# Cut the file and line no, up to the warning name
^.*:\d+:\s
(?P<category>\w+): \s+ # warning category
(?P<detail>.+) $ \n? # warning message
^ .* $ # stack frame
""", re.VERBOSE | re.MULTILINE)
return warn_re.sub(r"\g<category>: \g<detail>", out)
def remove_timings(out):
return re.sub(
r"Ran (\d+ tests?) in [0-9.]+s", r"Ran \1 in ...s", out)
def munge_nose_output_for_doctest(out):
"""Modify nose output to make it easy to use in doctests."""
out = remove_stack_traces(out)
out = simplify_warnings(out)
out = remove_timings(out)
return out.strip()
def run(*arg, **kw):
"""
Specialized version of nose.run for use inside of doctests that
test test runs.
This version of run() prints the result output to stdout. Before
printing, the output is processed by replacing the timing
information with an ellipsis (...), removing traceback stacks, and
removing trailing whitespace.
Use this version of run wherever you are writing a doctest that
tests nose (or unittest) test result output.
Note: do not use doctest: +ELLIPSIS when testing nose output,
since ellipses ("test_foo ... ok") in your expected test runner
output may match multiple lines of output, causing spurious test
passes!
"""
from nose import run
from nose.config import Config
from nose.plugins.manager import PluginManager
buffer = Buffer()
if 'config' not in kw:
plugins = kw.pop('plugins', [])
if isinstance(plugins, list):
plugins = PluginManager(plugins=plugins)
env = kw.pop('env', {})
kw['config'] = Config(env=env, plugins=plugins)
if 'argv' not in kw:
kw['argv'] = ['nosetests', '-v']
kw['config'].stream = buffer
# Set up buffering so that all output goes to our buffer,
# or warn user if deprecated behavior is active. If this is not
# done, prints and warnings will either be out of place or
# disappear.
stderr = sys.stderr
stdout = sys.stdout
if kw.pop('buffer_all', False):
sys.stdout = sys.stderr = buffer
restore = True
else:
restore = False
warn("The behavior of nose.plugins.plugintest.run() will change in "
"the next release of nose. The current behavior does not "
"correctly account for output to stdout and stderr. To enable "
"correct behavior, use run_buffered() instead, or pass "
"the keyword argument buffer_all=True to run().",
DeprecationWarning, stacklevel=2)
try:
run(*arg, **kw)
finally:
if restore:
sys.stderr = stderr
sys.stdout = stdout
out = buffer.getvalue()
print munge_nose_output_for_doctest(out)
def run_buffered(*arg, **kw):
kw['buffer_all'] = True
run(*arg, **kw)
if __name__ == '__main__':
import doctest
doctest.testmod()
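The output-munging helpers above can be exercised on their own; this stdlib sketch repeats the `remove_timings` substitution outside of nose (the `scrub_timings` name is illustrative, not part of nose's API):

```python
import re

def scrub_timings(out):
    # Replace the variable wall-clock time in a runner summary line
    # with '...' so doctests comparing runner output stay stable --
    # the same substitution remove_timings applies above.
    return re.sub(r"Ran (\d+ tests?) in [0-9.]+s", r"Ran \1 in ...s", out)

print(scrub_timings("Ran 3 tests in 0.042s"))  # -> Ran 3 tests in ...s
```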

lib/spack/external/nose/plugins/prof.py vendored Normal file

@@ -0,0 +1,154 @@
"""This plugin will run tests using the hotshot profiler, which is part
of the standard library. To turn it on, use the ``--with-profile`` option
or set the NOSE_WITH_PROFILE environment variable. Profiler output can be
controlled with the ``--profile-sort`` and ``--profile-restrict`` options,
and the profiler output file may be changed with ``--profile-stats-file``.
See the `hotshot documentation`_ in the standard library documentation for
more details on the various output options.
.. _hotshot documentation: http://docs.python.org/library/hotshot.html
"""
try:
import hotshot
from hotshot import stats
except ImportError:
hotshot, stats = None, None
import logging
import os
import sys
import tempfile
from nose.plugins.base import Plugin
from nose.util import tolist
log = logging.getLogger('nose.plugins')
class Profile(Plugin):
"""
Use this plugin to run tests using the hotshot profiler.
"""
pfile = None
clean_stats_file = False
def options(self, parser, env):
"""Register commandline options.
"""
if not self.available():
return
Plugin.options(self, parser, env)
parser.add_option('--profile-sort', action='store', dest='profile_sort',
default=env.get('NOSE_PROFILE_SORT', 'cumulative'),
metavar="SORT",
help="Set sort order for profiler output")
parser.add_option('--profile-stats-file', action='store',
dest='profile_stats_file',
metavar="FILE",
default=env.get('NOSE_PROFILE_STATS_FILE'),
help='Profiler stats file; default is a new '
'temp file on each run')
parser.add_option('--profile-restrict', action='append',
dest='profile_restrict',
metavar="RESTRICT",
default=env.get('NOSE_PROFILE_RESTRICT'),
help="Restrict profiler output. See help for "
"pstats.Stats for details")
def available(cls):
return hotshot is not None
available = classmethod(available)
def begin(self):
"""Create profile stats file and load profiler.
"""
if not self.available():
return
self._create_pfile()
self.prof = hotshot.Profile(self.pfile)
def configure(self, options, conf):
"""Configure plugin.
"""
if not self.available():
self.enabled = False
return
Plugin.configure(self, options, conf)
self.conf = conf
if options.profile_stats_file:
self.pfile = options.profile_stats_file
self.clean_stats_file = False
else:
self.pfile = None
self.clean_stats_file = True
self.fileno = None
self.sort = options.profile_sort
self.restrict = tolist(options.profile_restrict)
def prepareTest(self, test):
"""Wrap entire test run in :func:`prof.runcall`.
"""
if not self.available():
return
log.debug('preparing test %s' % test)
def run_and_profile(result, prof=self.prof, test=test):
self._create_pfile()
prof.runcall(test, result)
return run_and_profile
def report(self, stream):
"""Output profiler report.
"""
log.debug('printing profiler report')
self.prof.close()
prof_stats = stats.load(self.pfile)
prof_stats.sort_stats(self.sort)
# 2.5 has completely different stream handling from 2.4 and earlier.
# Before 2.5, stats objects have no stream attribute; in 2.5 and later
# a reference to sys.stdout is stored before we can tweak it.
compat_25 = hasattr(prof_stats, 'stream')
if compat_25:
tmp = prof_stats.stream
prof_stats.stream = stream
else:
tmp = sys.stdout
sys.stdout = stream
try:
if self.restrict:
log.debug('setting profiler restriction to %s', self.restrict)
prof_stats.print_stats(*self.restrict)
else:
prof_stats.print_stats()
finally:
if compat_25:
prof_stats.stream = tmp
else:
sys.stdout = tmp
def finalize(self, result):
"""Clean up stats file, if configured to do so.
"""
if not self.available():
return
try:
self.prof.close()
except AttributeError:
# TODO: is this trying to catch just the case where not
# hasattr(self.prof, "close")? If so, the function call should be
# moved out of the try: suite.
pass
if self.clean_stats_file:
if self.fileno:
try:
os.close(self.fileno)
except OSError:
pass
try:
os.unlink(self.pfile)
except OSError:
pass
return None
def _create_pfile(self):
if not self.pfile:
self.fileno, self.pfile = tempfile.mkstemp()
self.clean_stats_file = True
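`hotshot` was removed in Python 3, but the runcall/sort/print flow the plugin wraps is the same in `cProfile`/`pstats`; a sketch of the equivalent sequence (`work` is a toy workload, not part of the plugin):

```python
import cProfile
import io
import pstats

def work():
    # toy workload to profile
    return sum(i * i for i in range(1000))

prof = cProfile.Profile()
prof.runcall(work)                 # same pattern as prof.runcall(test, result)
stream = io.StringIO()
stats = pstats.Stats(prof, stream=stream)
stats.sort_stats("cumulative")     # analogous to --profile-sort
stats.print_stats()                # analogous to the report() method
print("function calls" in stream.getvalue())
```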

lib/spack/external/nose/plugins/skip.py vendored Normal file

@@ -0,0 +1,63 @@
"""
This plugin installs a SKIP error class for the SkipTest exception.
When SkipTest is raised, the exception will be logged in the skipped
attribute of the result, 'S' or 'SKIP' (verbose) will be output, and
the exception will not be counted as an error or failure. This plugin
is enabled by default but may be disabled with the ``--no-skip`` option.
"""
from nose.plugins.errorclass import ErrorClass, ErrorClassPlugin
# on SkipTest:
# - unittest SkipTest is first preference, but it's only available
# for >= 2.7
# - unittest2 SkipTest is second preference for older pythons. This
# mirrors logic for choosing SkipTest exception in testtools
# - if none of the above, provide custom class
try:
from unittest.case import SkipTest
except ImportError:
try:
from unittest2.case import SkipTest
except ImportError:
class SkipTest(Exception):
"""Raise this exception to mark a test as skipped.
"""
pass
class Skip(ErrorClassPlugin):
"""
Plugin that installs a SKIP error class for the SkipTest
exception. When SkipTest is raised, the exception will be logged
in the skipped attribute of the result, 'S' or 'SKIP' (verbose)
will be output, and the exception will not be counted as an error
or failure.
"""
enabled = True
skipped = ErrorClass(SkipTest,
label='SKIP',
isfailure=False)
def options(self, parser, env):
"""
Add my options to command line.
"""
env_opt = 'NOSE_WITHOUT_SKIP'
parser.add_option('--no-skip', action='store_true',
dest='noSkip', default=env.get(env_opt, False),
help="Disable special handling of SkipTest "
"exceptions.")
def configure(self, options, conf):
"""
Configure plugin. Skip plugin is enabled by default.
"""
if not self.can_configure:
return
self.conf = conf
disable = getattr(options, 'noSkip', False)
if disable:
self.enabled = False
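The error-class behavior this plugin installs mirrors what `unittest` itself does with `SkipTest`: the run still counts as successful and the skip is recorded separately. A stdlib-only demonstration:

```python
import unittest

class Demo(unittest.TestCase):
    def test_skipped(self):
        # Raising SkipTest marks the test skipped rather than failed --
        # the same outcome the Skip error class gives nose results.
        raise unittest.SkipTest("demonstration skip")

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(Demo).run(result)
print(result.wasSuccessful(), len(result.skipped))
```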


@@ -0,0 +1,311 @@
"""
This plugin adds a test id (like #1) to each test name output. After
you've run once to generate test ids, you can re-run individual
tests by activating the plugin and passing the ids (with or
without the # prefix) instead of test names.
For example, if your normal test run looks like::
% nosetests -v
tests.test_a ... ok
tests.test_b ... ok
tests.test_c ... ok
When adding ``--with-id`` you'll see::
% nosetests -v --with-id
#1 tests.test_a ... ok
#2 tests.test_b ... ok
#3 tests.test_c ... ok
Then you can re-run individual tests by supplying just an id number::
% nosetests -v --with-id 2
#2 tests.test_b ... ok
You can also pass multiple id numbers::
% nosetests -v --with-id 2 3
#2 tests.test_b ... ok
#3 tests.test_c ... ok
Since most shells consider '#' a special character, you can leave it out when
specifying a test id.
Note that when run without the -v switch, no special output is displayed, but
the ids file is still written.
Looping over failed tests
-------------------------
This plugin also adds a mode that will direct the test runner to record
failed tests. Subsequent test runs will then run only the tests that failed
last time. Activate this mode with the ``--failed`` switch::
% nosetests -v --failed
#1 test.test_a ... ok
#2 test.test_b ... ERROR
#3 test.test_c ... FAILED
#4 test.test_d ... ok
On the second run, only tests #2 and #3 will run::
% nosetests -v --failed
#2 test.test_b ... ERROR
#3 test.test_c ... FAILED
As you correct errors and tests pass, they'll drop out of subsequent runs.
First::
% nosetests -v --failed
#2 test.test_b ... ok
#3 test.test_c ... FAILED
Second::
% nosetests -v --failed
#3 test.test_c ... FAILED
When all tests pass, the full set will run on the next invocation.
First::
% nosetests -v --failed
#3 test.test_c ... ok
Second::
% nosetests -v --failed
#1 test.test_a ... ok
#2 test.test_b ... ok
#3 test.test_c ... ok
#4 test.test_d ... ok
.. note ::
If you expect to use ``--failed`` regularly, it's a good idea to always run
using the ``--with-id`` option. This will ensure that an id file is always
created, allowing you to add ``--failed`` to the command line as soon as
you have failing tests. Otherwise, your first run using ``--failed`` will
(perhaps surprisingly) run *all* tests, because there won't be an id file
containing the record of failed tests from your previous run.
"""
__test__ = False
import logging
import os
from nose.plugins import Plugin
from nose.util import src, set
try:
from cPickle import dump, load
except ImportError:
from pickle import dump, load
log = logging.getLogger(__name__)
class TestId(Plugin):
"""
Activate to add a test id (like #1) to each test name output. Activate
with --failed to rerun failing tests only.
"""
name = 'id'
idfile = None
collecting = True
loopOnFailed = False
def options(self, parser, env):
"""Register commandline options.
"""
Plugin.options(self, parser, env)
parser.add_option('--id-file', action='store', dest='testIdFile',
default='.noseids', metavar="FILE",
help="Store test ids found in test runs in this "
"file. Default is the file .noseids in the "
"working directory.")
parser.add_option('--failed', action='store_true',
dest='failed', default=False,
help="Run the tests that failed in the last "
"test run.")
def configure(self, options, conf):
"""Configure plugin.
"""
Plugin.configure(self, options, conf)
if options.failed:
self.enabled = True
self.loopOnFailed = True
log.debug("Looping on failed tests")
self.idfile = os.path.expanduser(options.testIdFile)
if not os.path.isabs(self.idfile):
self.idfile = os.path.join(conf.workingDir, self.idfile)
self.id = 1
# Ids and tests are mirror images: ids are {id: test address} and
# tests are {test address: id}
self.ids = {}
self.tests = {}
self.failed = []
self.source_names = []
# used to track ids seen when tests is filled from
# loaded ids file
self._seen = {}
self._write_hashes = conf.verbosity >= 2
def finalize(self, result):
"""Save new ids file, if needed.
"""
if result.wasSuccessful():
self.failed = []
if self.collecting:
ids = dict(list(zip(list(self.tests.values()), list(self.tests.keys()))))
else:
ids = self.ids
fh = open(self.idfile, 'wb')
dump({'ids': ids,
'failed': self.failed,
'source_names': self.source_names}, fh)
fh.close()
log.debug('Saved test ids: %s, failed %s to %s',
ids, self.failed, self.idfile)
def loadTestsFromNames(self, names, module=None):
"""Translate ids in the list of requested names into their
test addresses, if they are found in my dict of tests.
"""
log.debug('ltfn %s %s', names, module)
try:
fh = open(self.idfile, 'rb')
data = load(fh)
if 'ids' in data:
self.ids = data['ids']
self.failed = data['failed']
self.source_names = data['source_names']
else:
# old ids field
self.ids = data
self.failed = []
self.source_names = names
if self.ids:
self.id = max(self.ids) + 1
self.tests = dict(list(zip(list(self.ids.values()), list(self.ids.keys()))))
else:
self.id = 1
log.debug(
'Loaded test ids %s tests %s failed %s sources %s from %s',
self.ids, self.tests, self.failed, self.source_names,
self.idfile)
fh.close()
except ValueError, e:
# load() may throw a ValueError when reading the ids file, if it
# was generated with a newer version of Python than we are currently
# running.
log.debug('Error loading %s : %s', self.idfile, str(e))
except IOError:
log.debug('IO error reading %s', self.idfile)
if self.loopOnFailed and self.failed:
self.collecting = False
names = self.failed
self.failed = []
# I don't load any tests myself, only translate names like '#2'
# into the associated test addresses
translated = []
new_source = []
really_new = []
for name in names:
trans = self.tr(name)
if trans != name:
translated.append(trans)
else:
new_source.append(name)
# names that are not ids and that are not in the current
# list of source names go into the list for next time
if new_source:
new_set = set(new_source)
old_set = set(self.source_names)
log.debug("old: %s new: %s", old_set, new_set)
really_new = [s for s in new_source
if not s in old_set]
if really_new:
# remember new sources
self.source_names.extend(really_new)
if not translated:
# new set of source names, no translations
# means "run the requested tests"
names = new_source
else:
# no new names to translate and add to id set
self.collecting = False
log.debug("translated: %s new sources %s names %s",
translated, really_new, names)
return (None, translated + really_new or names)
def makeName(self, addr):
log.debug("Make name %s", addr)
filename, module, call = addr
if filename is not None:
head = src(filename)
else:
head = module
if call is not None:
return "%s:%s" % (head, call)
return head
def setOutputStream(self, stream):
"""Get handle on output stream so the plugin can print id #s
"""
self.stream = stream
def startTest(self, test):
"""Maybe output an id # before the test name.
Example output::
#1 test.test ... ok
#2 test.test_two ... ok
"""
adr = test.address()
log.debug('start test %s (%s)', adr, adr in self.tests)
if adr in self.tests:
if adr in self._seen:
self.write(' ')
else:
self.write('#%s ' % self.tests[adr])
self._seen[adr] = 1
return
self.tests[adr] = self.id
self.write('#%s ' % self.id)
self.id += 1
def afterTest(self, test):
# None means test never ran, False means failed/err
if test.passed is False:
try:
key = str(self.tests[test.address()])
except KeyError:
# never saw this test -- startTest didn't run
pass
else:
if key not in self.failed:
self.failed.append(key)
def tr(self, name):
log.debug("tr '%s'", name)
try:
key = int(name.replace('#', ''))
except ValueError:
return name
log.debug("Got key %s", key)
# I'm running tests mapped from the ids file,
# not collecting new ones
if key in self.ids:
return self.makeName(self.ids[key])
return name
def write(self, output):
if self._write_hashes:
self.stream.write(output)
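The id file written in `finalize` and read back in `loadTestsFromNames` is a plain pickle of ids, failures, and source names; this round-trip sketch uses a temp file in place of the default `.noseids` (the test addresses in the dict are made up for illustration):

```python
import os
import pickle
import tempfile

# Made-up (filename, module, call) test addresses; the payload shape
# matches what finalize() dumps above.
ids = {1: ("tests.py", "tests", "test_a"),
       2: ("tests.py", "tests", "test_b")}
payload = {"ids": ids, "failed": ["2"], "source_names": []}

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as fh:
    pickle.dump(payload, fh)
with open(path, "rb") as fh:
    data = pickle.load(fh)
os.unlink(path)

next_id = max(data["ids"]) + 1   # same rule loadTestsFromNames applies
print(next_id, data["failed"])
```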

lib/spack/external/nose/plugins/xunit.py vendored Normal file

@@ -0,0 +1,341 @@
"""This plugin provides test results in the standard XUnit XML format.
It's designed for the `Jenkins`_ (previously Hudson) continuous build
system, but will probably work for anything else that understands an
XUnit-formatted XML representation of test results.
Add this shell command to your builder ::
nosetests --with-xunit
And by default a file named nosetests.xml will be written to the
working directory.
In a Jenkins builder, tick the box named "Publish JUnit test result report"
under the Post-build Actions and enter this value for Test report XMLs::
**/nosetests.xml
If you need to change the name or location of the file, you can set the
``--xunit-file`` option.
If you need to change the name of the test suite, you can set the
``--xunit-testsuite-name`` option.
Here is an abbreviated version of what an XML test report might look like::
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="nosetests" tests="1" errors="1" failures="0" skip="0">
<testcase classname="path_to_test_suite.TestSomething"
name="test_it" time="0">
<error type="exceptions.TypeError" message="oops, wrong type">
Traceback (most recent call last):
...
TypeError: oops, wrong type
</error>
</testcase>
</testsuite>
.. _Jenkins: http://jenkins-ci.org/
"""
import codecs
import doctest
import os
import sys
import traceback
import re
import inspect
from StringIO import StringIO
from time import time
from xml.sax import saxutils
from nose.plugins.base import Plugin
from nose.exc import SkipTest
from nose.pyversion import force_unicode, format_exception
# Invalid XML characters, control characters 0-31 sans \t, \n and \r
CONTROL_CHARACTERS = re.compile(r"[\000-\010\013\014\016-\037]")
TEST_ID = re.compile(r'^(.*?)(\(.*\))$')
def xml_safe(value):
"""Replaces invalid XML characters with '?'."""
return CONTROL_CHARACTERS.sub('?', value)
def escape_cdata(cdata):
"""Escape a string for an XML CDATA section."""
return xml_safe(cdata).replace(']]>', ']]>]]&gt;<![CDATA[')
def id_split(idval):
m = TEST_ID.match(idval)
if m:
name, fargs = m.groups()
head, tail = name.rsplit(".", 1)
return [head, tail+fargs]
else:
return idval.rsplit(".", 1)
def nice_classname(obj):
"""Returns a nice name for class object or class instance.
>>> nice_classname(Exception()) # doctest: +ELLIPSIS
'...Exception'
>>> nice_classname(Exception) # doctest: +ELLIPSIS
'...Exception'
"""
if inspect.isclass(obj):
cls_name = obj.__name__
else:
cls_name = obj.__class__.__name__
mod = inspect.getmodule(obj)
if mod:
name = mod.__name__
# jython
if name.startswith('org.python.core.'):
name = name[len('org.python.core.'):]
return "%s.%s" % (name, cls_name)
else:
return cls_name
def exc_message(exc_info):
"""Return the exception's message."""
exc = exc_info[1]
if exc is None:
# str exception
result = exc_info[0]
else:
try:
result = str(exc)
except UnicodeEncodeError:
try:
result = unicode(exc)
except UnicodeError:
# Fallback to args as neither str nor
# unicode(Exception(u'\xe6')) work in Python < 2.6
result = exc.args[0]
result = force_unicode(result, 'UTF-8')
return xml_safe(result)
class Tee(object):
def __init__(self, encoding, *args):
self._encoding = encoding
self._streams = args
def write(self, data):
data = force_unicode(data, self._encoding)
for s in self._streams:
s.write(data)
def writelines(self, lines):
for line in lines:
self.write(line)
def flush(self):
for s in self._streams:
s.flush()
def isatty(self):
return False
class Xunit(Plugin):
"""This plugin provides test results in the standard XUnit XML format."""
name = 'xunit'
score = 1500
encoding = 'UTF-8'
error_report_file = None
def __init__(self):
super(Xunit, self).__init__()
self._capture_stack = []
self._currentStdout = None
self._currentStderr = None
def _timeTaken(self):
if hasattr(self, '_timer'):
taken = time() - self._timer
else:
# test died before it ran (probably error in setup())
# or success/failure added before test started probably
# due to custom TestResult munging
taken = 0.0
return taken
def _quoteattr(self, attr):
"""Escape an XML attribute. Value can be unicode."""
attr = xml_safe(attr)
return saxutils.quoteattr(attr)
def options(self, parser, env):
"""Sets additional command line options."""
Plugin.options(self, parser, env)
parser.add_option(
'--xunit-file', action='store',
dest='xunit_file', metavar="FILE",
default=env.get('NOSE_XUNIT_FILE', 'nosetests.xml'),
help=("Path to xml file to store the xunit report in. "
"Default is nosetests.xml in the working directory "
"[NOSE_XUNIT_FILE]"))
parser.add_option(
'--xunit-testsuite-name', action='store',
dest='xunit_testsuite_name', metavar="PACKAGE",
default=env.get('NOSE_XUNIT_TESTSUITE_NAME', 'nosetests'),
help=("Name of the testsuite in the xunit xml, generated by plugin. "
"Default test suite name is nosetests."))
def configure(self, options, config):
"""Configures the xunit plugin."""
Plugin.configure(self, options, config)
self.config = config
if self.enabled:
self.stats = {'errors': 0,
'failures': 0,
'passes': 0,
'skipped': 0
}
self.errorlist = []
self.error_report_file_name = os.path.realpath(options.xunit_file)
self.xunit_testsuite_name = options.xunit_testsuite_name
def report(self, stream):
"""Writes an Xunit-formatted XML file
The file includes a report of test errors and failures.
"""
self.error_report_file = codecs.open(self.error_report_file_name, 'w',
self.encoding, 'replace')
self.stats['encoding'] = self.encoding
self.stats['testsuite_name'] = self.xunit_testsuite_name
self.stats['total'] = (self.stats['errors'] + self.stats['failures']
+ self.stats['passes'] + self.stats['skipped'])
self.error_report_file.write(
u'<?xml version="1.0" encoding="%(encoding)s"?>'
u'<testsuite name="%(testsuite_name)s" tests="%(total)d" '
            u'errors="%(errors)d" failures="%(failures)d" '
            u'skip="%(skipped)d">' % self.stats)
        self.error_report_file.write(u''.join([force_unicode(e, self.encoding)
                                               for e in self.errorlist]))
        self.error_report_file.write(u'</testsuite>')
        self.error_report_file.close()
        if self.config.verbosity > 1:
            stream.writeln("-" * 70)
            stream.writeln("XML: %s" % self.error_report_file.name)

    def _startCapture(self):
        self._capture_stack.append((sys.stdout, sys.stderr))
        self._currentStdout = StringIO()
        self._currentStderr = StringIO()
        sys.stdout = Tee(self.encoding, self._currentStdout, sys.stdout)
        sys.stderr = Tee(self.encoding, self._currentStderr, sys.stderr)

    def startContext(self, context):
        self._startCapture()

    def stopContext(self, context):
        self._endCapture()

    def beforeTest(self, test):
        """Initializes a timer before starting a test."""
        self._timer = time()
        self._startCapture()

    def _endCapture(self):
        if self._capture_stack:
            sys.stdout, sys.stderr = self._capture_stack.pop()

    def afterTest(self, test):
        self._endCapture()
        self._currentStdout = None
        self._currentStderr = None

    def finalize(self, test):
        while self._capture_stack:
            self._endCapture()

    def _getCapturedStdout(self):
        if self._currentStdout:
            value = self._currentStdout.getvalue()
            if value:
                return '<system-out><![CDATA[%s]]></system-out>' % escape_cdata(
                    value)
        return ''

    def _getCapturedStderr(self):
        if self._currentStderr:
            value = self._currentStderr.getvalue()
            if value:
                return '<system-err><![CDATA[%s]]></system-err>' % escape_cdata(
                    value)
        return ''

    def addError(self, test, err, capt=None):
        """Add error output to Xunit report.
        """
        taken = self._timeTaken()

        if issubclass(err[0], SkipTest):
            type = 'skipped'
            self.stats['skipped'] += 1
        else:
            type = 'error'
            self.stats['errors'] += 1

        tb = format_exception(err, self.encoding)
        id = test.id()

        self.errorlist.append(
            u'<testcase classname=%(cls)s name=%(name)s time="%(taken).3f">'
            u'<%(type)s type=%(errtype)s message=%(message)s><![CDATA[%(tb)s]]>'
            u'</%(type)s>%(systemout)s%(systemerr)s</testcase>' %
            {'cls': self._quoteattr(id_split(id)[0]),
             'name': self._quoteattr(id_split(id)[-1]),
             'taken': taken,
             'type': type,
             'errtype': self._quoteattr(nice_classname(err[0])),
             'message': self._quoteattr(exc_message(err)),
             'tb': escape_cdata(tb),
             'systemout': self._getCapturedStdout(),
             'systemerr': self._getCapturedStderr(),
             })

    def addFailure(self, test, err, capt=None, tb_info=None):
        """Add failure output to Xunit report.
        """
        taken = self._timeTaken()
        tb = format_exception(err, self.encoding)
        self.stats['failures'] += 1
        id = test.id()

        self.errorlist.append(
            u'<testcase classname=%(cls)s name=%(name)s time="%(taken).3f">'
            u'<failure type=%(errtype)s message=%(message)s><![CDATA[%(tb)s]]>'
            u'</failure>%(systemout)s%(systemerr)s</testcase>' %
            {'cls': self._quoteattr(id_split(id)[0]),
             'name': self._quoteattr(id_split(id)[-1]),
             'taken': taken,
             'errtype': self._quoteattr(nice_classname(err[0])),
             'message': self._quoteattr(exc_message(err)),
             'tb': escape_cdata(tb),
             'systemout': self._getCapturedStdout(),
             'systemerr': self._getCapturedStderr(),
             })

    def addSuccess(self, test, capt=None):
        """Add success output to Xunit report.
        """
        taken = self._timeTaken()
        self.stats['passes'] += 1
        id = test.id()
        self.errorlist.append(
            '<testcase classname=%(cls)s name=%(name)s '
            'time="%(taken).3f">%(systemout)s%(systemerr)s</testcase>' %
            {'cls': self._quoteattr(id_split(id)[0]),
             'name': self._quoteattr(id_split(id)[-1]),
             'taken': taken,
             'systemout': self._getCapturedStdout(),
             'systemerr': self._getCapturedStderr(),
             })
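Note for reviewers of this vendored file: the report entries above are assembled as raw strings, so the only thing preventing an arbitrary test id or exception message from corrupting the XML is attribute quoting (`self._quoteattr`) and CDATA escaping. A minimal sketch of that quoting step, using only the standard library (`make_testcase` is an illustrative helper, not part of nose):

```python
from xml.sax.saxutils import quoteattr


def make_testcase(cls, name, taken):
    # quoteattr() wraps the value in quotes and escapes any characters
    # that would otherwise terminate the attribute early.
    return ('<testcase classname=%s name=%s time="%.3f"></testcase>'
            % (quoteattr(cls), quoteattr(name), taken))


record = make_testcase('pkg.TestFoo', 'test_bar', 0.512)
# record == '<testcase classname="pkg.TestFoo" name="test_bar" time="0.512"></testcase>'
```

The same idea, applied per-field, is why the plugin quotes `cls`, `name`, `errtype`, and `message` individually before interpolating them.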

lib/spack/external/nose/proxy.py (vendored, new file)

@@ -0,0 +1,188 @@
"""
Result Proxy
------------
The result proxy wraps the result instance given to each test. It
performs two functions: enabling extended error/failure reporting
and calling plugins.
As each result event is fired, plugins are called with the same event;
however, plugins are called with the nose.case.Test instance that
wraps the actual test. So when a test fails and calls
result.addFailure(self, err), the result proxy calls
addFailure(self.test, err) for each plugin. This allows plugins to
have a single stable interface for all test types, and also to
manipulate the test object itself by setting the `test` attribute of
the nose.case.Test that they receive.
"""
import logging
from nose.config import Config

log = logging.getLogger(__name__)


def proxied_attribute(local_attr, proxied_attr, doc):
    """Create a property that proxies attribute ``proxied_attr`` through
    the local attribute ``local_attr``.
    """
    def fget(self):
        return getattr(getattr(self, local_attr), proxied_attr)
    def fset(self, value):
        setattr(getattr(self, local_attr), proxied_attr, value)
    def fdel(self):
        delattr(getattr(self, local_attr), proxied_attr)
    return property(fget, fset, fdel, doc)


class ResultProxyFactory(object):
    """Factory for result proxies. Generates a ResultProxy bound to each test
    and the result passed to the test.
    """
    def __init__(self, config=None):
        if config is None:
            config = Config()
        self.config = config
        self.__prepared = False
        self.__result = None

    def __call__(self, result, test):
        """Return a ResultProxy for the current test.

        On first call, plugins are given a chance to replace the
        result used for the remaining tests. If a plugin returns a
        value from prepareTestResult, that object will be used as the
        result for all tests.
        """
        if not self.__prepared:
            self.__prepared = True
            plug_result = self.config.plugins.prepareTestResult(result)
            if plug_result is not None:
                self.__result = result = plug_result
        if self.__result is not None:
            result = self.__result
        return ResultProxy(result, test, config=self.config)


class ResultProxy(object):
    """Proxy to TestResults (or other results handler).

    One ResultProxy is created for each nose.case.Test. The result
    proxy calls plugins with the nose.case.Test instance (instead of
    the wrapped test case) as each result call is made. Finally, the
    real result method is called, also with the nose.case.Test
    instance as the test parameter.
    """
    def __init__(self, result, test, config=None):
        if config is None:
            config = Config()
        self.config = config
        self.plugins = config.plugins
        self.result = result
        self.test = test

    def __repr__(self):
        return repr(self.result)

    def _prepareErr(self, err):
        if not isinstance(err[1], Exception) and isinstance(err[0], type):
            # Turn value back into an Exception (required in Python 3.x).
            # Plugins do all sorts of crazy things with exception values.
            # Convert it to a custom subclass of Exception with the same
            # name as the actual exception to make it print correctly.
            value = type(err[0].__name__, (Exception,), {})(err[1])
            err = (err[0], value, err[2])
        return err

    def assertMyTest(self, test):
        # The test I was called with must be my .test or my
        # .test's .test. or my .test.test's .case
        case = getattr(self.test, 'test', None)
        assert (test is self.test
                or test is case
                or test is getattr(case, '_nose_case', None)), (
                "ResultProxy for %r (%s) was called with test %r (%s)"
                % (self.test, id(self.test), test, id(test)))

    def afterTest(self, test):
        self.assertMyTest(test)
        self.plugins.afterTest(self.test)
        if hasattr(self.result, "afterTest"):
            self.result.afterTest(self.test)

    def beforeTest(self, test):
        self.assertMyTest(test)
        self.plugins.beforeTest(self.test)
        if hasattr(self.result, "beforeTest"):
            self.result.beforeTest(self.test)

    def addError(self, test, err):
        self.assertMyTest(test)
        plugins = self.plugins
        plugin_handled = plugins.handleError(self.test, err)
        if plugin_handled:
            return
        # test.passed is set in result, to account for error classes
        formatted = plugins.formatError(self.test, err)
        if formatted is not None:
            err = formatted
        plugins.addError(self.test, err)
        self.result.addError(self.test, self._prepareErr(err))
        if not self.result.wasSuccessful() and self.config.stopOnError:
            self.shouldStop = True

    def addFailure(self, test, err):
        self.assertMyTest(test)
        plugins = self.plugins
        plugin_handled = plugins.handleFailure(self.test, err)
        if plugin_handled:
            return
        self.test.passed = False
        formatted = plugins.formatFailure(self.test, err)
        if formatted is not None:
            err = formatted
        plugins.addFailure(self.test, err)
        self.result.addFailure(self.test, self._prepareErr(err))
        if self.config.stopOnError:
            self.shouldStop = True

    def addSkip(self, test, reason):
        # 2.7 compat shim
        from nose.plugins.skip import SkipTest
        self.assertMyTest(test)
        plugins = self.plugins
        if not isinstance(reason, Exception):
            # for Python 3.2+
            reason = Exception(reason)
        plugins.addError(self.test, (SkipTest, reason, None))
        self.result.addSkip(self.test, reason)

    def addSuccess(self, test):
        self.assertMyTest(test)
        self.plugins.addSuccess(self.test)
        self.result.addSuccess(self.test)

    def startTest(self, test):
        self.assertMyTest(test)
        self.plugins.startTest(self.test)
        self.result.startTest(self.test)

    def stop(self):
        self.result.stop()

    def stopTest(self, test):
        self.assertMyTest(test)
        self.plugins.stopTest(self.test)
        self.result.stopTest(self.test)

    # proxied attributes
    shouldStop = proxied_attribute('result', 'shouldStop',
                                   """Should the test run stop?""")
    errors = proxied_attribute('result', 'errors',
                               """Tests that raised an exception""")
    failures = proxied_attribute('result', 'failures',
                                 """Tests that failed""")
    testsRun = proxied_attribute('result', 'testsRun',
                                 """Number of tests run""")

lib/spack/external/nose/pyversion.py (vendored, new file)

@@ -0,0 +1,215 @@
"""
This module contains fixups for using nose under different versions of Python.
"""
import sys
import os
import traceback
import types
import inspect
import nose.util

__all__ = ['make_instancemethod', 'cmp_to_key', 'sort_list', 'ClassType',
           'TypeType', 'UNICODE_STRINGS', 'unbound_method', 'ismethod',
           'bytes_', 'is_base_exception', 'force_unicode', 'exc_to_unicode',
           'format_exception']

# In Python 3.x, all strings are unicode (the call to 'unicode()' in the 2.x
# source will be replaced with 'str()' when running 2to3, so this test will
# then become true)
UNICODE_STRINGS = (type(unicode()) == type(str()))

if sys.version_info[:2] < (3, 0):
    def force_unicode(s, encoding='UTF-8'):
        try:
            s = unicode(s)
        except UnicodeDecodeError:
            s = str(s).decode(encoding, 'replace')
        return s
else:
    def force_unicode(s, encoding='UTF-8'):
        return str(s)

# new.instancemethod() is obsolete for new-style classes (Python 3.x)
# We need to use descriptor methods instead.
try:
    import new
    def make_instancemethod(function, instance):
        return new.instancemethod(function.im_func, instance,
                                  instance.__class__)
except ImportError:
    def make_instancemethod(function, instance):
        return function.__get__(instance, instance.__class__)

# To be forward-compatible, we do all list sorts using keys instead of cmp
# functions. However, part of the unittest.TestLoader API involves a
# user-provideable cmp function, so we need some way to convert that.
def cmp_to_key(mycmp):
    'Convert a cmp= function into a key= function'
    class Key(object):
        def __init__(self, obj):
            self.obj = obj
        def __lt__(self, other):
            return mycmp(self.obj, other.obj) < 0
        def __gt__(self, other):
            return mycmp(self.obj, other.obj) > 0
        def __eq__(self, other):
            return mycmp(self.obj, other.obj) == 0
    return Key

# Python 2.3 also does not support list-sorting by key, so we need to convert
# keys to cmp functions if we're running on old Python..
if sys.version_info < (2, 4):
    def sort_list(l, key, reverse=False):
        if reverse:
            return l.sort(lambda a, b: cmp(key(b), key(a)))
        else:
            return l.sort(lambda a, b: cmp(key(a), key(b)))
else:
    def sort_list(l, key, reverse=False):
        return l.sort(key=key, reverse=reverse)

# In Python 3.x, all objects are "new style" objects descended from 'type',
# and thus types.ClassType and types.TypeType don't exist anymore. For
# compatibility, we make sure they still work.
if hasattr(types, 'ClassType'):
    ClassType = types.ClassType
    TypeType = types.TypeType
else:
    ClassType = type
    TypeType = type

# The following emulates the behavior (we need) of an 'unbound method' under
# Python 3.x (namely, the ability to have a class associated with a function
# definition so that things can do stuff based on its associated class)
class UnboundMethod:
    def __init__(self, cls, func):
        # Make sure we have all the same attributes as the original function,
        # so that the AttributeSelector plugin will work correctly...
        self.__dict__ = func.__dict__.copy()
        self._func = func
        self.__self__ = UnboundSelf(cls)
        if sys.version_info < (3, 0):
            self.im_class = cls
        self.__doc__ = getattr(func, '__doc__', None)

    def address(self):
        cls = self.__self__.cls
        modname = cls.__module__
        module = sys.modules[modname]
        filename = getattr(module, '__file__', None)
        if filename is not None:
            filename = os.path.abspath(filename)
        return (nose.util.src(filename), modname, "%s.%s" % (cls.__name__,
                                                             self._func.__name__))

    def __call__(self, *args, **kwargs):
        return self._func(*args, **kwargs)

    def __getattr__(self, attr):
        return getattr(self._func, attr)

    def __repr__(self):
        return '<unbound method %s.%s>' % (self.__self__.cls.__name__,
                                           self._func.__name__)

class UnboundSelf:
    def __init__(self, cls):
        self.cls = cls

    # We have to do this hackery because Python won't let us override the
    # __class__ attribute...
    def __getattribute__(self, attr):
        if attr == '__class__':
            return self.cls
        else:
            return object.__getattribute__(self, attr)

def unbound_method(cls, func):
    if inspect.ismethod(func):
        return func
    if not inspect.isfunction(func):
        raise TypeError('%s is not a function' % (repr(func),))
    return UnboundMethod(cls, func)

def ismethod(obj):
    return inspect.ismethod(obj) or isinstance(obj, UnboundMethod)

# Make a pseudo-bytes function that can be called without the encoding arg:
if sys.version_info >= (3, 0):
    def bytes_(s, encoding='utf8'):
        if isinstance(s, bytes):
            return s
        return bytes(s, encoding)
else:
    def bytes_(s, encoding=None):
        return str(s)

if sys.version_info[:2] >= (2, 6):
    def isgenerator(o):
        if isinstance(o, UnboundMethod):
            o = o._func
        return inspect.isgeneratorfunction(o) or inspect.isgenerator(o)
else:
    try:
        from compiler.consts import CO_GENERATOR
    except ImportError:
        # IronPython doesn't have a compiler module
        CO_GENERATOR = 0x20

    def isgenerator(func):
        try:
            return func.func_code.co_flags & CO_GENERATOR != 0
        except AttributeError:
            return False

# Make a function to help check if an exception is derived from BaseException.
# In Python 2.4, we just use Exception instead.
if sys.version_info[:2] < (2, 5):
    def is_base_exception(exc):
        return isinstance(exc, Exception)
else:
    def is_base_exception(exc):
        return isinstance(exc, BaseException)

if sys.version_info[:2] < (3, 0):
    def exc_to_unicode(ev, encoding='utf-8'):
        if is_base_exception(ev):
            if not hasattr(ev, '__unicode__'):
                # 2.5-
                if not hasattr(ev, 'message'):
                    # 2.4
                    msg = len(ev.args) and ev.args[0] or ''
                else:
                    msg = ev.message
                msg = force_unicode(msg, encoding=encoding)
                clsname = force_unicode(ev.__class__.__name__,
                                        encoding=encoding)
                ev = u'%s: %s' % (clsname, msg)
        elif not isinstance(ev, unicode):
            ev = repr(ev)
        return force_unicode(ev, encoding=encoding)
else:
    def exc_to_unicode(ev, encoding='utf-8'):
        return str(ev)

def format_exception(exc_info, encoding='UTF-8'):
    ec, ev, tb = exc_info

    # Our exception object may have been turned into a string, and Python 3's
    # traceback.format_exception() doesn't take kindly to that (it expects an
    # actual exception object). So we work around it, by doing the work
    # ourselves if ev is not an exception object.
    if not is_base_exception(ev):
        tb_data = force_unicode(
            ''.join(traceback.format_tb(tb)),
            encoding)
        ev = exc_to_unicode(ev)
        return tb_data + ev
    else:
        return force_unicode(
            ''.join(traceback.format_exception(*exc_info)),
            encoding)
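The `cmp_to_key` shim in this file mirrors `functools.cmp_to_key` (which did not exist before Python 2.7). It can be exercised like this; `reverse_cmp` is an illustrative comparator, not part of nose:

```python
def cmp_to_key(mycmp):
    # Same shape as the shim above: wrap each element in a Key object
    # whose rich comparisons delegate to the old-style cmp function.
    class Key(object):
        def __init__(self, obj):
            self.obj = obj
        def __lt__(self, other):
            return mycmp(self.obj, other.obj) < 0
        def __gt__(self, other):
            return mycmp(self.obj, other.obj) > 0
        def __eq__(self, other):
            return mycmp(self.obj, other.obj) == 0
    return Key


def reverse_cmp(a, b):
    # Old-style comparator that sorts in descending order.
    return (b > a) - (b < a)


data = [3, 1, 2]
data.sort(key=cmp_to_key(reverse_cmp))
# data is now [3, 2, 1]
```

`sort()` builds a `Key` around each element and only ever calls `__lt__`, which is why wrapping the comparator in rich-comparison methods is enough.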

Some files were not shown because too many files have changed in this diff.