Merge branch 'develop' of https://github.com/LLNL/spack into features/install_with_phases_rebase

Conflicts:
	lib/spack/spack/cmd/install.py
	lib/spack/spack/cmd/setup.py
alalazo
2016-10-21 12:38:43 +02:00
336 changed files with 15631 additions and 945 deletions

View File

@@ -39,27 +39,18 @@ Spack can install:
.. command-output:: spack list
The packages are listed by name in alphabetical order.
A pattern to match with no wildcards, ``*`` or ``?``,
will be treated as though it started and ended with
``*``, so ``util`` is equivalent to ``*util*``. All patterns will be treated
as case-insensitive. You can also add the ``-d`` flag to search the description of
the package in addition to the name. Some examples:
All packages whose names contain "sql" case insensitive:
All packages whose names contain "sql":
.. command-output:: spack list sql
All packages whose names or descriptions contain documentation:
.. command-output:: spack list --search-description documentation
@@ -1333,6 +1324,41 @@ load two or more versions of the same software at the same time.
.. note::
The ``conflict`` option is ``tcl`` specific
The names of environment modules generated by spack are not always easy to
fully comprehend due to the long hash in the name. There are two module
configuration options to help with that. The first is a global setting to
adjust the hash length. It can be set anywhere from 0 to 32 and has a default
length of 7. This is the representation of the hash in the module file name and
does not affect the size of the package hash. Be aware that the smaller the
hash length the more likely naming conflicts will occur. The following snippet
shows how to set hash length in the module file names:
.. code-block:: yaml

   modules:
     tcl:
       hash_length: 7
To help make module names more readable, and to help alleviate name conflicts
with a short hash, one can use the ``suffixes`` option in the modules
configuration file. This option will add strings to modules that match a spec.
For instance, the following config options,
.. code-block:: yaml

   modules:
     tcl:
       all:
         suffixes:
           ^python@2.7.12: 'python-2.7.12'
           ^openblas: 'openblas'
will add a ``python-2.7.12`` version string to any packages compiled with
python matching the spec, ``python@2.7.12``. This is useful to know which
version of python a set of python extensions is associated with. Likewise, the
``openblas`` string is attached to any program that has openblas in the spec,
most likely via the ``+blas`` variant specification.
^^^^^^^^^^^^^^^^^^^^^^^^^
Regenerating Module files
^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -1,247 +1,521 @@
.. _contribution-guide:
==================
Contribution Guide
==================
This guide is intended for developers or administrators who want to
contribute a new package, feature, or bugfix to Spack.
It assumes that you have at least some familiarity with Git VCS and Github.
The guide will show a few examples of contributing workflows and discuss
the granularity of pull-requests (PRs). It will also discuss the tests your
PR must pass in order to be accepted into Spack.
First, what is a PR? Quoting `Bitbucket's tutorials <https://www.atlassian.com/git/tutorials/making-a-pull-request/>`_:
Pull requests are a mechanism for a developer to notify team members that
they have **completed a feature**. The pull request is more than just a
notification—it's a dedicated forum for discussing the proposed feature.
The important part is a **completed feature**. The changes one proposes in a PR should
correspond to one feature/bugfix/extension/etc. One can create PRs with
changes relevant to different ideas; however, reviewing such PRs becomes tedious
and error prone. If possible, try to follow the **one-PR-one-package/feature** rule.
Spack uses a rough approximation of the `Git Flow <http://nvie.com/posts/a-successful-git-branching-model/>`_
branching model. The develop branch contains the latest contributions, and
master is always tagged and points to the latest stable release. Therefore, when
you send your request, make ``develop`` the destination branch on the
`Spack repository <https://github.com/LLNL/spack>`_.
----------------------
Continuous Integration
----------------------
Spack uses `Travis CI <https://travis-ci.org/LLNL/spack>`_ for Continuous Integration
testing. This means that every time you submit a pull request, a series of tests will
be run to make sure you didn't accidentally introduce any bugs into Spack. Your PR
will not be accepted until it passes all of these tests. While you can certainly wait
for the results of these tests after submitting a PR, we recommend that you run them
locally to speed up the review process.
If you take a look in ``$SPACK_ROOT/.travis.yml``, you'll notice that we test
against Python 2.6 and 2.7. We currently perform 3 types of tests:
^^^^^^^^^^
Unit Tests
^^^^^^^^^^
Unit tests ensure that core Spack features like fetching or spec resolution are
working as expected. If your PR only adds new packages or modifies existing ones,
there's very little chance that your changes could cause the unit tests to fail.
However, if you make changes to Spack's core libraries, you should run the unit
tests to make sure you didn't break anything.
Since they test things like fetching from VCS repos, the unit tests require
`git <https://git-scm.com/>`_, `mercurial <https://www.mercurial-scm.org/>`_,
and `subversion <https://subversion.apache.org/>`_ to run. Make sure these are
installed on your system and can be found in your ``PATH``. All of these can be
installed with Spack or with your system package manager.
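For example, if any of them are missing, you can install them with Spack itself:

.. code-block:: console

   $ spack install git
   $ spack install mercurial
   $ spack install subversion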
To run *all* of the unit tests, use:
.. code-block:: console
$ spack test
These tests may take several minutes to complete. If you know you are only
modifying a single Spack feature, you can run a single unit test at a time:
.. code-block:: console
$ spack test architecture
This allows you to develop iteratively: make a change, test that change, make
another change, test that change, etc. To get a list of all available unit
tests, run:
.. command-output:: spack test --list
Unit tests are crucial to making sure bugs aren't introduced into Spack. If you
are modifying core Spack libraries or adding new functionality, please consider
adding new unit tests or strengthening existing tests.
.. note::
There is also a ``run-unit-tests`` script in ``share/spack/qa`` that runs the
unit tests. Afterwards, it reports back to Coverage with the percentage of Spack
that is covered by unit tests. This script is designed for Travis CI. If you
want to run the unit tests yourself, we suggest you use ``spack test``.
^^^^^^^^^^^^
Flake8 Tests
^^^^^^^^^^^^
Spack uses `Flake8 <http://flake8.pycqa.org/en/latest/>`_ to test for
`PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_ conformance. PEP 8 is
a series of style guides for Python that provide suggestions for everything
from variable naming to indentation. In order to limit the number of PRs that
were mostly style changes, we decided to enforce PEP 8 conformance. Your PR
needs to comply with PEP 8 in order to be accepted.
Testing for PEP 8 compliance is easy. Simply add the quality assurance
directory to your ``PATH`` and run the flake8 script:
.. code-block:: console
$ export PATH+=":$SPACK_ROOT/share/spack/qa"
$ run-flake8-tests
``run-flake8-tests`` has a couple of advantages over running ``flake8`` by hand:
#. It only tests files that you have modified since branching off of develop.
#. It works regardless of what directory you are in.
#. It automatically adds approved exemptions from the flake8 checks. For example,
URLs are often longer than 80 characters, so we exempt them from the line
length checks. We also exempt lines that start with "homepage", "url", "version",
"variant", "depends_on", and "extends" in the ``package.py`` files.
More approved flake8 exemptions can be found
`here <https://github.com/LLNL/spack/blob/develop/.flake8>`_.
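For instance, a ``package.py`` line like the following (the URL is purely
illustrative) would be exempt from the line-length check because it starts
with ``homepage``:

.. code-block:: python

   # Longer than 79 characters, but exempt because the line starts with 'homepage':
   homepage = "https://www.example.com/projects/some-package/docs/a/very/long/homepage/url"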
If all is well, you'll see something like this:
.. code-block:: console
$ run-flake8-tests
Dependencies found.
=======================================================
flake8: running flake8 code checks on spack.
Modified files:
var/spack/repos/builtin/packages/hdf5/package.py
var/spack/repos/builtin/packages/hdf/package.py
var/spack/repos/builtin/packages/netcdf/package.py
=======================================================
Flake8 checks were clean.
However, if you aren't compliant with PEP 8, flake8 will complain:
.. code-block:: console
var/spack/repos/builtin/packages/netcdf/package.py:26: [F401] 'os' imported but unused
var/spack/repos/builtin/packages/netcdf/package.py:61: [E303] too many blank lines (2)
var/spack/repos/builtin/packages/netcdf/package.py:106: [E501] line too long (92 > 79 characters)
Flake8 found errors.
Most of the error messages are straightforward, but if you don't understand what
they mean, just ask questions about them when you submit your PR. The line numbers
will change if you add or delete lines, so simply run ``run-flake8-tests`` again
to update them.
.. tip::
Try fixing flake8 errors in reverse order. This eliminates the need for
multiple runs of ``flake8`` just to re-compute line numbers and makes it
much easier to fix errors directly off of the Travis output.
.. warning::
Flake8 requires setuptools in order to run. If you installed ``py-flake8``
with Spack, make sure to add ``py-setuptools`` to your ``PYTHONPATH``.
Otherwise, you will get an error message like:
.. code-block:: console
Traceback (most recent call last):
File: "/usr/bin/flake8", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
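One way to set this up is to prepend the package's site-packages directory,
found via ``spack location``, to ``PYTHONPATH`` (a sketch, assuming a
Python 2.7 layout):

.. code-block:: console

   $ export PYTHONPATH="$(spack location -i py-setuptools)/lib/python2.7/site-packages:$PYTHONPATH"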
^^^^^^^^^^^^^^^^^^^
Documentation Tests
^^^^^^^^^^^^^^^^^^^
Spack uses `Sphinx <http://www.sphinx-doc.org/en/stable/>`_ to build its
documentation. In order to prevent things like broken links and missing imports,
we added documentation tests that build the documentation and fail if there
are any warning or error messages.
Building the documentation requires several dependencies, all of which can be
installed with Spack:
* sphinx
* graphviz
* git
* mercurial
* subversion
.. warning::
Sphinx has `several required dependencies <https://github.com/LLNL/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinx/package.py>`_.
If you installed ``py-sphinx`` with Spack, make sure to add all of these
dependencies to your ``PYTHONPATH``. The easiest way to do this is to run
``spack activate py-sphinx`` so that all of the dependencies are symlinked
to a central location. If you see an error message like:
.. code-block:: console
Traceback (most recent call last):
File: "/usr/bin/flake8", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
that means Sphinx couldn't find setuptools in your ``PYTHONPATH``.
Once all of the dependencies are installed, you can try building the documentation:
.. code-block:: console
$ cd "$SPACK_ROOT/lib/spack/docs"
$ make clean
$ make
If you see any warning or error messages, you will have to correct those before
your PR is accepted.
.. note::
There is also a ``run-doc-tests`` script in the Quality Assurance directory.
The only difference between running this script and running ``make`` by hand
is that the script will exit immediately if it encounters an error or warning.
This is necessary for Travis CI. If you made a lot of documentation changes, it
is much quicker to run ``make`` by hand so that you can see all of the warnings
at once.
If you are editing the documentation, you should obviously be running the
documentation tests. But even if you are simply adding a new package, your
changes could cause the documentation tests to fail:
.. code-block:: console
package_list.rst:8745: WARNING: Block quote ends without a blank line; unexpected unindent.
At first, this error message will mean nothing to you, since you didn't edit
that file. That is, until you look at line 8745 of the file in question:
.. code-block:: rst
Description:
NetCDF is a set of software libraries and self-describing, machine-
independent data formats that support the creation, access, and sharing
of array-oriented scientific data.
Our documentation includes :ref:`a list of all Spack packages <package-list>`.
If you add a new package, its docstring is added to this page. The problem in
this case was that the docstring looked like:
.. code-block:: python
class Netcdf(Package):
    """
    NetCDF is a set of software libraries and self-describing,
    machine-independent data formats that support the creation,
    access, and sharing of array-oriented scientific data.
    """
Docstrings cannot start with a newline character, or else Sphinx will complain.
Instead, they should look like:
.. code-block:: python
class Netcdf(Package):
    """NetCDF is a set of software libraries and self-describing,
    machine-independent data formats that support the creation,
    access, and sharing of array-oriented scientific data."""
Documentation changes can result in much more obfuscated warning messages.
If you don't understand what they mean, feel free to ask when you submit
your PR.
-------------
Git Workflows
-------------
Spack is still in the beta stages of development. Most of our users run off of
the develop branch, and fixes and new features are constantly being merged. So
how do you keep up-to-date with upstream while maintaining your own local
differences and contributing PRs to Spack?
^^^^^^^^^
Branching
^^^^^^^^^
The easiest way to contribute a pull request is to make all of your changes on
new branches. Make sure your ``develop`` is up-to-date and create a new branch
off of it:
.. code-block:: console
$ git checkout develop
$ git pull upstream develop
$ git branch <descriptive_branch_name>
$ git checkout <descriptive_branch_name>
Here we assume that the local ``develop`` branch tracks the upstream develop
branch of Spack. This is not a requirement and you could also do the same with
remote branches. But for some it is more convenient to have a local branch that
tracks upstream.
Normally we prefer that commits pertaining to a package ``<package-name>`` have
a message ``<package-name>: descriptive message``. It is important to add a
descriptive message so that others, who might be looking at your changes later
(in a year or maybe two), would understand the rationale behind them.
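For example, a commit adding a new version to the ``netcdf`` package might be
created with (the version number is purely illustrative):

.. code-block:: console

   $ git commit -m "netcdf: add version 4.4.1"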
Now, you can make your changes while keeping the ``develop`` branch pure.
Edit a few files and commit them by running:
.. code-block:: console
$ git add <files_to_be_part_of_the_commit>
$ git commit --message <descriptive_message_of_this_particular_commit>
Next, push it to your remote fork and create a PR:
.. code-block:: console
$ git push origin <descriptive_branch_name> --set-upstream
GitHub provides a `tutorial <https://help.github.com/articles/about-pull-requests/>`_
on how to file a pull request. When you send the request, make ``develop`` the
destination branch.
If you need this change immediately and don't have time to wait for your PR to
be merged, you can always work on this branch. But if you have multiple PRs,
another option is to maintain a Frankenstein branch that combines all of your
other branches:
.. code-block:: console
$ git checkout develop
$ git branch <your_modified_develop_branch>
$ git checkout <your_modified_develop_branch>
$ git merge <descriptive_branch_name>
This can be done with each new PR you submit. Just make sure to keep this local
branch up-to-date with upstream ``develop`` too.
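For instance, a sketch of refreshing that combined branch:

.. code-block:: console

   $ git checkout <your_modified_develop_branch>
   $ git pull upstream develop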
^^^^^^^^^^^^^^
Cherry-Picking
^^^^^^^^^^^^^^
What if you made some changes to your local modified develop branch and already
committed them, but later decided to contribute them to Spack? You can use
cherry-picking to create a new branch with only these commits.
First, check out your local modified develop branch:
.. code-block:: console
$ git checkout <your_modified_develop_branch>
Now, get the hashes of the commits you want from the output of:
.. code-block:: console
$ git log
Next, create a new branch off of upstream ``develop`` and copy the commits
that you want in your PR:
.. code-block:: console
$ git checkout develop
$ git pull upstream develop
$ git branch <descriptive_branch_name>
$ git checkout <descriptive_branch_name>
$ git cherry-pick <hash>
$ git push origin <descriptive_branch_name> --set-upstream
Now you can create a PR from the web-interface of GitHub. The net result is as
follows:
#. You patched your local version of Spack and can use it further.
#. You "cherry-picked" these changes in a stand-alone branch and submitted it
as a PR upstream.
.. note::
It is important that whenever you change something that might be of
importance upstream, create a pull request as soon as possible. Do not wait
for weeks/months to do this, because:
#. you might forget why you modified certain files
#. it could get difficult to isolate this change into a stand-alone clean PR.
^^^^^^^^
Rebasing
^^^^^^^^
Other developers are constantly making contributions to Spack, possibly on the
same files that your PR changed. If their PR is merged before yours, it can
create a merge conflict. This means that your PR can no longer be automatically
merged without a chance of breaking your changes. In this case, you will be
asked to rebase on top of the latest upstream ``develop``.
First, make sure your develop branch is up-to-date:
.. code-block:: console
$ git checkout develop
$ git pull upstream develop
Now, we need to switch to the branch you submitted for your PR and rebase it
on top of develop:
.. code-block:: console
$ git checkout <descriptive_branch_name>
$ git rebase develop
Git will likely ask you to resolve conflicts. Edit the file that it says can't
be merged automatically and resolve the conflict. Then, run:
.. code-block:: console
$ git add <file_that_could_not_be_merged>
$ git rebase --continue
You may have to repeat this process multiple times until all conflicts are resolved.
Once this is done, simply force push your rebased branch to your remote fork:
.. code-block:: console
$ git push --force origin <descriptive_branch_name>
^^^^^^^^^^^^^^^^^^^^^^^^^
Rebasing with cherry-pick
^^^^^^^^^^^^^^^^^^^^^^^^^
You can also perform a rebase using ``cherry-pick``. First, create a temporary
backup branch:
.. code-block:: console
$ git checkout <descriptive_branch_name>
$ git branch tmp
If anything goes wrong, you can always go back to your ``tmp`` branch.
Now, look at the logs and save the hashes of any commits you would like to keep:
.. code-block:: console
$ git log
Next, go back to the original branch and reset it to ``develop``.
Before doing so, make sure that your local ``develop`` branch is up-to-date
with upstream:
.. code-block:: console
$ git checkout develop
$ git pull upstream develop
$ git checkout <descriptive_branch_name>
$ git reset --hard develop
Now you can cherry-pick relevant commits:
.. code-block:: console
$ git cherry-pick <hash1>
$ git cherry-pick <hash2>
Push the modified branch to your fork:
.. code-block:: console
$ git push --force origin <descriptive_branch_name>
If everything looks good, delete the backup branch:
.. code-block:: console
$ git branch --delete --force tmp
^^^^^^^^^^^^^^^^^^
Re-writing History
^^^^^^^^^^^^^^^^^^
Sometimes you may end up on a branch that has diverged so much from develop
that it cannot easily be rebased. If the current commit history is more of
an experimental nature and only the net result is important, you may rewrite
the history.

First, merge upstream ``develop`` and reset your branch to it. On the branch
in question, run:
.. code-block:: console
$ git merge develop
$ git reset develop
At this point your branch will point to the same commit as develop and
thereby the two are indistinguishable. However, all the files that were
previously modified will stay as such. In other words, you do not lose the
changes you made. Changes can be reviewed by looking at diffs:
.. code-block:: console
$ git status
$ git diff
One can also run a GUI to visualize the current changes:
.. code-block:: console
$ git difftool
The next step is to rewrite the history by adding files and creating commits:
.. code-block:: console
$ git add <files_to_be_part_of_commit>
$ git commit -m <descriptive_message>
Should you need to split changes within a file into separate commits, use:
.. code-block:: console
$ git add <file> -p
$ git commit --message <descriptive_message>
After all changed files are committed, you can push the branch to your fork
and create a PR:
.. code-block:: console
$ git push origin --set-upstream

View File

@@ -3004,13 +3004,18 @@ Cleans up all of Spack's temporary and cached files. This can be used to
recover disk space if temporary files from interrupted or failed installs
accumulate in the staging area.
When called with ``--stage`` or without arguments, this removes all staged
files; it is equivalent to running ``spack clean`` for every package you
have fetched or staged.
When called with ``--downloads`` this will clear all resources
:ref:`cached <caching>` during installs.
When called with ``--user-cache`` this will remove caches in the user home
directory, including cached virtual indices.
To remove all of the above, the command can be called with ``--all``.
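For example, a quick sketch of the flags described above:

.. code-block:: console

   $ spack clean --stage        # remove all staged files
   $ spack clean --downloads    # clear resources cached during installs
   $ spack clean --user-cache   # remove caches in the user home directory
   $ spack clean --all          # all of the above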
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Keeping the stage directory on success
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -129,8 +129,9 @@
# User's editor from the environment
editor = Executable(os.environ.get("EDITOR", "vi"))
# If this is enabled, tools that use SSL should not verify
# certificates. e.g., curl should use the -k option.
insecure = False
# Whether to build in tmp space or directly in the stage_path.
# If this is true, then spack will make stage directories in

View File

@@ -83,7 +83,6 @@
import llnl.util.tty as tty
import spack
from spack.util.naming import mod_to_class
from spack.util.environment import get_path
from spack.util.multiproc import parmap
@@ -276,6 +275,8 @@ def find_compilers(self, *paths):
# Once the paths are cleaned up, do a search for each type of
# compiler. We can spawn a bunch of parallel searches to reduce
# the overhead of spelunking all these directories.
# NOTE: we import spack.compilers here to avoid init order cycles
import spack.compilers
types = spack.compilers.all_compiler_types()
compiler_lists = parmap(lambda cmp_cls:
self.find_compiler(cmp_cls, *filtered_path),

View File

@@ -221,6 +221,8 @@ def set_compiler_environment_variables(pkg, env):
for mod in compiler.modules:
load_module(mod)
compiler.setup_custom_environment(env)
return env

View File

@@ -29,13 +29,8 @@
def setup_parser(subparser):
# User can only choose one
scope_group = subparser.add_mutually_exclusive_group()
subparser.add_argument('--scope', choices=spack.config.config_scopes,
help="Configuration scope to read/modify.")
sp = subparser.add_subparsers(metavar='SUBCOMMAND', dest='config_command')

View File

@@ -116,6 +116,10 @@ def install(self, spec, prefix):
# FIXME: Add additional dependencies if required.
depends_on('scons', type='build')""",
'bazel': """\
# FIXME: Add additional dependencies if required.
depends_on('bazel', type='build')""",
'python': """\
extends('python')
@@ -164,6 +168,10 @@ def install(self, spec, prefix):
scons('prefix={0}'.format(prefix))
scons('install')""",
'bazel': """\
# FIXME: Add logic to build and install here.
bazel()""",
'python': """\
# FIXME: Add logic to build and install here.
setup_py('install', '--prefix={0}'.format(prefix))""",
@@ -238,7 +246,8 @@ def __call__(self, stage, url):
(r'/CMakeLists.txt$', 'cmake'),
(r'/SConstruct$', 'scons'),
(r'/setup.py$', 'python'),
(r'/NAMESPACE$', 'R'),
(r'/WORKSPACE$', 'bazel')
]
# Peek inside the compressed file.

View File

@@ -34,12 +34,15 @@
def setup_parser(subparser):
subparser.add_argument(
'--only',
default='package,dependencies',
dest='things_to_install',
choices=['package', 'dependencies', 'package,dependencies'],
help="""Select the mode of installation.
The default is to install the package along with all its dependencies.
Alternatively one can decide to install only the package or only
the dependencies."""
)
subparser.add_argument(
'-j', '--jobs', action='store', type=int,
help="Explicitly set number of make jobs. Default is #cpus.")
@@ -62,18 +65,17 @@ def setup_parser(subparser):
'--dirty', action='store_true', dest='dirty',
help="Install a package *without* cleaning the environment.")
subparser.add_argument(
'package',
nargs=argparse.REMAINDER,
help="spec of the package to install"
)
subparser.add_argument(
'--run-tests', action='store_true', dest='run_tests',
help="Run tests during installation of a package.")
def install(parser, args):
if not args.package:
tty.die("install requires at least one package argument")
if args.jobs is not None:
@@ -83,19 +85,33 @@ def install(parser, args):
if args.no_checksum:
spack.do_checksum = False # TODO: remove this global.
# Parse cli arguments and construct a dictionary
# that will be passed to Package.do_install API
kwargs = {
'keep_prefix': args.keep_prefix,
'keep_stage': args.keep_stage,
'install_deps': 'dependencies' in args.things_to_install,
'make_jobs': args.jobs,
'run_tests': args.run_tests,
'verbose': args.verbose,
'fake': args.fake,
'dirty': args.dirty
}
# Spec from cli
specs = spack.cmd.parse_specs(args.package, concretize=True)
if len(specs) != 1:
tty.error('only one spec can be installed at a time.')
spec = specs.pop()
if args.things_to_install == 'dependencies':
# Install dependencies as-if they were installed
# for root (explicit=False in the DB)
kwargs['explicit'] = False
for s in spec.dependencies():
p = spack.repo.get(s)
p.do_install(**kwargs)
else:
package = spack.repo.get(spec)
kwargs['explicit'] = True
package.do_install(**kwargs)

View File

@@ -181,7 +181,6 @@ def install_single_spec(spec, number_of_jobs):
package.do_install(keep_prefix=False,
keep_stage=True,
install_deps=True,
make_jobs=number_of_jobs,
verbose=True,
fake=False)

View File

@@ -114,9 +114,15 @@ def fc_rpath_arg(self):
def __init__(self, cspec, operating_system,
paths, modules=[], alias=None, **kwargs):
self.operating_system = operating_system
self.spec = cspec
self.modules = modules
self.alias = alias
def check(exe):
if exe is None:
return None
exe = self._find_full_path(exe)
_verify_executables(exe)
return exe
@@ -138,11 +144,6 @@ def check(exe):
if value is not None:
self.flags[flag] = value.split()
@property
def version(self):
return self.spec.version
@@ -269,6 +270,21 @@ def check(key):
successful.reverse()
return dict(((v, p, s), path) for v, p, s, path in successful)
def _find_full_path(self, path):
"""Return the actual path for a tool.
Some toolchains use forwarding executables (particularly Xcode-based
toolchains) which can be manipulated by external environment variables.
This method should be used to extract the actual path used for a tool
by finding out the end executable the forwarding executables end up
running.
"""
return path
def setup_custom_environment(self, env):
"""Set any environment variables necessary to use the compiler."""
pass
def __repr__(self):
"""Return a string representation of the compiler toolchain."""
return self.__str__()

View File

@@ -23,11 +23,14 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
import re
import os
import spack
import spack.compiler as cpr
from spack.compiler import *
from spack.util.executable import *
import llnl.util.tty as tty
from spack.version import ver
from shutil import copytree, ignore_patterns
class Clang(Compiler):
@@ -107,3 +110,79 @@ def default_version(cls, comp):
cpr._version_cache[comp] = ver
return cpr._version_cache[comp]
def _find_full_path(self, path):
basename = os.path.basename(path)
if not self.is_apple or basename not in ('clang', 'clang++'):
return super(Clang, self)._find_full_path(path)
xcrun = Executable('xcrun')
full_path = xcrun('-f', basename, output=str)
return full_path.strip()
def setup_custom_environment(self, env):
"""Set the DEVELOPER_DIR environment for the Xcode toolchain.
On macOS, not all buildsystems support querying CC and CXX for the
compilers to use and instead query the Xcode toolchain for what
compiler to run. This side-steps the spack wrappers. In order to inject
spack into this setup, we need to copy (a subset of) Xcode.app and
replace the compiler executables with symlinks to the spack wrapper.
Currently, the stage is used to store the Xcode.app copies. We then set
the 'DEVELOPER_DIR' environment variables to cause the xcrun and
related tools to use this Xcode.app.
"""
super(Clang, self).setup_custom_environment(env)
if not self.is_apple:
return
xcode_select = Executable('xcode-select')
real_root = xcode_select('--print-path', output=str).strip()
real_root = os.path.dirname(os.path.dirname(real_root))
developer_root = os.path.join(spack.stage_path,
'xcode-select',
self.name,
str(self.version))
xcode_link = os.path.join(developer_root, 'Xcode.app')
if not os.path.exists(developer_root):
tty.warn('Copying Xcode from %s to %s in order to add spack '
'wrappers to it. Please do not interrupt.'
% (real_root, developer_root))
# We need to make a new Xcode.app instance, but with symlinks to
# the spack wrappers for the compilers it ships. This is necessary
# because some projects insist on just asking xcrun and related
# tools where the compiler runs. These tools are very hard to trick
# as they do realpath and end up ignoring the symlinks in a
# "softer" tree of nothing but symlinks in the right places.
copytree(real_root, developer_root, symlinks=True,
ignore=ignore_patterns('AppleTV*.platform',
'Watch*.platform',
'iPhone*.platform',
'Documentation',
'swift*'))
real_dirs = [
'Toolchains/XcodeDefault.xctoolchain/usr/bin',
'usr/bin',
]
bins = ['c++', 'c89', 'c99', 'cc', 'clang', 'clang++', 'cpp']
for real_dir in real_dirs:
dev_dir = os.path.join(developer_root,
'Contents',
'Developer',
real_dir)
for fname in os.listdir(dev_dir):
if fname in bins:
os.unlink(os.path.join(dev_dir, fname))
os.symlink(os.path.join(spack.build_env_path, 'cc'),
os.path.join(dev_dir, fname))
os.symlink(developer_root, xcode_link)
env.set('DEVELOPER_DIR', xcode_link)

View File

@@ -30,18 +30,47 @@
When Spack runs, it pulls configuration data from several config
directories, each of which contains configuration files. In Spack,
there are three configuration scopes (lowest to highest):
1. ``defaults``: Spack loads default configuration settings from
``$(prefix)/etc/spack/defaults/``. These settings are the "out of the
box" settings Spack will use without site- or user- modification, and
this is where settings that are versioned with Spack should go.
2. ``site``: This scope affects only this *instance* of Spack, and
overrides the ``defaults`` scope. Configuration files in
``$(prefix)/etc/spack/`` determine site scope. These can be used for
per-project settings (for users with their own spack instance) or for
site-wide settings (for admins maintaining a common spack instance).
3. ``user``: User configuration goes in the user's home directory,
specifically in ``~/.spack/``.
Spack may read configuration files from any of these locations. When
configurations conflict, settings from higher-precedence scopes override
lower-precedence settings.
Commands that modify scopes (``spack compilers``, ``spack config``,
etc.) take a ``--scope=<name>`` parameter that you can use to control
which scope is modified.
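For example, to edit the compiler configuration in the user scope only (a
sketch using the ``config`` command's ``edit`` subcommand):

.. code-block:: console

   $ spack config --scope=user edit compilers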
For each scope above, there can *also* be platform-specific
overrides. For example, on Blue Gene/Q machines, Spack needs to know the
location of cross-compilers for the compute nodes. This configuration is
in ``etc/spack/defaults/bgq/compilers.yaml``. It will take precedence
over settings in the ``defaults`` scope, but can still be overridden by
settings in ``site``, ``site/bgq``, ``user``, or ``user/bgq``. So, the
full list of scopes and their precedence is:
1. ``defaults``
2. ``defaults/<platform>``
3. ``site``
4. ``site/<platform>``
5. ``user``
6. ``user/<platform>``
Each configuration directory may contain several configuration files,
such as compilers.yaml or mirrors.yaml.
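As a hypothetical illustration of precedence, a user-scope
``~/.spack/mirrors.yaml`` such as the one below would override any
``mirrors`` settings from the ``defaults`` or ``site`` scopes:

.. code-block:: yaml

   # ~/.spack/mirrors.yaml -- user scope wins over defaults and site
   mirrors:
     local: file:///data/spack-mirror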
=========================
Configuration file format
@@ -118,6 +147,7 @@
Will make Spack take compilers *only* from the user configuration, and
the site configuration will be ignored.
"""
import copy
@@ -135,6 +165,7 @@
from llnl.util.filesystem import mkdirp
import spack
import spack.architecture
from spack.error import SpackError
import spack.schema
@@ -267,16 +298,30 @@ def clear(self):
"""Empty cached config information."""
self.sections = {}
#
# Below are configuration scopes.
#
# Each scope can have per-platfom overrides in subdirectories of the
# configuration directory.
#
_platform = spack.architecture.platform().name
"""Default configuration scope is the lowest-level scope. These are
versioned with Spack and can be overridden by sites or users."""
_defaults_path = os.path.join(spack.etc_path, 'spack', 'defaults')
ConfigScope('defaults', _defaults_path)
ConfigScope('defaults/%s' % _platform, os.path.join(_defaults_path, _platform))
"""Site configuration is per spack instance, for sites or projects.
No site-level configs should be checked into spack by default."""
_site_path = os.path.join(spack.etc_path, 'spack')
ConfigScope('site', _site_path)
ConfigScope('site/%s' % _platform, os.path.join(_site_path, _platform))
"""User configuration can override both spack defaults and site config."""
_user_path = spack.user_config_path
ConfigScope('user', _user_path)
ConfigScope('user/%s' % _platform, os.path.join(_user_path, _platform))
def highest_precedence_scope():

View File

@@ -271,8 +271,12 @@ def from_sourcing_files(*args, **kwargs):
env = EnvironmentModifications()
# Check if the files are actually there
files = [line.split(' ')[0] for line in args]
non_existing = [file for file in files if not os.path.isfile(file)]
if non_existing:
message = 'trying to source non-existing files\n'
message += '\n'.join(non_existing)
raise RuntimeError(message)
# Relevant kwd parameters and formats
info = dict(kwargs)
info.setdefault('shell', '/bin/bash')

View File

@@ -158,12 +158,20 @@ def __init__(self, url=None, digest=None, **kwargs):
self.digest = digest
self.expand_archive = kwargs.get('expand', True)
self.extra_curl_options = kwargs.get('curl_options', [])
self._curl = None
self.extension = kwargs.get('extension', None)
if not self.url:
raise ValueError("URLFetchStrategy requires a url for fetching.")
@property
def curl(self):
if not self._curl:
self._curl = which('curl', required=True)
return self._curl
@_needs_stage
def fetch(self):
self.stage.chdir()
@@ -196,15 +204,21 @@ def fetch(self):
self.url,
]
if spack.insecure:
curl_args.append('-k')
if sys.stdout.isatty():
curl_args.append('-#') # status bar when using a tty
else:
curl_args.append('-sS') # just errors when not.
curl_args += self.extra_curl_options
# Run curl but grab the mime type from the http headers
curl = self.curl
headers = curl(*curl_args, output=str, fail_on_error=False)
if curl.returncode != 0:
# clean up archive on failure.
if self.archive_file:
os.remove(self.archive_file)
@@ -212,12 +226,12 @@ def fetch(self):
if partial_file and os.path.exists(partial_file):
os.remove(partial_file)
if curl.returncode == 22:
# This is a 404. Curl will print the error.
raise FailedDownloadError(
self.url, "URL %s was not found!" % self.url)
elif curl.returncode == 60:
# This is a certificate error. Suggest spack -k
raise FailedDownloadError(
self.url,
@@ -233,7 +247,7 @@ def fetch(self):
# error, but print a spack message too
raise FailedDownloadError(
self.url,
"Curl failed with error %d" % spack.curl.returncode)
"Curl failed with error %d" % curl.returncode)
# Check if we somehow got an HTML file rather than the archive we
# asked for. We only look at the last content type, to handle
@@ -530,6 +544,12 @@ def git_version(self):
def git(self):
if not self._git:
self._git = which('git', required=True)
# If the user asked for insecure fetching, make that work
# with git as well.
if spack.insecure:
self._git.add_default_env('GIT_SSL_NO_VERIFY', 'true')
return self._git
@_needs_stage

View File

@@ -1115,8 +1115,6 @@ def do_install(self,
even with exceptions.
:param install_deps: Install dependencies before installing this \
package
:param fake: Don't really build; install fake stub files instead.
:param skip_patch: Skip patch stage of build if True.
:param verbose: Display verbose build output (by default, suppresses \
@@ -1161,7 +1159,6 @@ def do_install(self,
keep_prefix=keep_prefix,
keep_stage=keep_stage,
install_deps=install_deps,
fake=fake,
skip_patch=skip_patch,
verbose=verbose,
@@ -1169,11 +1166,6 @@ def do_install(self,
run_tests=run_tests,
dirty=dirty)
# Set run_tests flag before starting build.
self.run_tests = run_tests

View File

@@ -1,4 +1,4 @@
import platform
from spack.architecture import Platform, Target
from spack.operating_systems.mac_os import MacOs
@@ -22,6 +22,4 @@ def __init__(self):
@classmethod
def detect(self):
return 'darwin' in platform.system().lower()

View File

@@ -1,4 +1,3 @@
import platform
from spack.architecture import Platform, Target
from spack.operating_systems.linux_distro import LinuxDistro
@@ -27,6 +26,4 @@ def __init__(self):
@classmethod
def detect(self):
return 'linux' in platform.system().lower()

View File

@@ -98,7 +98,7 @@
import base64
import hashlib
import imp
import ctypes
from StringIO import StringIO
from operator import attrgetter
@@ -203,6 +203,9 @@
legal_deps = tuple(special_types) + alldeps
"""Max integer helps avoid passing too large a value to cyaml."""
maxint = 2 ** (ctypes.sizeof(ctypes.c_int) * 8 - 1) - 1
def validate_deptype(deptype):
if isinstance(deptype, str):
@@ -969,12 +972,12 @@ def dag_hash(self, length=None):
return self._hash[:length]
else:
yaml_text = syaml.dump(
self.to_node_dict(), default_flow_style=True, width=maxint)
sha = hashlib.sha1(yaml_text)
b32_hash = base64.b32encode(sha.digest()).lower()
if self.concrete:
self._hash = b32_hash
return b32_hash[:length]
def dag_hash_bit_prefix(self, bits):
"""Get the first <bits> bits of the DAG hash as an integer type."""

View File

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
if [[ "$1" == "intel64" ]] ; then
export FOO='intel64'
else
export FOO='default'
fi

View File

@@ -119,7 +119,8 @@ def test_source_files(self):
'spack', 'test', 'data')
files = [
join_path(datadir, 'sourceme_first.sh'),
join_path(datadir, 'sourceme_second.sh'),
join_path(datadir, 'sourceme_parameters.sh intel64')
]
env = EnvironmentModifications.from_sourcing_files(*files)
modifications = env.group_by_name()
@@ -134,6 +135,11 @@ def test_source_files(self):
self.assertEqual(len(modifications['NEW_VAR']), 1)
self.assertTrue(isinstance(modifications['NEW_VAR'][0], SetEnv))
self.assertEqual(modifications['NEW_VAR'][0].value, 'new')
self.assertEqual(len(modifications['FOO']), 1)
self.assertTrue(isinstance(modifications['FOO'][0], SetEnv))
self.assertEqual(modifications['FOO'][0].value, 'intel64')
# Unset variables
self.assertEqual(len(modifications['EMPTY_PATH_LIST']), 1)
self.assertTrue(isinstance(

View File

@@ -323,7 +323,7 @@ def parse_name_and_version(path):
def insensitize(string):
"""Change upper and lowercase letters to be case insensitive in
the provided string. e.g., 'a' becomes '[Aa]', 'B' becomes
'[bB]', etc. Use for building regexes."""
def to_ins(match):
char = match.group(1)

View File

@@ -40,6 +40,7 @@ class Executable(object):
def __init__(self, name):
self.exe = name.split(' ')
self.default_env = {}
self.returncode = None
if not self.exe:
@@ -48,6 +49,9 @@ def __init__(self, name):
def add_default_arg(self, arg):
self.exe.append(arg)
def add_default_env(self, key, value):
self.default_env[key] = value
@property
def command(self):
return ' '.join(self.exe)
@@ -103,7 +107,13 @@ def __call__(self, *args, **kwargs):
fail_on_error = kwargs.pop("fail_on_error", True)
ignore_errors = kwargs.pop("ignore_errors", ())
# environment
env = kwargs.get('env', None)
if env is None:
env = os.environ.copy()
env.update(self.default_env)
else:
    # dict.update() returns None, so build the merged dict explicitly
    # instead of assigning the result of copy().update().
    env = dict(self.default_env, **env)
# TODO: This is deprecated. Remove in a future version.
return_output = kwargs.pop("return_output", False)
@@ -149,6 +159,7 @@ def streamify(arg, mode):
cmd_line = "'%s'" % "' '".join(
map(lambda arg: arg.replace("'", "'\"'\"'"), cmd))
tty.debug(cmd_line)
try:

View File

@@ -32,6 +32,10 @@
"""
import yaml
try:
from yaml import CLoader as Loader, CDumper as Dumper
except ImportError:
from yaml import Loader, Dumper
from yaml.nodes import *
from yaml.constructor import ConstructorError
from ordereddict_backport import OrderedDict
@@ -64,7 +68,7 @@ def mark(obj, node):
obj._end_mark = node.end_mark
class OrderedLineLoader(yaml.Loader):
class OrderedLineLoader(Loader):
"""YAML loader that preserves order and line numbers.
Mappings read in by this loader behave like an ordered dict.
@@ -156,7 +160,7 @@ def construct_mapping(self, node, deep=False):
u'tag:yaml.org,2002:str', OrderedLineLoader.construct_yaml_str)
class OrderedLineDumper(yaml.Dumper):
class OrderedLineDumper(Dumper):
"""Dumper that preserves ordering and formats ``syaml_*`` objects.
This dumper preserves insertion ordering ``syaml_dict`` objects

View File

@@ -227,7 +227,16 @@ def find_versions_of_archive(*archive_urls, **kwargs):
# We'll be a bit more liberal and just look for the archive
# part, not the full path.
url_regex = os.path.basename(url_regex)
# We need to add a $ anchor to the end of the regex to prevent
# Spack from picking up signature files like:
# .asc
# .md5
# .sha256
# .sig
# However, SourceForge downloads still need to end in '/download'.
regexes.append(url_regex + '(\/download)?$')
# Build a dict version -> URL from any links that match the wildcards.
versions = {}