Merge branch 'develop' into features/shared
This commit is contained in:
@@ -30,11 +30,21 @@ Default is ``$spack/opt/spack``.
``install_hash_length`` and ``install_path_scheme``
---------------------------------------------------

The default Spack installation path can be very long and can create
problems for scripts with hardcoded shebangs. There are two parameters
to help with that. Firstly, the ``install_hash_length`` parameter can
set the length of the hash in the installation path from 1 to 32. The
default path uses the full 32 characters.

The default Spack installation path can be very long and can create problems
for scripts with hardcoded shebangs. Additionally, when using the Intel
compiler, and if there is also a long list of dependencies, the compiler may
segfault. If you see the following:

.. code-block:: console

   : internal error: ** The compiler has encountered an unexpected problem.
   ** Segmentation violation signal raised. **
   Access violation or stack overflow. Please contact Intel Support for assistance.

it may be because the variables containing dependency specs are too long. There
are two parameters to help with long path names. Firstly, the
``install_hash_length`` parameter can set the length of the hash in the
installation path from 1 to 32. The default path uses the full 32 characters.
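
A minimal sketch of the corresponding setting in ``config.yaml`` (the value
``7`` is an arbitrary illustration, not a recommendation):

.. code-block:: yaml

   config:
     install_hash_length: 7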

Secondly, it is also possible to modify the entire installation
scheme. By default Spack uses
307
lib/spack/docs/containers.rst
Normal file
@@ -0,0 +1,307 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _containers:

================
Container Images
================

Spack can be an ideal tool to set up images for containers since all the
features discussed in :ref:`environments` can greatly help to manage
the installation of software during the image build process. Nonetheless,
building a production image from scratch still requires a lot of
boilerplate to:

- Get Spack working within the image, possibly running as root
- Minimize the physical size of the software installed
- Properly update the system software in the base image

To spare users these tedious tasks, Spack provides a command
to automatically generate recipes for container images based on
Environments:

.. code-block:: console

   $ ls
   spack.yaml

   $ spack containerize
   # Build stage with Spack pre-installed and ready to be used
   FROM spack/centos7:latest as builder

   # What we want to install and how we want to install it
   # is specified in a manifest file (spack.yaml)
   RUN mkdir /opt/spack-environment \
       && (echo "spack:" \
       &&   echo "  specs:" \
       &&   echo "  - gromacs+mpi" \
       &&   echo "  - mpich" \
       &&   echo "  concretization: together" \
       &&   echo "  config:" \
       &&   echo "    install_tree: /opt/software" \
       &&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml

   # Install the software, remove unnecessary deps
   RUN cd /opt/spack-environment && spack install && spack gc -y

   # Strip all the binaries
   RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
       xargs file -i | \
       grep 'charset=binary' | \
       grep 'x-executable\|x-archive\|x-sharedlib' | \
       awk -F: '{print $1}' | xargs strip -s

   # Modifications to the environment that are necessary to run
   RUN cd /opt/spack-environment && \
       spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh


   # Bare OS image to run the installed executables
   FROM centos:7

   COPY --from=builder /opt/spack-environment /opt/spack-environment
   COPY --from=builder /opt/software /opt/software
   COPY --from=builder /opt/view /opt/view
   COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh

   RUN yum update -y && yum install -y epel-release && yum update -y \
       && yum install -y libgomp \
       && rm -rf /var/cache/yum && yum clean all

   RUN echo 'export PS1="\[$(tput bold)\]\[$(tput setaf 1)\][gromacs]\[$(tput setaf 2)\]\u\[$(tput sgr0)\]:\w $ \[$(tput sgr0)\]"' >> ~/.bashrc


   LABEL "app"="gromacs"
   LABEL "mpi"="mpich"

   ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]

The bits that make this automation possible are discussed in detail
below. All the images generated in this way will be based on
multi-stage builds with:

- A fat ``build`` stage containing common build tools and Spack itself
- A minimal ``final`` stage containing only the software requested by the user

-----------------
Spack Base Images
-----------------

Docker images with Spack preinstalled and ready to be used are
built on `Docker Hub <https://hub.docker.com/u/spack>`_
at every push to ``develop`` or to a release branch. The operating systems
currently supported are summarized in the table below:

.. _containers-supported-os:

.. list-table:: Supported operating systems
   :header-rows: 1

   * - Operating System
     - Base Image
     - Spack Image
   * - Ubuntu 16.04
     - ``ubuntu:16.04``
     - ``spack/ubuntu-xenial``
   * - Ubuntu 18.04
     - ``ubuntu:18.04``
     - ``spack/ubuntu-bionic``
   * - CentOS 6
     - ``centos:6``
     - ``spack/centos6``
   * - CentOS 7
     - ``centos:7``
     - ``spack/centos7``

All the images are tagged with the corresponding release of Spack:

.. image:: dockerhub_spack.png

with the exception of the ``latest`` tag that points to the HEAD
of the ``develop`` branch. These images are available for anyone
to use and take care of all the repetitive tasks that are necessary
to set up Spack within a container. All the container recipes generated
automatically by Spack use them as base images for their ``build`` stage.

-------------------------
Environment Configuration
-------------------------

Any Spack Environment can be used for the automatic generation of container
recipes. Sensible defaults are provided for things like the base image or the
version of Spack used in the image. If finer tuning is needed, it can be
obtained by adding the relevant metadata under the ``container`` attribute
of environments:

.. code-block:: yaml

   spack:
     specs:
     - gromacs+mpi
     - mpich

     container:
       # Select the format of the recipe e.g. docker,
       # singularity or anything else that is currently supported
       format: docker

       # Select from a valid list of images
       base:
         image: "centos:7"
         spack: develop

       # Whether or not to strip binaries
       strip: true

       # Additional system packages that are needed at runtime
       os_packages:
       - libgomp

       # Extra instructions
       extra_instructions:
         final: |
           RUN echo 'export PS1="\[$(tput bold)\]\[$(tput setaf 1)\][gromacs]\[$(tput setaf 2)\]\u\[$(tput sgr0)\]:\w $ \[$(tput sgr0)\]"' >> ~/.bashrc

       # Labels for the image
       labels:
         app: "gromacs"
         mpi: "mpich"

The tables below describe the configuration options that are currently supported:

.. list-table:: General configuration options for the ``container`` section of ``spack.yaml``
   :header-rows: 1

   * - Option Name
     - Description
     - Allowed Values
     - Required
   * - ``format``
     - The format of the recipe
     - ``docker`` or ``singularity``
     - Yes
   * - ``base:image``
     - Base image for the ``final`` stage
     - See :ref:`containers-supported-os`
     - Yes
   * - ``base:spack``
     - Version of Spack
     - Valid tags for ``base:image``
     - Yes
   * - ``strip``
     - Whether to strip binaries
     - ``true`` (default) or ``false``
     - No
   * - ``os_packages``
     - System packages to be installed
     - Valid packages for the ``final`` OS
     - No
   * - ``extra_instructions:build``
     - Extra instructions (e.g. ``RUN``, ``COPY``, etc.) at the end of the ``build`` stage
     - Anything understood by the current ``format``
     - No
   * - ``extra_instructions:final``
     - Extra instructions (e.g. ``RUN``, ``COPY``, etc.) at the end of the ``final`` stage
     - Anything understood by the current ``format``
     - No
   * - ``labels``
     - Labels to tag the image
     - Pairs of key-value strings
     - No
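
As a hedged, minimal example carrying only the required options from the
table above (``zlib`` is an arbitrary spec chosen for illustration):

.. code-block:: yaml

   spack:
     specs:
     - zlib
     container:
       format: docker
       base:
         image: "ubuntu:18.04"
         spack: develop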

.. list-table:: Configuration options specific to Singularity
   :header-rows: 1

   * - Option Name
     - Description
     - Allowed Values
     - Required
   * - ``singularity:runscript``
     - Content of ``%runscript``
     - Any valid script
     - No
   * - ``singularity:startscript``
     - Content of ``%startscript``
     - Any valid script
     - No
   * - ``singularity:test``
     - Content of ``%test``
     - Any valid script
     - No
   * - ``singularity:help``
     - Description of the image
     - Description string
     - No
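
A hypothetical sketch of how these Singularity-only options combine with the
general ones (the ``runscript`` and ``help`` contents below are illustrative,
not prescriptive):

.. code-block:: yaml

   spack:
     specs:
     - gromacs+mpi
     container:
       format: singularity
       singularity:
         runscript: |
           exec gmx_mpi "$@"
         help: |
           GROMACS with MPI support, built with Spack.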

Once the Environment is properly configured, a recipe for a container
image can be printed to standard output by issuing the following
command from the directory where the ``spack.yaml`` resides:

.. code-block:: console

   $ spack containerize

The example ``spack.yaml`` above would produce, for instance, the
following ``Dockerfile``:

.. code-block:: docker

   # Build stage with Spack pre-installed and ready to be used
   FROM spack/centos7:latest as builder

   # What we want to install and how we want to install it
   # is specified in a manifest file (spack.yaml)
   RUN mkdir /opt/spack-environment \
       && (echo "spack:" \
       &&   echo "  specs:" \
       &&   echo "  - gromacs+mpi" \
       &&   echo "  - mpich" \
       &&   echo "  concretization: together" \
       &&   echo "  config:" \
       &&   echo "    install_tree: /opt/software" \
       &&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml

   # Install the software, remove unnecessary deps
   RUN cd /opt/spack-environment && spack install && spack gc -y

   # Strip all the binaries
   RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
       xargs file -i | \
       grep 'charset=binary' | \
       grep 'x-executable\|x-archive\|x-sharedlib' | \
       awk -F: '{print $1}' | xargs strip -s

   # Modifications to the environment that are necessary to run
   RUN cd /opt/spack-environment && \
       spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh


   # Bare OS image to run the installed executables
   FROM centos:7

   COPY --from=builder /opt/spack-environment /opt/spack-environment
   COPY --from=builder /opt/software /opt/software
   COPY --from=builder /opt/view /opt/view
   COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh

   RUN yum update -y && yum install -y epel-release && yum update -y \
       && yum install -y libgomp \
       && rm -rf /var/cache/yum && yum clean all

   RUN echo 'export PS1="\[$(tput bold)\]\[$(tput setaf 1)\][gromacs]\[$(tput setaf 2)\]\u\[$(tput sgr0)\]:\w $ \[$(tput sgr0)\]"' >> ~/.bashrc


   LABEL "app"="gromacs"
   LABEL "mpi"="mpich"

   ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]

.. note::
   Spack can also produce Singularity definition files to build the image. The
   minimum version of Singularity required to build a SIF (Singularity Image
   Format) image from them is ``3.5.3``.
BIN
lib/spack/docs/dockerhub_spack.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 88 KiB
@@ -49,6 +49,8 @@ Spack uses a "manifest and lock" model similar to `Bundler gemfiles
managers. The user input file is named ``spack.yaml`` and the lock
file is named ``spack.lock``.

.. _environments-using:

------------------
Using Environments
------------------
@@ -382,11 +384,12 @@ the Environment.
Loading
^^^^^^^

Once an environment has been installed, the following creates a load script for it:
Once an environment has been installed, the following creates a load
script for it:

.. code-block:: console

   $ spack env myenv loads -r
   $ spack env loads -r

This creates a file called ``loads`` in the environment directory.
Sourcing that file in Bash will make the environment available to the
@@ -66,6 +66,7 @@ or refer to the full manual below.
   config_yaml
   build_settings
   environments
   containers
   mirrors
   module_file_support
   repositories

@@ -929,6 +929,9 @@ Git fetching supports the following parameters to ``version``:
* ``tag``: Name of a tag to fetch.
* ``commit``: SHA hash (or prefix) of a commit to fetch.
* ``submodules``: Also fetch submodules recursively when checking out this repository.
* ``submodules_delete``: A list of submodules to forcibly delete from the repository
  after fetching. Useful if a version in the repository has submodules that
  have disappeared or are no longer accessible.
* ``get_full_repo``: Ensure the full git history is checked out with all remote
  branch information. Normally (``get_full_repo=False``, the default), the git
  option ``--depth 1`` will be used if the version of git and the specified
@@ -1989,6 +1992,28 @@ inject the dependency's ``prefix/lib`` directory, but the package needs to
be in ``PATH`` and ``PYTHONPATH`` during the build process and later when
a user wants to run the package.

^^^^^^^^^^^^^^^^^^^^^^^^
Conditional dependencies
^^^^^^^^^^^^^^^^^^^^^^^^

You may have a package that only requires a dependency under certain
conditions. For example, you may have a package that has optional MPI support;
MPI is only a dependency when you want to enable MPI support for the
package. In that case, you could say something like:

.. code-block:: python

   variant('mpi', default=False)
   depends_on('mpi', when='+mpi')

``when`` can include constraints on the variant, version, compiler, etc., and
the :mod:`syntax<spack.spec>` is the same as for Specs written on the command
line.

If a dependency/feature of a package isn't typically used, you can save time
by making it conditional (since Spack will not build the dependency unless it
is required for the Spec).

.. _dependency_dependency_patching:

^^^^^^^^^^^^^^^^^^^

@@ -1095,6 +1095,248 @@ or filesystem views. However, it has some drawbacks:
   integrate Spack explicitly in their workflow. Not all users are
   willing to do this.

-------------------------------------
Using Spack to Replace Homebrew/Conda
-------------------------------------

Spack is an incredibly powerful package manager, designed for supercomputers
where users have diverse installation needs. But Spack can also be used to
handle simple single-user installations on your laptop. Most macOS users are
already familiar with package managers like Homebrew and Conda, where all
installed packages are symlinked to a single central location like ``/usr/local``.
In this section, we will show you how to emulate the behavior of Homebrew/Conda
using :ref:`environments`!

^^^^^
Setup
^^^^^

First, let's create a new environment. We'll assume that Spack is already set up
correctly, and that you've already sourced the setup script for your shell.
To create a new environment, simply run:

.. code-block:: console

   $ spack env create myenv
   ==> Updating view at /Users/me/spack/var/spack/environments/myenv/.spack-env/view
   ==> Created environment 'myenv' in /Users/me/spack/var/spack/environments/myenv
   $ spack env activate myenv

Here, *myenv* can be anything you want to name your environment. Next, we can add
a list of packages we would like to install into our environment. Let's say we
want a newer version of Bash than the one that comes with macOS, and we want a
few Python libraries. We can run:

.. code-block:: console

   $ spack add bash
   ==> Adding bash to environment myenv
   ==> Updating view at /Users/me/spack/var/spack/environments/myenv/.spack-env/view
   $ spack add python@3:
   ==> Adding python@3: to environment myenv
   ==> Updating view at /Users/me/spack/var/spack/environments/myenv/.spack-env/view
   $ spack add py-numpy py-scipy py-matplotlib
   ==> Adding py-numpy to environment myenv
   ==> Adding py-scipy to environment myenv
   ==> Adding py-matplotlib to environment myenv
   ==> Updating view at /Users/me/spack/var/spack/environments/myenv/.spack-env/view

Each package can be listed on a separate line, or combined into a single line.
Notice that we're explicitly asking for Python 3 here. You can use any spec
you would normally use on the command line with other Spack commands.

Next, we want to manually configure a couple of things. In the ``myenv``
directory, we can find the ``spack.yaml`` that actually defines our environment.

.. code-block:: console

   $ vim ~/spack/var/spack/environments/myenv/spack.yaml

.. code-block:: yaml

   # This is a Spack Environment file.
   #
   # It describes a set of packages to be installed, along with
   # configuration settings.
   spack:
     # add package specs to the `specs` list
     specs: [bash, 'python@3:', py-numpy, py-scipy, py-matplotlib]
     view:
       default:
         root: /Users/me/spack/var/spack/environments/myenv/.spack-env/view
         projections: {}
     config: {}
     mirrors: {}
     modules:
       enable: []
     packages: {}
     repos: []
     upstreams: {}
     definitions: []
     concretization: separately

You can see the packages we added earlier in the ``specs:`` section. If you
ever want to add more packages, you can either use ``spack add`` or manually
edit this file.

We also need to change the ``concretization:`` option. By default, Spack
concretizes each spec *separately*, allowing multiple versions of the same
package to coexist. Since we want a single consistent environment, we want to
concretize all of the specs *together*.

Here is what your ``spack.yaml`` looks like with these new settings, and with
some of the sections we don't plan on using removed:

.. code-block:: diff

    spack:
   -  specs: [bash, 'python@3:', py-numpy, py-scipy, py-matplotlib]
   +  specs:
   +  - bash
   +  - 'python@3:'
   +  - py-numpy
   +  - py-scipy
   +  - py-matplotlib
   -  view:
   -    default:
   -      root: /Users/me/spack/var/spack/environments/myenv/.spack-env/view
   -      projections: {}
   +  view: /Users/me/spack/var/spack/environments/myenv/.spack-env/view
   -  config: {}
   -  mirrors: {}
   -  modules:
   -    enable: []
   -  packages: {}
   -  repos: []
   -  upstreams: {}
   -  definitions: []
   +  concretization: together
   -  concretization: separately

""""""""""""""""
Symlink location
""""""""""""""""

In the ``spack.yaml`` file above, you'll notice that by default, Spack symlinks
all installations to ``/Users/me/spack/var/spack/environments/myenv/.spack-env/view``.
You can actually change this to any directory you want. For example, Homebrew
uses ``/usr/local``, while Conda uses ``/Users/me/anaconda``. In order to access
files in these locations, you need to update ``PATH`` and other environment variables
to point to them. Activating the Spack environment does this automatically, but
you can also set them manually in your ``.bashrc``.

.. warning::

   There are several reasons why you shouldn't use ``/usr/local``:

   1. If you are on macOS 10.11+ (El Capitan and newer), Apple makes it hard
      for you. You may notice permissions issues on ``/usr/local`` due to their
      `System Integrity Protection <https://support.apple.com/en-us/HT204899>`_.
      By default, users don't have permissions to install anything in ``/usr/local``,
      and you can't even change this using ``sudo chown`` or ``sudo chmod``.
   2. Other package managers like Homebrew will try to install things to the
      same directory. If you plan on using Homebrew in conjunction with Spack,
      don't symlink things to ``/usr/local``.
   3. If you are on a shared workstation, or don't have sudo privileges, you
      can't do this.

   If you still want to do this anyway, there are several ways around SIP.
   You could disable SIP by booting into recovery mode and running
   ``csrutil disable``, but this is not recommended, as it can open up your OS
   to security vulnerabilities. Another technique is to run ``spack concretize``
   and ``spack install`` using ``sudo``. This is also not recommended.

   The safest way I've found is to create your installation directories using
   sudo, then change ownership back to the user like so:

   .. code-block:: bash

      for directory in .spack bin contrib include lib man share
      do
          sudo mkdir -p /usr/local/$directory
          sudo chown $(id -un):$(id -gn) /usr/local/$directory
      done

   Depending on the packages you install in your environment, the exact list of
   directories you need to create may vary. You may also find some packages
   like Java libraries that install a single file to the installation prefix
   instead of in a subdirectory. In this case, the action is the same; just replace
   ``mkdir -p`` with ``touch`` in the for-loop above.

   But again, it's safer just to use the default symlink location.


^^^^^^^^^^^^
Installation
^^^^^^^^^^^^

To actually concretize the environment, run:

.. code-block:: console

   $ spack concretize

This will tell you which, if any, packages are already installed, and alert you
to any conflicting specs.

To actually install these packages and symlink them to your ``view:``
directory, simply run:

.. code-block:: console

   $ spack install

Now, when you type ``which python3``, it should find the one you just installed.

In order to change the default shell to our newer Bash installation, we first
need to add it to the list of acceptable shells. Run:

.. code-block:: console

   $ sudo vim /etc/shells

and add the absolute path to your bash executable. Then run:

.. code-block:: console

   $ chsh -s /path/to/bash

Now, when you log out and log back in, ``echo $SHELL`` should point to the
newer version of Bash.

^^^^^^^^^^^^^^^^^^^^^^^^^^^
Updating Installed Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Let's say you upgraded to a new version of macOS, or a new version of Python
was released, and you want to rebuild your entire software stack. To do this,
simply run the following commands:

.. code-block:: console

   $ spack env activate myenv
   $ spack concretize --force
   $ spack install

The ``--force`` flag tells Spack to overwrite its previous concretization
decisions, allowing you to choose a new version of Python. If any of the new
packages like Bash are already installed, ``spack install`` won't re-install
them; it will keep the symlinks in place.

^^^^^^^^^^^^^^
Uninstallation
^^^^^^^^^^^^^^

If you decide that Spack isn't right for you, uninstallation is simple.
Just run:

.. code-block:: console

   $ spack env activate myenv
   $ spack uninstall --all

This will uninstall all packages in your environment and remove the symlinks.

------------------------
Using Spack on Travis-CI
------------------------

@@ -1298,6 +1298,20 @@
            "ppc64"
        ]
    },
    "vsx": {
        "reason": "VSX Altivec extensions are supported by PowerISA from v2.06 (Power7+), but might not be listed in features",
        "families": [
            "ppc64le",
            "ppc64"
        ]
    },
    "fma": {
        "reason": "FMA has been supported by PowerISA since Power1, but might not be listed in features",
        "families": [
            "ppc64le",
            "ppc64"
        ]
    },
    "sse4.1": {
        "reason": "permits referring to sse4_1 also as sse4.1",
        "any_of": [
@@ -201,7 +201,6 @@ def groupid_to_group(x):
            output_file.writelines(input_file.readlines())

    except BaseException:
        os.remove(tmp_filename)
        # clean up the original file on failure.
        shutil.move(backup_filename, filename)
        raise

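The cleanup pattern in the hunk above (remove the temporary file, restore the backup, then re-raise) can be exercised outside Spack with a toy sketch; all paths below are hypothetical temporaries, and the sketch catches the error instead of re-raising so it can check the result:

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
filename = os.path.join(workdir, 'data.txt')
backup_filename = filename + '.bak'
tmp_filename = filename + '.tmp'

with open(filename, 'w') as f:
    f.write('original')

# Move the original aside, try to write a replacement, and restore
# the backup if anything goes wrong -- mirroring the pattern above.
shutil.move(filename, backup_filename)
try:
    with open(tmp_filename, 'w') as f:
        f.write('rewritten')
    raise RuntimeError('simulated failure')
except RuntimeError:
    os.remove(tmp_filename)
    # clean up the original file on failure.
    shutil.move(backup_filename, filename)

with open(filename) as f:
    restored = f.read()
```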
@@ -5,7 +5,7 @@


#: major, minor, patch version for Spack, in a tuple
spack_version_info = (0, 13, 3)
spack_version_info = (0, 13, 4)

#: String containing Spack version joined with .'s
spack_version = '.'.join(str(v) for v in spack_version_info)

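The version-string construction in the hunk above is self-contained and can be checked in isolation:

```python
# Mirrors the version bump in this diff: tuple in, dotted string out.
spack_version_info = (0, 13, 4)
spack_version = '.'.join(str(v) for v in spack_version_info)
print(spack_version)  # -> 0.13.4
```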
@@ -33,6 +33,7 @@
from spack.spec import Spec
from spack.stage import Stage
from spack.util.gpg import Gpg
import spack.architecture as architecture

_build_cache_relative_path = 'build_cache'

@@ -659,19 +660,85 @@ def extract_tarball(spec, filename, allow_root=False, unsigned=False,
    shutil.rmtree(tmpdir)


#: Internal cache for get_specs
_cached_specs = None
# Internal cache for downloaded specs
_cached_specs = set()


def get_specs(force=False, use_arch=False):
def try_download_specs(urls=None, force=False):
    '''
    Try to download the urls and cache them
    '''
    global _cached_specs
    if urls is None:
        return {}
    for link in urls:
        with Stage(link, name="build_cache", keep=True) as stage:
            if force and os.path.exists(stage.save_filename):
                os.remove(stage.save_filename)
            if not os.path.exists(stage.save_filename):
                try:
                    stage.fetch()
                except fs.FetchError:
                    continue
            with open(stage.save_filename, 'r') as f:
                # read the spec from the build cache file. All specs
                # in build caches are concrete (as they are built) so
                # we need to mark this spec concrete on read-in.
                spec = Spec.from_yaml(f)
                spec._mark_concrete()
                _cached_specs.add(spec)

    return _cached_specs


def get_spec(spec=None, force=False):
    """
    Check if spec.yaml exists on mirrors and return it if it does
    """
    global _cached_specs
    urls = set()
    if spec is None:
        return {}
    specfile_name = tarball_name(spec, '.spec.yaml')

    if not spack.mirror.MirrorCollection():
        tty.debug("No Spack mirrors are currently configured")
        return {}

    if spec in _cached_specs:
        return _cached_specs

    for mirror in spack.mirror.MirrorCollection().values():
        fetch_url_build_cache = url_util.join(
            mirror.fetch_url, _build_cache_relative_path)

        mirror_dir = url_util.local_file_path(fetch_url_build_cache)
        if mirror_dir:
            tty.msg("Finding buildcaches in %s" % mirror_dir)
            link = url_util.join(fetch_url_build_cache, specfile_name)
            urls.add(link)

        else:
            tty.msg("Finding buildcaches at %s" %
                    url_util.format(fetch_url_build_cache))
            link = url_util.join(fetch_url_build_cache, specfile_name)
            urls.add(link)

    return try_download_specs(urls=urls, force=force)


def get_specs(force=False, allarch=False):
    """
    Get spec.yaml's for build caches available on mirror
    """
    global _cached_specs
    arch = architecture.Arch(architecture.platform(),
                             'default_os', 'default_target')
    arch_pattern = ('([^-]*-[^-]*-[^-]*)')
    if not allarch:
        arch_pattern = '(%s-%s-[^-]*)' % (arch.platform, arch.os)

    if _cached_specs:
        tty.debug("Using previously-retrieved specs")
        return _cached_specs
    regex_pattern = '%s(.*)(spec.yaml$)' % (arch_pattern)
    arch_re = re.compile(regex_pattern)

    if not spack.mirror.MirrorCollection():
        tty.debug("No Spack mirrors are currently configured")

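The `arch_pattern`/`regex_pattern` construction in `get_specs()` above can be sanity-checked on its own; the filename below is a hypothetical build-cache entry, not one taken from a real mirror:

```python
import re

# Same pattern get_specs() builds when allarch=True: three
# dash-separated fields, then anything, then a spec.yaml suffix.
arch_pattern = '([^-]*-[^-]*-[^-]*)'
regex_pattern = '%s(.*)(spec.yaml$)' % arch_pattern
arch_re = re.compile(regex_pattern)

# Hypothetical mirror entry: platform-os-target followed by package info.
m = arch_re.search('linux-centos7-x86_64-gcc-9.2.0-zlib-1.2.11-abcdef.spec.yaml')
arch_triple = m.group(1) if m else None
```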
@@ -688,49 +755,21 @@ def get_specs(force=False, use_arch=False):
        if os.path.exists(mirror_dir):
            files = os.listdir(mirror_dir)
            for file in files:
                if re.search('spec.yaml', file):
                    m = arch_re.search(file)
                    if m:
                        link = url_util.join(fetch_url_build_cache, file)
                        if use_arch and re.search('%s-%s' %
                                                  (spack.architecture.platform,
                                                   spack.architecture.os),
                                                  file):
                            urls.add(link)
                        else:
                            urls.add(link)
                        urls.add(link)
        else:
            tty.msg("Finding buildcaches at %s" %
                    url_util.format(fetch_url_build_cache))
            p, links = web_util.spider(
                url_util.join(fetch_url_build_cache, 'index.html'))
            for link in links:
                if re.search("spec.yaml", link):
                    if use_arch and re.search('%s-%s' %
                                              (spack.architecture.platform,
                                               spack.architecture.os),
                                              link):
                        urls.add(link)
                    else:
                        urls.add(link)
                    m = arch_re.search(link)
                    if m:
                        urls.add(link)

    _cached_specs = []
    for link in urls:
        with Stage(link, name="build_cache", keep=True) as stage:
            if force and os.path.exists(stage.save_filename):
                os.remove(stage.save_filename)
            if not os.path.exists(stage.save_filename):
                try:
                    stage.fetch()
                except fs.FetchError:
                    continue
            with open(stage.save_filename, 'r') as f:
                # read the spec from the build cache file. All specs
                # in build caches are concrete (as they are built) so
                # we need to mark this spec concrete on read-in.
                spec = Spec.from_yaml(f)
                spec._mark_concrete()
                _cached_specs.append(spec)

    return _cached_specs
    return try_download_specs(urls=urls, force=force)


def get_keys(install=False, trust=False, force=False):

@@ -25,10 +25,11 @@ def setup_parser(subparser):
def add(parser, args):
    env = ev.get_env(args, 'add', required=True)

    for spec in spack.cmd.parse_specs(args.specs):
        if not env.add(spec, args.list_name):
            tty.msg("Package {0} was already added to {1}"
                    .format(spec.name, env.name))
        else:
            tty.msg('Adding %s to environment %s' % (spec, env.name))
    env.write()
    with env.write_transaction():
        for spec in spack.cmd.parse_specs(args.specs):
            if not env.add(spec, args.list_name):
                tty.msg("Package {0} was already added to {1}"
                        .format(spec.name, env.name))
            else:
                tty.msg('Adding %s to environment %s' % (spec, env.name))
        env.write()
@@ -87,6 +87,9 @@ def setup_parser(subparser):
                           help='show variants in output (can be long)')
    listcache.add_argument('-f', '--force', action='store_true',
                           help="force new download of specs")
    listcache.add_argument('-a', '--allarch', action='store_true',
                           help="list specs for all available architectures" +
                                " instead of default platform and OS")
    arguments.add_common_arguments(listcache, ['specs'])
    listcache.set_defaults(func=listspecs)
@@ -263,7 +266,8 @@ def match_downloaded_specs(pkgs, allow_multiple_matches=False, force=False):
    # List of specs that match expressions given via command line
    specs_from_cli = []
    has_errors = False
    specs = bindist.get_specs(force)
    allarch = False
    specs = bindist.get_specs(force, allarch)
    for pkg in pkgs:
        matches = []
        tty.msg("buildcache spec(s) matching %s \n" % pkg)
@@ -415,7 +419,7 @@ def install_tarball(spec, args):

def listspecs(args):
    """list binary packages available from mirrors"""
    specs = bindist.get_specs(args.force)
    specs = bindist.get_specs(args.force, args.allarch)
    if args.specs:
        constraints = set(args.specs)
        specs = [s for s in specs if any(s.satisfies(c) for c in constraints)]
@@ -18,6 +18,7 @@ def setup_parser(subparser):

def concretize(parser, args):
    env = ev.get_env(args, 'concretize', required=True)
    concretized_specs = env.concretize(force=args.force)
    ev.display_specs(concretized_specs)
    env.write()
    with env.write_transaction():
        concretized_specs = env.concretize(force=args.force)
        ev.display_specs(concretized_specs)
        env.write()
25
lib/spack/spack/cmd/containerize.py
Normal file
@@ -0,0 +1,25 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
import spack.container

description = ("creates recipes to build images for different"
               " container runtimes")
section = "container"
level = "long"


def containerize(parser, args):
    config_dir = args.env_dir or os.getcwd()
    config_file = os.path.abspath(os.path.join(config_dir, 'spack.yaml'))
    if not os.path.exists(config_file):
        msg = 'file not found: {0}'
        raise ValueError(msg.format(config_file))

    config = spack.container.validate(config_file)

    recipe = spack.container.recipe(config)
    print(recipe)
@@ -44,7 +44,8 @@ def update_kwargs_from_args(args, kwargs):
        'upstream': args.upstream,
        'cache_only': args.cache_only,
        'explicit': True,  # Always true for install command
        'stop_at': args.until
        'stop_at': args.until,
        'unsigned': args.unsigned,
    })

    kwargs.update({
@@ -100,6 +101,10 @@ def setup_parser(subparser):
        '--cache-only', action='store_true', dest='cache_only', default=False,
        help="only install package from binary mirrors")

    subparser.add_argument(
        '--no-check-signature', action='store_true',
        dest='unsigned', default=False,
        help="do not check signatures of binary packages")
    subparser.add_argument(
        '--show-log-on-error', action='store_true',
        help="print full build log to stderr if build fails")
@@ -237,8 +242,13 @@ def install_spec(cli_args, kwargs, abstract_spec, spec):
    env = ev.get_env(cli_args, 'install')

    if env:
        env.install(abstract_spec, spec, **kwargs)
        env.write()
        with env.write_transaction():
            concrete = env.concretize_and_add(
                abstract_spec, spec)
            env.write(regenerate_views=False)
        env._install(concrete, **kwargs)
        with env.write_transaction():
            env.regenerate_views()
    else:
        spec.package.do_install(**kwargs)
        spack.config.set('config:active_tree', '~/.spack/opt/spack',
@@ -301,16 +311,20 @@ def install(parser, args, **kwargs):
    env = ev.get_env(args, 'install')
    if env:
        if not args.only_concrete:
            concretized_specs = env.concretize()
            ev.display_specs(concretized_specs)
            with env.write_transaction():
                concretized_specs = env.concretize()
                ev.display_specs(concretized_specs)

            # save view regeneration for later, so that we only do it
            # once, as it can be slow.
            env.write(regenerate_views=False)
                # save view regeneration for later, so that we only do it
                # once, as it can be slow.
                env.write(regenerate_views=False)

        tty.msg("Installing environment %s" % env.name)
        env.install_all(args)
        env.regenerate_views()
        with env.write_transaction():
            # It is not strictly required to synchronize view regeneration
            # but doing so can prevent redundant work in the filesystem.
            env.regenerate_views()
        return
    else:
        tty.die("install requires a package argument or a spack.yaml file")
@@ -12,6 +12,7 @@
import spack.environment as ev
import spack.util.environment
import spack.user_environment as uenv
import spack.store

description = "add package to the user environment"
section = "user environment"
@@ -63,15 +64,17 @@ def load(parser, args):
        tty.msg(*msg)
        return 1

    if 'dependencies' in args.things_to_load:
        include_roots = 'package' in args.things_to_load
        specs = [dep for spec in specs
                 for dep in spec.traverse(root=include_roots, order='post')]
    with spack.store.db.read_transaction():
        if 'dependencies' in args.things_to_load:
            include_roots = 'package' in args.things_to_load
            specs = [dep for spec in specs
                     for dep in
                     spec.traverse(root=include_roots, order='post')]

    env_mod = spack.util.environment.EnvironmentModifications()
    for spec in specs:
        env_mod.extend(uenv.environment_modifications_for_spec(spec))
        env_mod.prepend_path(uenv.spack_loaded_hashes_var, spec.dag_hash())
    cmds = env_mod.shell_modifications(args.shell)
        env_mod = spack.util.environment.EnvironmentModifications()
        for spec in specs:
            env_mod.extend(uenv.environment_modifications_for_spec(spec))
            env_mod.prepend_path(uenv.spack_loaded_hashes_var, spec.dag_hash())
        cmds = env_mod.shell_modifications(args.shell)

    sys.stdout.write(cmds)
    sys.stdout.write(cmds)
@@ -31,10 +31,11 @@ def setup_parser(subparser):
def remove(parser, args):
    env = ev.get_env(args, 'remove', required=True)

    if args.all:
        env.clear()
    else:
        for spec in spack.cmd.parse_specs(args.specs):
            tty.msg('Removing %s from environment %s' % (spec, env.name))
            env.remove(spec, args.list_name, force=args.force)
    env.write()
    with env.write_transaction():
        if args.all:
            env.clear()
        else:
            for spec in spack.cmd.parse_specs(args.specs):
                tty.msg('Removing %s from environment %s' % (spec, env.name))
                env.remove(spec, args.list_name, force=args.force)
        env.write()
@@ -8,6 +8,7 @@
import argparse
import copy
import sys
import itertools

import spack.cmd
import spack.environment as ev
@@ -253,9 +254,6 @@ def do_uninstall(env, specs, force):
        # want to uninstall.
        spack.package.Package.uninstall_by_spec(item, force=True)

        if env:
            _remove_from_env(item, env)

    # A package is ready to be uninstalled when nothing else references it,
    # unless we are requested to force uninstall it.
    is_ready = lambda x: not spack.store.db.query_by_spec_hash(x)[1].ref_count
@@ -375,9 +373,13 @@ def uninstall_specs(args, specs):
    if not args.yes_to_all:
        confirm_removal(anything_to_do)

    # just force-remove things in the remove list
    for spec in remove_list:
        _remove_from_env(spec, env)
    if env:
        # Remove all the specs that are supposed to be uninstalled or just
        # removed.
        with env.write_transaction():
            for spec in itertools.chain(remove_list, uninstall_list):
                _remove_from_env(spec, env)
            env.write()

    # Uninstall everything on the list
    do_uninstall(env, uninstall_list, args.force)
@@ -61,3 +61,7 @@ def c11_flag(self):
    @property
    def pic_flag(self):
        return "-KPIC"

    def setup_custom_environment(self, pkg, env):
        env.append_flags('fcc_ENV', '-Nclang')
        env.append_flags('FCC_ENV', '-Nclang')
81
lib/spack/spack/container/__init__.py
Normal file
@@ -0,0 +1,81 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Package that provides functions and classes to
generate container recipes from a Spack environment
"""
import warnings

import spack.environment
import spack.schema.env as env
import spack.util.spack_yaml as syaml
from .writers import recipe

__all__ = ['validate', 'recipe']


def validate(configuration_file):
    """Validate a Spack environment YAML file that is being used to generate a
    recipe for a container.

    Since a few attributes of the configuration must have specific values for
    the container recipe, this function returns a sanitized copy of the
    configuration in the input file. If any modification is needed, a warning
    will be issued.

    Args:
        configuration_file (str): path to the Spack environment YAML file

    Returns:
        A sanitized copy of the configuration stored in the input file
    """
    import jsonschema
    with open(configuration_file) as f:
        config = syaml.load(f)

    # Ensure we have a "container" attribute with sensible defaults set
    env_dict = spack.environment.config_dict(config)
    env_dict.setdefault('container', {
        'format': 'docker',
        'base': {'image': 'ubuntu:18.04', 'spack': 'develop'}
    })
    env_dict['container'].setdefault('format', 'docker')
    env_dict['container'].setdefault(
        'base', {'image': 'ubuntu:18.04', 'spack': 'develop'}
    )

    # Remove attributes that are not needed / allowed in the
    # container recipe
    for subsection in ('cdash', 'gitlab_ci', 'modules'):
        if subsection in env_dict:
            msg = ('the subsection "{0}" in "{1}" is not used when generating'
                   ' container recipes and will be discarded')
            warnings.warn(msg.format(subsection, configuration_file))
            env_dict.pop(subsection)

    # Set the default value of the concretization strategy to "together" and
    # warn if the user explicitly set another value
    env_dict.setdefault('concretization', 'together')
    if env_dict['concretization'] != 'together':
        msg = ('the "concretization" attribute of the environment is set '
               'to "{0}" [the advised value is instead "together"]')
        warnings.warn(msg.format(env_dict['concretization']))

    # Check if the install tree was explicitly set to a custom value and warn
    # that it will be overridden
    environment_config = env_dict.get('config', {})
    if environment_config.get('install_tree', None):
        msg = ('the "config:install_tree" attribute has been set explicitly '
               'and will be overridden in the container image')
        warnings.warn(msg)

    # Likewise for the view
    environment_view = env_dict.get('view', None)
    if environment_view:
        msg = ('the "view" attribute has been set explicitly '
               'and will be overridden in the container image')
        warnings.warn(msg)

    jsonschema.validate(config, schema=env.schema)
    return config
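The defaulting logic in `validate()` above can be isolated into a small standalone sketch. The function name `apply_container_defaults` is illustrative (not part of Spack's API), and the plain dict stands in for the parsed YAML; the point is that `setdefault` fills in missing keys without clobbering values the user set explicitly:

```python
def apply_container_defaults(env_dict):
    """Fill in a 'container' section with defaults, preserving user values.

    Mirrors the setdefault pattern used by spack.container.validate();
    names and defaults are copied from the code above.
    """
    # Create the whole section if it is missing entirely
    env_dict.setdefault('container', {
        'format': 'docker',
        'base': {'image': 'ubuntu:18.04', 'spack': 'develop'},
    })
    # Fill in individual keys the user may have omitted
    env_dict['container'].setdefault('format', 'docker')
    env_dict['container'].setdefault(
        'base', {'image': 'ubuntu:18.04', 'spack': 'develop'}
    )
    return env_dict
```

For example, a configuration that only sets `format: singularity` keeps that choice but still receives the default base image.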
50
lib/spack/spack/container/images.json
Normal file
@@ -0,0 +1,50 @@
{
  "ubuntu:18.04": {
    "update": "apt-get -yqq update && apt-get -yqq upgrade",
    "install": "apt-get -yqq install",
    "clean": "rm -rf /var/lib/apt/lists/*",
    "environment": [],
    "build": "spack/ubuntu-bionic",
    "build_tags": {
      "develop": "latest",
      "0.14": "0.14",
      "0.14.0": "0.14.0"
    }
  },
  "ubuntu:16.04": {
    "update": "apt-get -yqq update && apt-get -yqq upgrade",
    "install": "apt-get -yqq install",
    "clean": "rm -rf /var/lib/apt/lists/*",
    "environment": [],
    "build": "spack/ubuntu-xenial",
    "build_tags": {
      "develop": "latest",
      "0.14": "0.14",
      "0.14.0": "0.14.0"
    }
  },
  "centos:7": {
    "update": "yum update -y && yum install -y epel-release && yum update -y",
    "install": "yum install -y",
    "clean": "rm -rf /var/cache/yum && yum clean all",
    "environment": [],
    "build": "spack/centos7",
    "build_tags": {
      "develop": "latest",
      "0.14": "0.14",
      "0.14.0": "0.14.0"
    }
  },
  "centos:6": {
    "update": "yum update -y && yum install -y epel-release && yum update -y",
    "install": "yum install -y",
    "clean": "rm -rf /var/cache/yum && yum clean all",
    "environment": [],
    "build": "spack/centos6",
    "build_tags": {
      "develop": "latest",
      "0.14": "0.14",
      "0.14.0": "0.14.0"
    }
  }
}
72
lib/spack/spack/container/images.py
Normal file
@@ -0,0 +1,72 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Manages the details on the images used in the build and the run stage."""
import json
import os.path

#: Global variable used to cache in memory the content of images.json
_data = None


def data():
    """Returns a dictionary with the static data on the images.

    The dictionary is read from a JSON file lazily the first time
    this function is called.
    """
    global _data
    if not _data:
        json_dir = os.path.abspath(os.path.dirname(__file__))
        json_file = os.path.join(json_dir, 'images.json')
        with open(json_file) as f:
            _data = json.load(f)
    return _data


def build_info(image, spack_version):
    """Returns the name of the build image and its tag.

    Args:
        image (str): image to be used at run-time. Should be of the form
            <image_name>:<image_tag> e.g. "ubuntu:18.04"
        spack_version (str): version of Spack that we want to use to build

    Returns:
        A tuple with (image_name, image_tag) for the build image
    """
    # Don't handle error here, as a wrong image should have been
    # caught by the JSON schema
    image_data = data()[image]
    build_image = image_data['build']

    # Try to check if we have a tag for this Spack version
    try:
        build_tag = image_data['build_tags'][spack_version]
    except KeyError:
        msg = ('the image "{0}" has no tag for Spack version "{1}" '
               '[valid versions are {2}]')
        msg = msg.format(build_image, spack_version,
                         ', '.join(image_data['build_tags'].keys()))
        raise ValueError(msg)

    return build_image, build_tag


def package_info(image):
    """Returns the commands used to update system repositories, install
    system packages and clean afterwards.

    Args:
        image (str): image to be used at run-time. Should be of the form
            <image_name>:<image_tag> e.g. "ubuntu:18.04"

    Returns:
        A tuple of (update, install, clean) commands.
    """
    image_data = data()[image]
    update = image_data['update']
    install = image_data['install']
    clean = image_data['clean']
    return update, install, clean
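The `build_info()` lookup can be exercised in isolation with a hard-coded subset of `images.json` standing in for the file read from disk (the `IMAGES` dict below is illustrative data, not the real file contents):

```python
# A trimmed-down stand-in for the images.json data loaded by data()
IMAGES = {
    'ubuntu:18.04': {
        'build': 'spack/ubuntu-bionic',
        'build_tags': {'develop': 'latest', '0.14': '0.14'},
    },
}


def build_info(image, spack_version):
    """Map a run-time image and Spack version to (build_image, build_tag).

    Same control flow as the module above: a missing tag for the
    requested Spack version becomes a ValueError with the valid options.
    """
    image_data = IMAGES[image]
    build_image = image_data['build']
    try:
        build_tag = image_data['build_tags'][spack_version]
    except KeyError:
        msg = ('the image "{0}" has no tag for Spack version "{1}" '
               '[valid versions are {2}]')
        raise ValueError(msg.format(
            build_image, spack_version,
            ', '.join(image_data['build_tags'].keys())))
    return build_image, build_tag
```

So `build_info('ubuntu:18.04', 'develop')` yields the `spack/ubuntu-bionic` image with the `latest` tag, while an unknown version raises `ValueError`.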
154
lib/spack/spack/container/writers/__init__.py
Normal file
@@ -0,0 +1,154 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Writers for different kind of recipes and related
convenience functions.
"""
import collections
import copy

import spack.environment
import spack.schema.env
import spack.tengine as tengine
import spack.util.spack_yaml as syaml

from spack.container.images import build_info, package_info

#: Caches all the writers that are currently supported
_writer_factory = {}


def writer(name):
    """Decorator to register a factory for a recipe writer.

    Each factory should take a configuration dictionary and return a
    properly configured writer that, when called, prints the
    corresponding recipe.
    """
    def _decorator(factory):
        _writer_factory[name] = factory
        return factory
    return _decorator


def create(configuration):
    """Returns a writer that conforms to the configuration passed as input.

    Args:
        configuration: how to generate the current recipe
    """
    name = spack.environment.config_dict(configuration)['container']['format']
    return _writer_factory[name](configuration)


def recipe(configuration):
    """Returns a recipe that conforms to the configuration passed as input.

    Args:
        configuration: how to generate the current recipe
    """
    return create(configuration)()


class PathContext(tengine.Context):
    """Generic context used to instantiate templates of recipes that
    install software in a common location and make it available
    directly via PATH.
    """
    def __init__(self, config):
        self.config = spack.environment.config_dict(config)
        self.container_config = self.config['container']

    @tengine.context_property
    def run(self):
        """Information related to the run image."""
        image = self.container_config['base']['image']
        Run = collections.namedtuple('Run', ['image'])
        return Run(image=image)

    @tengine.context_property
    def build(self):
        """Information related to the build image."""

        # Map the final image to the correct build image
        run_image = self.container_config['base']['image']
        spack_version = self.container_config['base']['spack']
        image, tag = build_info(run_image, spack_version)

        Build = collections.namedtuple('Build', ['image', 'tag'])
        return Build(image=image, tag=tag)

    @tengine.context_property
    def strip(self):
        """Whether or not to strip binaries in the image"""
        return self.container_config.get('strip', True)

    @tengine.context_property
    def paths(self):
        """Important paths in the image"""
        Paths = collections.namedtuple('Paths', [
            'environment', 'store', 'view'
        ])
        return Paths(
            environment='/opt/spack-environment',
            store='/opt/software',
            view='/opt/view'
        )

    @tengine.context_property
    def manifest(self):
        """The spack.yaml file that should be used in the image"""
        import jsonschema
        # Copy in the part of spack.yaml prescribed in the configuration file
        manifest = copy.deepcopy(self.config)
        manifest.pop('container')

        # Ensure that a few paths are where they need to be
        manifest.setdefault('config', syaml.syaml_dict())
        manifest['config']['install_tree'] = self.paths.store
        manifest['view'] = self.paths.view
        manifest = {'spack': manifest}

        # Validate the manifest file
        jsonschema.validate(manifest, schema=spack.schema.env.schema)

        return syaml.dump(manifest, default_flow_style=False).strip()

    @tengine.context_property
    def os_packages(self):
        """Additional system packages that are needed at run-time."""
        package_list = self.container_config.get('os_packages', None)
        if not package_list:
            return package_list

        image = self.container_config['base']['image']
        update, install, clean = package_info(image)
        Packages = collections.namedtuple(
            'Packages', ['update', 'install', 'list', 'clean']
        )
        return Packages(update=update, install=install,
                        list=package_list, clean=clean)

    @tengine.context_property
    def extra_instructions(self):
        Extras = collections.namedtuple('Extra', ['build', 'final'])
        extras = self.container_config.get('extra_instructions', {})
        build, final = extras.get('build', None), extras.get('final', None)
        return Extras(build=build, final=final)

    @tengine.context_property
    def labels(self):
        return self.container_config.get('labels', {})

    def __call__(self):
        """Returns the recipe as a string"""
        env = tengine.make_environment()
        t = env.get_template(self.template_name)
        return t.render(**self.to_dict())


# Import after function definition all the modules in this package,
# so that registration of writers will happen automatically
import spack.container.writers.singularity  # noqa
import spack.container.writers.docker  # noqa
30
lib/spack/spack/container/writers/docker.py
Normal file
@@ -0,0 +1,30 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.tengine as tengine

from . import writer, PathContext


@writer('docker')
class DockerContext(PathContext):
    """Context used to instantiate a Dockerfile"""
    #: Name of the template used for Dockerfiles
    template_name = 'container/Dockerfile'

    @tengine.context_property
    def manifest(self):
        manifest_str = super(DockerContext, self).manifest
        # Docker doesn't support HEREDOC so we need to resort to
        # a horrible echo trick to have the manifest in the Dockerfile
        echoed_lines = []
        for idx, line in enumerate(manifest_str.split('\n')):
            if idx == 0:
                echoed_lines.append('&& (echo "' + line + '" \\')
                continue
            echoed_lines.append('&& echo "' + line + '" \\')

        echoed_lines[-1] = echoed_lines[-1].replace(' \\', ')')

        return '\n'.join(echoed_lines)
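The echo trick in `DockerContext.manifest` can be lifted out as a pure string transformation, which makes its behavior easy to check (the function name `echo_lines` is illustrative):

```python
def echo_lines(manifest_str):
    """Turn a multi-line manifest into a chained shell echo expression.

    Dockerfiles have no HEREDOC, so each line becomes an `echo` continued
    with a backslash; the first line opens a subshell and the last one
    closes it.
    """
    echoed_lines = []
    for idx, line in enumerate(manifest_str.split('\n')):
        if idx == 0:
            echoed_lines.append('&& (echo "' + line + '" \\')
            continue
        echoed_lines.append('&& echo "' + line + '" \\')

    # Replace the trailing continuation on the last line with the
    # closing parenthesis of the subshell
    echoed_lines[-1] = echoed_lines[-1].replace(' \\', ')')

    return '\n'.join(echoed_lines)
```

For a two-line manifest `"a: 1\nb: 2"` this produces `&& (echo "a: 1" \` followed by `&& echo "b: 2")`, ready to be spliced into a `RUN` instruction.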
33
lib/spack/spack/container/writers/singularity.py
Normal file
@@ -0,0 +1,33 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.tengine as tengine
from . import writer, PathContext


@writer('singularity')
class SingularityContext(PathContext):
    """Context used to instantiate a Singularity definition file"""
    #: Name of the template used for Singularity definition files
    template_name = 'container/singularity.def'

    @property
    def singularity_config(self):
        return self.container_config.get('singularity', {})

    @tengine.context_property
    def runscript(self):
        return self.singularity_config.get('runscript', '')

    @tengine.context_property
    def startscript(self):
        return self.singularity_config.get('startscript', '')

    @tengine.context_property
    def test(self):
        return self.singularity_config.get('test', '')

    @tengine.context_property
    def help(self):
        return self.singularity_config.get('help', '')
@@ -1145,7 +1145,27 @@ def activated_extensions_for(self, extendee_spec, extensions_layout=None):
            continue
            # TODO: conditional way to do this instead of catching exceptions

    def get_by_hash_local(self, dag_hash, default=None, installed=any):
    def _get_by_hash_local(self, dag_hash, default=None, installed=any):
        # hash is a full hash and is in the data somewhere
        if dag_hash in self._data:
            rec = self._data[dag_hash]
            if rec.install_type_matches(installed):
                return [rec.spec]
            else:
                return default

        # check if hash is a prefix of some installed (or previously
        # installed) spec.
        matches = [record.spec for h, record in self._data.items()
                   if h.startswith(dag_hash) and
                   record.install_type_matches(installed)]
        if matches:
            return matches

        # nothing found
        return default

    def get_by_hash_local(self, *args, **kwargs):
        """Look up a spec in *this DB* by DAG hash, or by a DAG hash prefix.

        Arguments:
@@ -1169,24 +1189,7 @@ def get_by_hash_local(self, dag_hash, default=None, installed=any):

        """
        with self.read_transaction():
            # hash is a full hash and is in the data somewhere
            if dag_hash in self._data:
                rec = self._data[dag_hash]
                if rec.install_type_matches(installed):
                    return [rec.spec]
                else:
                    return default

            # check if hash is a prefix of some installed (or previously
            # installed) spec.
            matches = [record.spec for h, record in self._data.items()
                       if h.startswith(dag_hash) and
                       record.install_type_matches(installed)]
            if matches:
                return matches

            # nothing found
            return default
            return self._get_by_hash_local(*args, **kwargs)

    def get_by_hash(self, dag_hash, default=None, installed=any):
        """Look up a spec by DAG hash, or by a DAG hash prefix.
@@ -1211,9 +1214,14 @@ def get_by_hash(self, dag_hash, default=None, installed=any):
            (list): a list of specs matching the hash or hash prefix

        """
        search_path = [self] + self.upstream_dbs
        for db in search_path:
            spec = db.get_by_hash_local(

        spec = self.get_by_hash_local(
            dag_hash, default=default, installed=installed)
        if spec is not None:
            return spec

        for upstream_db in self.upstream_dbs:
            spec = upstream_db._get_by_hash_local(
                dag_hash, default=default, installed=installed)
            if spec is not None:
                return spec
@@ -1273,7 +1281,8 @@ def _query(
            if not (start_date < inst_date < end_date):
                continue

            if query_spec is any or rec.spec.satisfies(query_spec):
            if (query_spec is any or
                    rec.spec.satisfies(query_spec, strict=True)):
                results.append(rec.spec)

        return results
@@ -36,6 +36,7 @@
|
||||
from spack.spec import Spec
|
||||
from spack.spec_list import SpecList, InvalidSpecConstraintError
|
||||
from spack.variant import UnknownVariantError
|
||||
import spack.util.lock as lk
|
||||
|
||||
#: environment variable used to indicate the active environment
|
||||
spack_env_var = 'SPACK_ENV'
|
||||
@@ -557,12 +558,18 @@ def __init__(self, path, init_file=None, with_view=None):
|
||||
path to the view.
|
||||
"""
|
||||
self.path = os.path.abspath(path)
|
||||
|
||||
self.txlock = lk.Lock(self._transaction_lock_path)
|
||||
|
||||
# This attribute will be set properly from configuration
|
||||
        # during concretization
        self.concretization = None
        self.clear()

        if init_file:
            # If we are creating the environment from an init file, we don't
            # need to lock, because there are no Spack operations that alter
            # the init file.
            with fs.open_if_filename(init_file) as f:
                if hasattr(f, 'name') and f.name.endswith('.lock'):
                    self._read_manifest(default_manifest_yaml)
@@ -571,26 +578,8 @@ def __init__(self, path, init_file=None, with_view=None):
                else:
                    self._read_manifest(f, raw_yaml=default_manifest_yaml)
        else:
            default_manifest = not os.path.exists(self.manifest_path)
            if default_manifest:
                # No manifest, use default yaml
                self._read_manifest(default_manifest_yaml)
            else:
                with open(self.manifest_path) as f:
                    self._read_manifest(f)

            if os.path.exists(self.lock_path):
                with open(self.lock_path) as f:
                    read_lock_version = self._read_lockfile(f)
                if default_manifest:
                    # No manifest, set user specs from lockfile
                    self._set_user_specs_from_lockfile()

                if read_lock_version == 1:
                    tty.debug(
                        "Storing backup of old lockfile {0} at {1}".format(
                            self.lock_path, self._lock_backup_v1_path))
                    shutil.copy(self.lock_path, self._lock_backup_v1_path)
            with lk.ReadTransaction(self.txlock):
                self._read()

        if with_view is False:
            self.views = {}
@@ -602,6 +591,42 @@ def __init__(self, path, init_file=None, with_view=None):
        # If with_view is None, then defer to the view settings determined by
        # the manifest file

    def _re_read(self):
        """Reinitialize the environment object if it has been written (this
        may not be true if the environment was just created in this running
        instance of Spack)."""
        if not os.path.exists(self.manifest_path):
            return

        self.clear()
        self._read()

    def _read(self):
        default_manifest = not os.path.exists(self.manifest_path)
        if default_manifest:
            # No manifest, use default yaml
            self._read_manifest(default_manifest_yaml)
        else:
            with open(self.manifest_path) as f:
                self._read_manifest(f)

        if os.path.exists(self.lock_path):
            with open(self.lock_path) as f:
                read_lock_version = self._read_lockfile(f)
            if default_manifest:
                # No manifest, set user specs from lockfile
                self._set_user_specs_from_lockfile()

            if read_lock_version == 1:
                tty.debug(
                    "Storing backup of old lockfile {0} at {1}".format(
                        self.lock_path, self._lock_backup_v1_path))
                shutil.copy(self.lock_path, self._lock_backup_v1_path)

    def write_transaction(self):
        """Get a write lock context manager for use in a `with` block."""
        return lk.WriteTransaction(self.txlock, acquire=self._re_read)

    def _read_manifest(self, f, raw_yaml=None):
        """Read manifest file and set up user specs."""
        if raw_yaml:
@@ -694,6 +719,13 @@ def manifest_path(self):
        """Path to spack.yaml file in this environment."""
        return os.path.join(self.path, manifest_name)

    @property
    def _transaction_lock_path(self):
        """The location of the lock file used to synchronize multiple
        processes updating the same environment.
        """
        return os.path.join(self.env_subdir_path, 'transaction_lock')

    @property
    def lock_path(self):
        """Path to spack.lock file in this environment."""
@@ -986,11 +1018,18 @@ def _concretize_separately(self):
            concretized_specs.append((uspec, concrete))
        return concretized_specs

    def install(self, user_spec, concrete_spec=None, **install_args):
        """Install a single spec into an environment.
    def concretize_and_add(self, user_spec, concrete_spec=None):
        """Concretize and add a single spec to the environment.

        This will automatically concretize the single spec, but it won't
        affect other as-yet unconcretized specs.
        Concretize the provided ``user_spec`` and add it along with the
        concretized result to the environment. If the given ``user_spec`` was
        already present in the environment, this does not add a duplicate.
        The concretized spec will be added unless the ``user_spec`` was
        already present and an associated concrete spec was already present.

        Args:
            concrete_spec: if provided, then it is assumed that it is the
                result of concretizing the provided ``user_spec``
        """
        if self.concretization == 'together':
            msg = 'cannot install a single spec in an environment that is ' \
@@ -1001,37 +1040,21 @@ def install(self, user_spec, concrete_spec=None, **install_args):

        spec = Spec(user_spec)

        with spack.store.db.read_transaction():
            if self.add(spec):
                concrete = concrete_spec or spec.concretized()
        if self.add(spec):
            concrete = concrete_spec or spec.concretized()
                self._add_concrete_spec(spec, concrete)
            else:
                # spec might be in the user_specs, but not installed.
                # TODO: Redo name-based comparison for old style envs
                spec = next(
                    s for s in self.user_specs if s.satisfies(user_spec)
                )
                concrete = self.specs_by_hash.get(spec.build_hash())
                if not concrete:
                    concrete = spec.concretized()
                self._add_concrete_spec(spec, concrete)
        else:
            # spec might be in the user_specs, but not installed.
            # TODO: Redo name-based comparison for old style envs
            spec = next(
                s for s in self.user_specs if s.satisfies(user_spec)
            )
            concrete = self.specs_by_hash.get(spec.build_hash())
            if not concrete:
                concrete = spec.concretized()
            self._add_concrete_spec(spec, concrete)

        self._install(concrete, **install_args)

    def _install(self, spec, **install_args):
        spec.package.do_install(**install_args)

        # Make sure log directory exists
        log_path = self.log_path
        fs.mkdirp(log_path)

        with fs.working_dir(self.path):
            # Link the resulting log file into logs dir
            build_log_link = os.path.join(
                log_path, '%s-%s.log' % (spec.name, spec.dag_hash(7)))
            if os.path.lexists(build_log_link):
                os.remove(build_log_link)
            os.symlink(spec.package.build_log_path, build_log_link)
        return concrete

    @property
    def default_view(self):
@@ -1131,6 +1154,33 @@ def _add_concrete_spec(self, spec, concrete, new=True):
        self.concretized_order.append(h)
        self.specs_by_hash[h] = concrete

    def install(self, user_spec, concrete_spec=None, **install_args):
        """Install a single spec into an environment.

        This will automatically concretize the single spec, but it won't
        affect other as-yet unconcretized specs.
        """
        concrete = self.concretize_and_add(user_spec, concrete_spec)

        self._install(concrete, **install_args)

    def _install(self, spec, **install_args):
        # "spec" must be concrete
        spec.package.do_install(**install_args)

        if not spec.external:
            # Make sure log directory exists
            log_path = self.log_path
            fs.mkdirp(log_path)

            with fs.working_dir(self.path):
                # Link the resulting log file into logs dir
                build_log_link = os.path.join(
                    log_path, '%s-%s.log' % (spec.name, spec.dag_hash(7)))
                if os.path.lexists(build_log_link):
                    os.remove(build_log_link)
                os.symlink(spec.package.build_log_path, build_log_link)

    def install_all(self, args=None):
        """Install all concretized specs in an environment.

@@ -1138,25 +1188,27 @@ def install_all(self, args=None):
        that needs to be done separately with a call to write().

        """

        # If "spack install" is invoked repeatedly for a large environment
        # where all specs are already installed, the operation can take
        # a large amount of time due to repeatedly acquiring and releasing
        # locks, this does an initial check across all specs within a single
        # DB read transaction to reduce time spent in this case.
        uninstalled_specs = []
        with spack.store.db.read_transaction():
            for concretized_hash in self.concretized_order:
                spec = self.specs_by_hash[concretized_hash]
                if not spec.package.installed:
                    uninstalled_specs.append(spec)

        # Parse cli arguments and construct a dictionary
        # that will be passed to Package.do_install API
        kwargs = dict()
        if args:
            spack.cmd.install.update_kwargs_from_args(args, kwargs)
        for spec in uninstalled_specs:
            # Parse cli arguments and construct a dictionary
            # that will be passed to Package.do_install API
            kwargs = dict()
            if args:
                spack.cmd.install.update_kwargs_from_args(args, kwargs)

            self._install(spec, **kwargs)

            if not spec.external:
                # Link the resulting log file into logs dir
                log_name = '%s-%s' % (spec.name, spec.dag_hash(7))
                build_log_link = os.path.join(self.log_path, log_name)
                if os.path.lexists(build_log_link):
                    os.remove(build_log_link)
                os.symlink(spec.package.build_log_path, build_log_link)
            self._install(spec, **kwargs)

    def all_specs_by_hash(self):
        """Map of hashes to spec for all specs in this environment."""
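The `install_all()` hunk above hoists the "which specs are still uninstalled?" check into a single database read transaction instead of paying one lock acquire/release per spec. A minimal sketch (not Spack code) of why that matters, using a toy counting lock in place of Spack's real locking API:

```python
class CountingLock:
    """Toy context manager that counts how many times it is acquired."""
    def __init__(self):
        self.acquisitions = 0

    def __enter__(self):
        self.acquisitions += 1
        return self

    def __exit__(self, *exc):
        return False


def find_uninstalled_per_item(specs, installed, lock):
    # One acquire/release per spec: cheap per call, expensive in aggregate.
    result = []
    for spec in specs:
        with lock:
            if spec not in installed:
                result.append(spec)
    return result


def find_uninstalled_batched(specs, installed, lock):
    # A single acquire/release wrapping the whole scan, as in the hunk above.
    result = []
    with lock:
        for spec in specs:
            if spec not in installed:
                result.append(spec)
    return result


specs = ['zlib', 'mpich', 'hdf5', 'gromacs']
installed = {'zlib', 'hdf5'}

per_item, batched = CountingLock(), CountingLock()
assert find_uninstalled_per_item(specs, installed, per_item) == ['mpich', 'gromacs']
assert find_uninstalled_batched(specs, installed, batched) == ['mpich', 'gromacs']
assert per_item.acquisitions == 4 and batched.acquisitions == 1
```

Both scans return the same answer; only the number of lock round-trips differs, which is exactly the cost the comment in the diff is describing.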
@@ -1424,13 +1476,13 @@ def write(self, regenerate_views=True):

        # Remove yaml sections that are shadowing defaults
        # construct garbage path to ensure we don't find a manifest by accident
        bare_env = Environment(os.path.join(self.manifest_path, 'garbage'),
                               with_view=self.view_path_default)
        keys_present = list(yaml_dict.keys())
        for key in keys_present:
            if yaml_dict[key] == config_dict(bare_env.yaml).get(key, None):
                if key not in raw_yaml_dict:
                    del yaml_dict[key]
        with fs.temp_cwd() as env_dir:
            bare_env = Environment(env_dir, with_view=self.view_path_default)
            keys_present = list(yaml_dict.keys())
            for key in keys_present:
                if yaml_dict[key] == config_dict(bare_env.yaml).get(key, None):
                    if key not in raw_yaml_dict:
                        del yaml_dict[key]

        # if all that worked, write out the manifest file at the top level
        # Only actually write if it has changed or was never written

@@ -714,7 +714,8 @@ class GitFetchStrategy(VCSFetchStrategy):
    Repositories are cloned into the standard stage source path directory.
    """
    url_attr = 'git'
    optional_attrs = ['tag', 'branch', 'commit', 'submodules', 'get_full_repo']
    optional_attrs = ['tag', 'branch', 'commit', 'submodules',
                      'get_full_repo', 'submodules_delete']

    def __init__(self, **kwargs):
        # Discards the keywords in kwargs that may conflict with the next call
@@ -725,6 +726,7 @@ def __init__(self, **kwargs):

        self._git = None
        self.submodules = kwargs.get('submodules', False)
        self.submodules_delete = kwargs.get('submodules_delete', False)
        self.get_full_repo = kwargs.get('get_full_repo', False)

    @property
@@ -858,6 +860,14 @@ def fetch(self):
                git(*pull_args, ignore_errors=1)
                git(*co_args)

        if self.submodules_delete:
            with working_dir(self.stage.source_path):
                for submodule_to_delete in self.submodules_delete:
                    args = ['rm', submodule_to_delete]
                    if not spack.config.get('config:debug'):
                        args.insert(1, '--quiet')
                    git(*args)

        # Init submodules if the user asked for them.
        if self.submodules:
            with working_dir(self.stage.source_path):

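The `submodules_delete` branch above builds a `git rm` argument list, inserting `--quiet` after `rm` unless Spack's debug config is on. A small sketch of just that argument-building step, with the debug flag passed as a plain parameter instead of reading `spack.config` (an assumption made so the sketch is self-contained):

```python
def git_rm_args(submodule, debug=False):
    """Build the 'git rm' argument list the way the fetch() hunk above
    does: '--quiet' is inserted right after 'rm' unless debug output
    is enabled."""
    args = ['rm', submodule]
    if not debug:
        args.insert(1, '--quiet')
    return args


assert git_rm_args('third_party/submodule0') == \
    ['rm', '--quiet', 'third_party/submodule0']
assert git_rm_args('third_party/submodule0', debug=True) == \
    ['rm', 'third_party/submodule0']
```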
@@ -13,6 +13,7 @@
import sys
import re
import os
import os.path
import inspect
import pstats
import argparse
@@ -21,6 +22,7 @@
from six import StringIO

import llnl.util.cpu
import llnl.util.filesystem as fs
import llnl.util.tty as tty
import llnl.util.tty.color as color
from llnl.util.tty.log import log_output
@@ -35,6 +37,7 @@
import spack.store
import spack.util.debug
import spack.util.path
import spack.util.executable as exe
from spack.error import SpackError


@@ -107,6 +110,35 @@ def add_all_commands(parser):
        parser.add_command(cmd)


def get_version():
    """Get a descriptive version of this instance of Spack.

    If this is a git repository, and if it is not on a release tag,
    return a string like:

        release_version-commits_since_release-commit

    If we *are* at a release tag, or if this is not a git repo, return
    the real spack release number (e.g., 0.13.3).

    """
    git_path = os.path.join(spack.paths.prefix, ".git")
    if os.path.exists(git_path):
        git = exe.which("git")
        if git:
            with fs.working_dir(spack.paths.prefix):
                desc = git(
                    "describe", "--tags", output=str, fail_on_error=False)

            if git.returncode == 0:
                match = re.match(r"v([^-]+)-([^-]+)-g([a-f\d]+)", desc)
                if match:
                    v, n, commit = match.groups()
                    return "%s-%s-%s" % (v, n, commit)

    return spack.spack_version


def index_commands():
    """create an index of commands by section for this help level"""
    index = {}
@@ -680,7 +712,7 @@ def main(argv=None):
    # -h, -H, and -V are special as they do not require a command, but
    # all the other options do nothing without a command.
    if args.version:
        print(spack.spack_version)
        print(get_version())
        return 0
    elif args.help:
        sys.stdout.write(parser.format_help(level=args.help))

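`get_version()` above parses `git describe --tags` output with a regex. This sketch replays only that parsing step outside of Spack; the `release_version` fallback is a stand-in for `spack.spack_version`, and the sample describe strings are illustrative:

```python
import re


def describe_to_version(desc, release_version='0.14.0'):
    """Mirror get_version()'s parsing of 'git describe --tags' output.

    Falls back to the plain release number when the regex does not
    match, e.g. when HEAD sits exactly on a release tag and the
    describe output has no '-<count>-g<sha>' suffix."""
    match = re.match(r"v([^-]+)-([^-]+)-g([a-f\d]+)", desc)
    if match:
        v, n, commit = match.groups()
        return "%s-%s-%s" % (v, n, commit)
    return release_version


# Off a tag: version, commits since release, and abbreviated sha.
assert describe_to_version("v0.13.3-914-g3d80b62") == "0.13.3-914-3d80b62"
# Exactly on a tag: the regex fails and the release number is returned.
assert describe_to_version("v0.14.0") == "0.14.0"
```

Note that the capture group starts after the literal `g`, so the leading `g` of the abbreviated commit hash is dropped from the reported version.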
@@ -1262,7 +1262,10 @@ def content_hash(self, content=None):
            raise spack.error.SpackError(err_msg)

        hash_content = list()
        source_id = fs.for_package_version(self, self.version).source_id()
        try:
            source_id = fs.for_package_version(self, self.version).source_id()
        except fs.ExtrapolationError:
            source_id = None
        if not source_id:
            # TODO? in cases where a digest or source_id isn't available,
            # should this attempt to download the source and set one? This
@@ -1505,9 +1508,10 @@ def _update_explicit_entry_in_db(self, rec, explicit):
            message = '{s.name}@{s.version} : marking the package explicit'
            tty.msg(message.format(s=self))

    def try_install_from_binary_cache(self, explicit):
    def try_install_from_binary_cache(self, explicit, unsigned=False):
        tty.msg('Searching for binary cache of %s' % self.name)
        specs = binary_distribution.get_specs(use_arch=True)
        specs = binary_distribution.get_spec(spec=self.spec,
                                             force=False)
        binary_spec = spack.spec.Spec.from_dict(self.spec.to_dict())
        binary_spec._mark_concrete()
        if binary_spec not in specs:
@@ -1521,7 +1525,7 @@ def try_install_from_binary_cache(self, explicit):
        tty.msg('Installing %s from binary cache' % self.name)
        binary_distribution.extract_tarball(
            binary_spec, tarball, allow_root=False,
            unsigned=False, force=False)
            unsigned=unsigned, force=False)
        self.installed_from_binary_cache = True
        spack.store.db.add(
            self.spec, spack.store.layout, explicit=explicit)
@@ -1662,7 +1666,8 @@ def do_install(self, **kwargs):
            tty.msg(colorize('@*{Installing} @*g{%s}' % self.name))

        if kwargs.get('use_cache', True):
            if self.try_install_from_binary_cache(explicit):
            if self.try_install_from_binary_cache(
                    explicit, unsigned=kwargs.get('unsigned', False)):
                tty.msg('Successfully installed %s from binary cache'
                        % self.name)
                print_pkg(self.prefix)

@@ -932,7 +932,7 @@ def dump_provenance(self, spec, path):
                tty.warn("Patch file did not exist: %s" % patch.path)

        # Install the package.py file itself.
        install(self.filename_for_package_name(spec), path)
        install(self.filename_for_package_name(spec.name), path)

    def purge(self):
        """Clear entire package instance cache."""
@@ -974,20 +974,12 @@ def providers_for(self, vpkg_spec):
    def extensions_for(self, extendee_spec):
        return [p for p in self.all_packages() if p.extends(extendee_spec)]

    def _check_namespace(self, spec):
        """Check that the spec's namespace is the same as this repository's."""
        if spec.namespace and spec.namespace != self.namespace:
            raise UnknownNamespaceError(spec.namespace)

    @autospec
    def dirname_for_package_name(self, spec):
    def dirname_for_package_name(self, pkg_name):
        """Get the directory name for a particular package. This is the
        directory that contains its package.py file."""
        self._check_namespace(spec)
        return os.path.join(self.packages_path, spec.name)
        return os.path.join(self.packages_path, pkg_name)

    @autospec
    def filename_for_package_name(self, spec):
    def filename_for_package_name(self, pkg_name):
        """Get the filename for the module we should load for a particular
        package. Packages for a Repo live in
        ``$root/<package_name>/package.py``
@@ -996,8 +988,7 @@ def filename_for_package_name(self, spec):
        package doesn't exist yet, so callers will need to ensure
        the package exists before importing.
        """
        self._check_namespace(spec)
        pkg_dir = self.dirname_for_package_name(spec.name)
        pkg_dir = self.dirname_for_package_name(pkg_name)
        return os.path.join(pkg_dir, package_file_name)

    @property

82 lib/spack/spack/schema/container.py Normal file
@@ -0,0 +1,82 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Schema for the 'container' subsection of Spack environments."""

#: Schema for the container attribute included in Spack environments
container_schema = {
    'type': 'object',
    'additionalProperties': False,
    'properties': {
        # The recipe formats that are currently supported by the command
        'format': {
            'type': 'string',
            'enum': ['docker', 'singularity']
        },
        # Describes the base image to start from and the version
        # of Spack to be used
        'base': {
            'type': 'object',
            'additionalProperties': False,
            'properties': {
                'image': {
                    'type': 'string',
                    'enum': ['ubuntu:18.04',
                             'ubuntu:16.04',
                             'centos:7',
                             'centos:6']
                },
                'spack': {
                    'type': 'string',
                    'enum': ['develop', '0.14', '0.14.0']
                }
            },
            'required': ['image', 'spack']
        },
        # Whether or not to strip installed binaries
        'strip': {
            'type': 'boolean',
            'default': True
        },
        # Additional system packages that are needed at runtime
        'os_packages': {
            'type': 'array',
            'items': {
                'type': 'string'
            }
        },
        # Add labels to the image
        'labels': {
            'type': 'object',
        },
        # Add a custom extra section at the bottom of a stage
        'extra_instructions': {
            'type': 'object',
            'additionalProperties': False,
            'properties': {
                'build': {'type': 'string'},
                'final': {'type': 'string'}
            }
        },
        # Reserved for properties that are specific to each format
        'singularity': {
            'type': 'object',
            'additionalProperties': False,
            'default': {},
            'properties': {
                'runscript': {'type': 'string'},
                'startscript': {'type': 'string'},
                'test': {'type': 'string'},
                'help': {'type': 'string'}
            }
        },
        'docker': {
            'type': 'object',
            'additionalProperties': False,
            'default': {},
        }
    }
}

properties = {'container': container_schema}
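For reference, a hypothetical `container` section that the schema above would accept; the assertions restate by hand only the constraints visible in the schema (allowed formats, required `base` keys, enumerated images and Spack versions), without pulling in a JSON-Schema validator:

```python
# A sketch of a valid 'container' section, matching the schema's enums
# and required keys. The concrete values here are one legal choice,
# not the only one.
config = {
    'format': 'docker',
    'base': {'image': 'ubuntu:18.04', 'spack': 'develop'},
    'strip': True,
    'os_packages': ['libgomp1'],
}

# 'format' must be one of the two supported recipe formats.
assert config['format'] in ('docker', 'singularity')
# 'base' requires both 'image' and 'spack'.
assert set(['image', 'spack']) <= set(config['base'])
# Both values must come from the schema's enumerations.
assert config['base']['image'] in (
    'ubuntu:18.04', 'ubuntu:16.04', 'centos:7', 'centos:6')
assert config['base']['spack'] in ('develop', '0.14', '0.14.0')
```

In a real environment this dict would live under the `spack:` top-level key of `spack.yaml`, and a JSON-Schema validator would enforce the same constraints mechanically.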
@@ -13,6 +13,7 @@
import spack.schema.cdash
import spack.schema.compilers
import spack.schema.config
import spack.schema.container
import spack.schema.gitlab_ci
import spack.schema.mirrors
import spack.schema.modules
@@ -26,6 +27,7 @@
    spack.schema.cdash.properties,
    spack.schema.compilers.properties,
    spack.schema.config.properties,
    spack.schema.container.properties,
    spack.schema.gitlab_ci.properties,
    spack.schema.mirrors.properties,
    spack.schema.modules.properties,

@@ -1117,6 +1117,18 @@ def _add_dependency(self, spec, deptypes):
        self._dependencies[spec.name] = dspec
        spec._dependents[self.name] = dspec

    def _add_default_platform(self):
        """If a spec has an os or a target and no platform, give it
        the default platform.

        This is private because it is used by the parser -- it's not
        expected to be used outside of ``spec.py``.

        """
        arch = self.architecture
        if arch and not arch.platform and (arch.os or arch.target):
            self._set_architecture(platform=spack.architecture.platform().name)

    #
    # Public interface
    #
@@ -4053,14 +4065,6 @@ def do_parse(self):
        except spack.parse.ParseError as e:
            raise SpecParseError(e)

        # If the spec has an os or a target and no platform, give it
        # the default platform
        platform_default = spack.architecture.platform().name
        for spec in specs:
            for s in spec.traverse():
                if s.architecture and not s.architecture.platform and \
                        (s.architecture.os or s.architecture.target):
                    s._set_architecture(platform=platform_default)
        return specs

    def spec_from_file(self):
@@ -4192,6 +4196,7 @@ def spec(self, name):
                else:
                    break

        spec._add_default_platform()
        return spec

    def variant(self, name=None):

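`_add_default_platform()` above fills in a platform only when the spec already carries an os or a target but no platform. A toy restatement of that rule, with the string `'linux'` standing in for `spack.architecture.platform().name` (an assumption; the real default depends on the host):

```python
def add_default_platform(platform, os_, target, default='linux'):
    """Return the platform a spec should end up with: the default is
    substituted only when platform is missing AND an os or a target
    was explicitly given, mirroring _add_default_platform()."""
    if not platform and (os_ or target):
        return default
    return platform


# An os or target alone pulls in the default platform...
assert add_default_platform(None, 'ubuntu18.04', None) == 'linux'
assert add_default_platform(None, None, 'x86_64') == 'linux'
# ...an explicit platform is left untouched...
assert add_default_platform('cray', 'cnl', None) == 'cray'
# ...and a spec with no architecture parts at all stays platform-less.
assert add_default_platform(None, None, None) is None
```

The diff also moves this defaulting from a post-parse loop in `do_parse()` into the parser's `spec()` method, so each spec is fixed up as soon as it is built.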
@@ -17,7 +17,7 @@
def mock_get_specs(database, monkeypatch):
    specs = database.query_local()
    monkeypatch.setattr(
        spack.binary_distribution, 'get_specs', lambda x: specs
        spack.binary_distribution, 'get_specs', lambda x, y: specs
    )


@@ -95,12 +95,16 @@ def test_create_template(parser, mock_test_repo, args, name, expected):
    (' ', 'name must be provided'),
    ('bad#name', 'name can only contain'),
])
def test_create_template_bad_name(parser, mock_test_repo, name, expected):
def test_create_template_bad_name(
        parser, mock_test_repo, name, expected, capsys):
    """Test template creation with bad name options."""
    constr_args = parser.parse_args(['--skip-editor', '-n', name])
    with pytest.raises(SystemExit, matches=expected):
    with pytest.raises(SystemExit):
        spack.cmd.create.create(parser, constr_args)

    captured = capsys.readouterr()
    assert expected in str(captured)


def test_build_system_guesser_no_stage(parser):
    """Test build system guesser when stage not provided."""
@@ -108,7 +112,7 @@ def test_build_system_guesser_no_stage(parser):

    # Ensure get the expected build system
    with pytest.raises(AttributeError,
                       matches="'NoneType' object has no attribute"):
                       match="'NoneType' object has no attribute"):
        guesser(None, '/the/url/does/not/matter')


@@ -142,7 +146,7 @@ def test_get_name_urls(parser, url, expected):
    assert name == expected


def test_get_name_error(parser, monkeypatch):
def test_get_name_error(parser, monkeypatch, capsys):
    """Test get_name UndetectableNameError exception path."""
    def _parse_name_offset(path, v):
        raise UndetectableNameError(path)
@@ -152,5 +156,7 @@ def _parse_name_offset(path, v):
    url = 'downloads.sourceforge.net/noapp/'
    args = parser.parse_args([url])

    with pytest.raises(SystemExit, matches="Couldn't guess a name"):
    with pytest.raises(SystemExit):
        spack.cmd.create.get_name(args)
    captured = capsys.readouterr()
    assert "Couldn't guess a name" in str(captured)

@@ -30,7 +30,9 @@ def test_packages_are_removed(config, mutable_database, capsys):


@pytest.mark.db
def test_gc_with_environment(config, mutable_database, capsys):
def test_gc_with_environment(
        config, mutable_database, mutable_mock_env_path, capsys
):
    s = spack.spec.Spec('simple-inheritance')
    s.concretize()
    s.package.do_install(fake=True, explicit=True)

@@ -1,4 +1,4 @@
# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

@@ -170,7 +170,7 @@ def ignore_stage_files():
    Used to track which leftover files in the stage have been seen.
    """
    # to start with, ignore the .lock file at the stage root.
    return set(['.lock', spack.stage._source_path_subdir])
    return set(['.lock', spack.stage._source_path_subdir, 'build_cache'])


def remove_whatever_it_is(path):
@@ -744,11 +744,31 @@ def mock_archive(request, tmpdir_factory):

@pytest.fixture(scope='session')
def mock_git_repository(tmpdir_factory):
    """Creates a very simple git repository with two branches and
    two commits.
    """Creates a simple git repository with two branches,
    two commits and two submodules. Each submodule has one commit.
    """
    git = spack.util.executable.which('git', required=True)

    suburls = []
    for submodule_count in range(2):
        tmpdir = tmpdir_factory.mktemp('mock-git-repo-submodule-dir-{0}'
                                       .format(submodule_count))
        tmpdir.ensure(spack.stage._source_path_subdir, dir=True)
        repodir = tmpdir.join(spack.stage._source_path_subdir)
        suburls.append((submodule_count, 'file://' + str(repodir)))

        # Initialize the repository
        with repodir.as_cwd():
            git('init')
            git('config', 'user.name', 'Spack')
            git('config', 'user.email', 'spack@spack.io')

            # r0 is just the first commit
            submodule_file = 'r0_file_{0}'.format(submodule_count)
            repodir.ensure(submodule_file)
            git('add', submodule_file)
            git('commit', '-m', 'mock-git-repo r0 {0}'.format(submodule_count))

    tmpdir = tmpdir_factory.mktemp('mock-git-repo-dir')
    tmpdir.ensure(spack.stage._source_path_subdir, dir=True)
    repodir = tmpdir.join(spack.stage._source_path_subdir)
@@ -759,6 +779,9 @@ def mock_git_repository(tmpdir_factory):
        git('config', 'user.name', 'Spack')
        git('config', 'user.email', 'spack@spack.io')
        url = 'file://' + str(repodir)
        for number, suburl in suburls:
            git('submodule', 'add', suburl,
                'third_party/submodule{0}'.format(number))

        # r0 is just the first commit
        r0_file = 'r0_file'

16 lib/spack/spack/test/container/cli.py Normal file
@@ -0,0 +1,16 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import llnl.util.filesystem as fs
import spack.main


containerize = spack.main.SpackCommand('containerize')


def test_command(configuration_dir, capsys):
    with capsys.disabled():
        with fs.working_dir(configuration_dir):
            output = containerize()
    assert 'FROM spack/ubuntu-bionic' in output
43 lib/spack/spack/test/container/conftest.py Normal file
@@ -0,0 +1,43 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import pytest

import spack.util.spack_yaml as syaml


@pytest.fixture()
def minimal_configuration():
    return {
        'spack': {
            'specs': [
                'gromacs',
                'mpich',
                'fftw precision=float'
            ],
            'container': {
                'format': 'docker',
                'base': {
                    'image': 'ubuntu:18.04',
                    'spack': 'develop'
                }
            }
        }
    }


@pytest.fixture()
def config_dumper(tmpdir):
    """Function that dumps an environment config in a temporary folder."""
    def dumper(configuration):
        content = syaml.dump(configuration, default_flow_style=False)
        config_file = tmpdir / 'spack.yaml'
        config_file.write(content)
        return str(tmpdir)
    return dumper


@pytest.fixture()
def configuration_dir(minimal_configuration, config_dumper):
    return config_dumper(minimal_configuration)
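The `config_dumper` fixture above serializes a configuration into a `spack.yaml` inside a temporary directory and hands back the directory path. A stand-alone sketch of the same pattern using only the standard library (`json` in place of `spack.util.spack_yaml`, an assumption made so the sketch runs without Spack installed):

```python
import json
import os
import tempfile


def dump_config(configuration, directory):
    """Write 'configuration' into the given directory and return the
    directory, mirroring what the dumper closure above does with
    tmpdir and spack.yaml."""
    config_file = os.path.join(directory, 'spack.json')
    with open(config_file, 'w') as f:
        json.dump(configuration, f)
    return directory


with tempfile.TemporaryDirectory() as tmpdir:
    where = dump_config({'spack': {'specs': ['zlib']}}, tmpdir)
    assert where == tmpdir
    with open(os.path.join(where, 'spack.json')) as f:
        assert json.load(f) == {'spack': {'specs': ['zlib']}}
```

Returning the directory rather than the file path is what lets callers such as `configuration_dir` hand the result straight to `fs.working_dir()`.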
74 lib/spack/spack/test/container/docker.py Normal file
@@ -0,0 +1,74 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import spack.container.writers as writers


def test_manifest(minimal_configuration):
    writer = writers.create(minimal_configuration)
    manifest_str = writer.manifest
    for line in manifest_str.split('\n'):
        assert 'echo' in line


def test_build_and_run_images(minimal_configuration):
    writer = writers.create(minimal_configuration)

    # Test the output of run property
    run = writer.run
    assert run.image == 'ubuntu:18.04'

    # Test the output of the build property
    build = writer.build
    assert build.image == 'spack/ubuntu-bionic'
    assert build.tag == 'latest'


def test_packages(minimal_configuration):
    # In this minimal configuration we don't have packages
    writer = writers.create(minimal_configuration)
    assert writer.os_packages is None

    # If we add them a list should be returned
    pkgs = ['libgomp1']
    minimal_configuration['spack']['container']['os_packages'] = pkgs
    writer = writers.create(minimal_configuration)
    p = writer.os_packages
    assert p.update
    assert p.install
    assert p.clean
    assert p.list == pkgs


def test_ensure_render_works(minimal_configuration):
    # Here we just want to ensure that nothing is raised
    writer = writers.create(minimal_configuration)
    writer()


def test_strip_is_set_from_config(minimal_configuration):
    writer = writers.create(minimal_configuration)
    assert writer.strip is True

    minimal_configuration['spack']['container']['strip'] = False
    writer = writers.create(minimal_configuration)
    assert writer.strip is False


def test_extra_instructions_is_set_from_config(minimal_configuration):
    writer = writers.create(minimal_configuration)
    assert writer.extra_instructions == (None, None)

    test_line = 'RUN echo Hello world!'
    e = minimal_configuration['spack']['container']
    e['extra_instructions'] = {}
    e['extra_instructions']['build'] = test_line
    writer = writers.create(minimal_configuration)
    assert writer.extra_instructions == (test_line, None)

    e['extra_instructions']['final'] = test_line
    del e['extra_instructions']['build']
    writer = writers.create(minimal_configuration)
    assert writer.extra_instructions == (None, test_line)
58
lib/spack/spack/test/container/images.py
Normal file
58
lib/spack/spack/test/container/images.py
Normal file
@@ -0,0 +1,58 @@
|
||||
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os.path

import pytest

import spack.container


@pytest.mark.parametrize('image,spack_version,expected', [
    ('ubuntu:18.04', 'develop', ('spack/ubuntu-bionic', 'latest')),
    ('ubuntu:18.04', '0.14.0', ('spack/ubuntu-bionic', '0.14.0')),
])
def test_build_info(image, spack_version, expected):
    output = spack.container.images.build_info(image, spack_version)
    assert output == expected


@pytest.mark.parametrize('image,spack_version', [
    ('ubuntu:18.04', 'doesnotexist')
])
def test_build_info_error(image, spack_version):
    with pytest.raises(ValueError, match=r"has no tag for"):
        spack.container.images.build_info(image, spack_version)


@pytest.mark.parametrize('image', [
    'ubuntu:18.04'
])
def test_package_info(image):
    update, install, clean = spack.container.images.package_info(image)
    assert update
    assert install
    assert clean


@pytest.mark.parametrize('extra_config,expected_msg', [
    ({'modules': {'enable': ['tcl']}}, 'the subsection "modules" in'),
    ({'concretization': 'separately'}, 'the "concretization" attribute'),
    ({'config': {'install_tree': '/some/dir'}},
     'the "config:install_tree" attribute has been set'),
    ({'view': '/some/dir'}, 'the "view" attribute has been set')
])
def test_validate(
        extra_config, expected_msg, minimal_configuration, config_dumper
):
    minimal_configuration['spack'].update(extra_config)
    spack_yaml_dir = config_dumper(minimal_configuration)
    spack_yaml = os.path.join(spack_yaml_dir, 'spack.yaml')

    with pytest.warns(UserWarning) as w:
        spack.container.validate(spack_yaml)

    # Tests are designed to raise only one warning
    assert len(w) == 1
    assert expected_msg in str(w.pop().message)
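The validation test above relies on warning recording: the context manager collects every warning raised in its body so the test can assert on the count and message afterwards. The same recording behavior can be sketched with only the standard library; `validate` here is a hypothetical stand-in for `spack.container.validate`, not the real implementation.

```python
import warnings


def validate(config):
    # Hypothetical validator: emit one UserWarning per unsupported
    # section, mirroring the kind of messages asserted on above.
    if 'modules' in config:
        warnings.warn('the subsection "modules" in the config is not supported',
                      UserWarning)


with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")  # record every warning, no deduplication
    validate({'modules': {'enable': ['tcl']}})

# Same shape of assertions as test_validate: exactly one warning,
# with a recognizable message.
assert len(w) == 1
assert 'the subsection "modules"' in str(w[0].message)
```

`pytest.warns(UserWarning)` works the same way, but additionally fails the test if no matching warning was raised at all.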
16
lib/spack/spack/test/container/schema.py
Normal file
@@ -0,0 +1,16 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import spack.container
import spack.schema.container


def test_images_in_schema():
    properties = spack.schema.container.container_schema['properties']
    allowed_images = set(
        properties['base']['properties']['image']['enum']
    )
    images_in_json = set(x for x in spack.container.images.data())
    assert images_in_json == allowed_images
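The test above guards against drift between two sources of truth: the `enum` in the JSON schema and the keys of the images data file. Comparing them as sets catches additions or removals on either side. A minimal sketch with hypothetical image names (the real lists live in Spack's schema and data files):

```python
# Hypothetical stand-ins for the schema 'enum' and the keys of the
# JSON data file; the real values come from spack.schema.container
# and spack.container.images.data().
schema_enum = ['ubuntu:18.04', 'ubuntu:16.04', 'centos:7', 'centos:6']
images_json = {
    'ubuntu:18.04': {}, 'ubuntu:16.04': {}, 'centos:7': {}, 'centos:6': {},
}

# Set difference reports drift in either direction, with names of the
# offending entries rather than just a boolean mismatch.
missing_from_schema = set(images_json) - set(schema_enum)
missing_from_data = set(schema_enum) - set(images_json)
assert not missing_from_schema
assert not missing_from_data
```

Asserting set equality directly (as the test does) is equivalent, but computing both differences gives clearer failure output when debugging.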
42
lib/spack/spack/test/container/singularity.py
Normal file
@@ -0,0 +1,42 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import pytest

import spack.container.writers as writers


@pytest.fixture
def singularity_configuration(minimal_configuration):
    minimal_configuration['spack']['container']['format'] = 'singularity'
    return minimal_configuration


def test_ensure_render_works(singularity_configuration):
    container_config = singularity_configuration['spack']['container']
    assert container_config['format'] == 'singularity'
    # Here we just want to ensure that nothing is raised
    writer = writers.create(singularity_configuration)
    writer()


@pytest.mark.parametrize('properties,expected', [
    ({'runscript': '/opt/view/bin/h5ls'},
     {'runscript': '/opt/view/bin/h5ls',
      'startscript': '',
      'test': '',
      'help': ''})
])
def test_singularity_specific_properties(
        properties, expected, singularity_configuration
):
    # Set the property in the configuration
    container_config = singularity_configuration['spack']['container']
    for name, value in properties.items():
        container_config.setdefault('singularity', {})[name] = value

    # Assert the properties return the expected values
    writer = writers.create(singularity_configuration)
    for name, value in expected.items():
        assert getattr(writer, name) == value
@@ -14,4 +14,3 @@ config:
  dirty: false
  module_roots:
    tcl: $spack/share/spack/modules
    lmod: $spack/share/spack/lmod
@@ -55,10 +55,11 @@ def test_global_db_initializtion():
@pytest.fixture()
def upstream_and_downstream_db(tmpdir_factory, gen_mock_layout):
    mock_db_root = str(tmpdir_factory.mktemp('mock_db_root'))
    upstream_db = spack.database.Database(mock_db_root)
    upstream_write_db = spack.database.Database(mock_db_root)
    upstream_db = spack.database.Database(mock_db_root, is_upstream=True)
    # Generate initial DB file to avoid reindex
    with open(upstream_db._index_path, 'w') as db_file:
        upstream_db._write_to_file(db_file)
    with open(upstream_write_db._index_path, 'w') as db_file:
        upstream_write_db._write_to_file(db_file)
    upstream_layout = gen_mock_layout('/a/')

    downstream_db_root = str(
@@ -69,13 +70,14 @@ def upstream_and_downstream_db(tmpdir_factory, gen_mock_layout):
        downstream_db._write_to_file(db_file)
    downstream_layout = gen_mock_layout('/b/')

    yield upstream_db, upstream_layout, downstream_db, downstream_layout
    yield upstream_write_db, upstream_db, upstream_layout,\
        downstream_db, downstream_layout


@pytest.mark.usefixtures('config')
def test_installed_upstream(upstream_and_downstream_db):
    upstream_db, upstream_layout, downstream_db, downstream_layout = (
        upstream_and_downstream_db)
    upstream_write_db, upstream_db, upstream_layout,\
        downstream_db, downstream_layout = (upstream_and_downstream_db)

    default = ('build', 'link')
    x = MockPackage('x', [], [])
@@ -89,7 +91,14 @@ def test_installed_upstream(upstream_and_downstream_db):
    spec.concretize()

    for dep in spec.traverse(root=False):
        upstream_db.add(dep, upstream_layout)
        upstream_write_db.add(dep, upstream_layout)
    upstream_db._read()

    for dep in spec.traverse(root=False):
        record = downstream_db.get_by_hash(dep.dag_hash())
        assert record is not None
        with pytest.raises(spack.database.ForbiddenLockError):
            record = upstream_db.get_by_hash(dep.dag_hash())

    new_spec = spack.spec.Spec('w')
    new_spec.concretize()
@@ -110,8 +119,8 @@ def test_installed_upstream(upstream_and_downstream_db):

@pytest.mark.usefixtures('config')
def test_removed_upstream_dep(upstream_and_downstream_db):
    upstream_db, upstream_layout, downstream_db, downstream_layout = (
        upstream_and_downstream_db)
    upstream_write_db, upstream_db, upstream_layout,\
        downstream_db, downstream_layout = (upstream_and_downstream_db)

    default = ('build', 'link')
    z = MockPackage('z', [], [])
@@ -122,13 +131,15 @@ def test_removed_upstream_dep(upstream_and_downstream_db):
    spec = spack.spec.Spec('y')
    spec.concretize()

    upstream_db.add(spec['z'], upstream_layout)
    upstream_write_db.add(spec['z'], upstream_layout)
    upstream_db._read()

    new_spec = spack.spec.Spec('y')
    new_spec.concretize()
    downstream_db.add(new_spec, downstream_layout)

    upstream_db.remove(new_spec['z'])
    upstream_write_db.remove(new_spec['z'])
    upstream_db._read()

    new_downstream = spack.database.Database(
        downstream_db.root, upstream_dbs=[upstream_db])
@@ -143,8 +154,8 @@ def test_add_to_upstream_after_downstream(upstream_and_downstream_db):
    DB. When a package is recorded as installed in both, the results should
    refer to the downstream DB.
    """
    upstream_db, upstream_layout, downstream_db, downstream_layout = (
        upstream_and_downstream_db)
    upstream_write_db, upstream_db, upstream_layout,\
        downstream_db, downstream_layout = (upstream_and_downstream_db)

    x = MockPackage('x', [], [])
    mock_repo = MockPackageMultiRepo([x])
@@ -155,7 +166,8 @@ def test_add_to_upstream_after_downstream(upstream_and_downstream_db):

    downstream_db.add(spec, downstream_layout)

    upstream_db.add(spec, upstream_layout)
    upstream_write_db.add(spec, upstream_layout)
    upstream_db._read()

    upstream, record = downstream_db.query_by_spec_hash(spec.dag_hash())
    # Even though the package is recorded as installed in the upstream DB,
@@ -731,3 +743,23 @@ def test_query_unused_specs(mutable_database):
    unused = spack.store.db.unused_specs
    assert len(unused) == 1
    assert unused[0].name == 'cmake'


@pytest.mark.regression('10019')
def test_query_spec_with_conditional_dependency(mutable_database):
    # The issue is triggered by having dependencies that are
    # conditional on a Boolean variant
    s = spack.spec.Spec('hdf5~mpi')
    s.concretize()
    s.package.do_install(fake=True, explicit=True)

    results = spack.store.db.query_local('hdf5 ^mpich')
    assert not results


@pytest.mark.regression('10019')
def test_query_spec_with_non_conditional_virtual_dependency(database):
    # Ensure the same issue doesn't come up for virtual
    # dependencies that are not conditional on variants
    results = spack.store.db.query_local('mpileaks ^mpich')
    assert len(results) == 1
@@ -19,7 +19,6 @@
from spack.fetch_strategy import GitFetchStrategy
from spack.util.executable import which


pytestmark = pytest.mark.skipif(
    not which('git'), reason='requires git to be installed')

@@ -169,7 +168,7 @@ def test_git_extra_fetch(tmpdir):
def test_needs_stage():
    """Trigger a NoStageError when attempting a fetch without a stage."""
    with pytest.raises(spack.fetch_strategy.NoStageError,
                       matches=_mock_transport_error):
                       match=r"set_stage.*before calling fetch"):
        fetcher = GitFetchStrategy(git='file:///not-a-real-git-repo')
        fetcher.fetch()
@@ -217,3 +216,59 @@ def test_get_full_repo(get_full_repo, git_version, mock_git_repository,
    else:
        assert(nbranches == 2)
        assert(ncommits == 1)


@pytest.mark.disable_clean_stage_check
@pytest.mark.parametrize("submodules", [True, False])
def test_gitsubmodule(submodules, mock_git_repository, config,
                      mutable_mock_repo):
    """
    Test GitFetchStrategy behavior with submodules
    """
    type_of_test = 'tag-branch'
    t = mock_git_repository.checks[type_of_test]

    # Construct the package under test
    spec = Spec('git-test')
    spec.concretize()
    pkg = spack.repo.get(spec)
    args = copy.copy(t.args)
    args['submodules'] = submodules
    pkg.versions[ver('git')] = args
    pkg.do_stage()
    with working_dir(pkg.stage.source_path):
        for submodule_count in range(2):
            file_path = os.path.join(pkg.stage.source_path,
                                     'third_party/submodule{0}/r0_file_{0}'
                                     .format(submodule_count))
            if submodules:
                assert os.path.isfile(file_path)
            else:
                assert not os.path.isfile(file_path)


@pytest.mark.disable_clean_stage_check
def test_gitsubmodules_delete(mock_git_repository, config, mutable_mock_repo):
    """
    Test GitFetchStrategy behavior with submodules_delete
    """
    type_of_test = 'tag-branch'
    t = mock_git_repository.checks[type_of_test]

    # Construct the package under test
    spec = Spec('git-test')
    spec.concretize()
    pkg = spack.repo.get(spec)
    args = copy.copy(t.args)
    args['submodules'] = True
    args['submodules_delete'] = ['third_party/submodule0',
                                 'third_party/submodule1']
    pkg.versions[ver('git')] = args
    pkg.do_stage()
    with working_dir(pkg.stage.source_path):
        file_path = os.path.join(pkg.stage.source_path,
                                 'third_party/submodule0')
        assert not os.path.isdir(file_path)
        file_path = os.path.join(pkg.stage.source_path,
                                 'third_party/submodule1')
        assert not os.path.isdir(file_path)
@@ -316,14 +316,14 @@ def test_uninstall_by_spec_errors(mutable_database):
    """Test exceptional cases with the uninstall command."""

    # Try to uninstall a spec that has not been installed
    rec = mutable_database.get_record('zmpi')
    with pytest.raises(InstallError, matches="not installed"):
        PackageBase.uninstall_by_spec(rec.spec)
    spec = Spec('dependent-install')
    spec.concretize()
    with pytest.raises(InstallError, match="is not installed"):
        PackageBase.uninstall_by_spec(spec)

    # Try an unforced uninstall of a spec with dependencies
    rec = mutable_database.get_record('mpich')

    with pytest.raises(PackageStillNeededError, matches="cannot uninstall"):
    with pytest.raises(PackageStillNeededError, match="Cannot uninstall"):
        PackageBase.uninstall_by_spec(rec.spec)
@@ -245,7 +245,7 @@ def test_unsupported_optimization_flags(target_name, compiler, version):
    target = llnl.util.cpu.targets[target_name]
    with pytest.raises(
        llnl.util.cpu.UnsupportedMicroarchitecture,
        matches='cannot produce optimized binary'
        match='cannot produce optimized binary'
    ):
        target.optimization_flags(compiler, version)

@@ -287,5 +287,5 @@ def test_invalid_family():
        vendor='Imagination', features=[], compilers={}, generation=0
    )
    with pytest.raises(AssertionError,
                       matches='a target is expected to belong'):
                       match='a target is expected to belong'):
        multi_parents.family
@@ -116,13 +116,13 @@ def test_parent_dir(self, stage):

    # Make sure we get the right error if we try to copy a parent into
    # a descendent directory.
    with pytest.raises(ValueError, matches="Cannot copy"):
    with pytest.raises(ValueError, match="Cannot copy"):
        with fs.working_dir(str(stage)):
            fs.copy_tree('source', 'source/sub/directory')

    # Only point with this check is to make sure we don't try to perform
    # the copy.
    with pytest.raises(IOError, matches="No such file or directory"):
    with pytest.raises(IOError, match="No such file or directory"):
        with fs.working_dir(str(stage)):
            fs.copy_tree('foo/ba', 'foo/bar')
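The recurring `matches=` → `match=` rename in these hunks matters because `pytest.raises` only recognizes `match`, which it applies as a regex search against the string form of the raised exception; a misspelled keyword was silently ignored, so the message was never checked. A minimal model of that behavior, using a hypothetical `FakeError` and helper rather than pytest itself:

```python
import re


class FakeError(Exception):
    pass


def raises_with_match(exc_type, pattern, fn):
    # Minimal model of pytest.raises(ExcType, match=pattern): call fn,
    # require that exc_type is raised, and re.search the pattern against
    # str() of the exception value.
    try:
        fn()
    except exc_type as e:
        assert re.search(pattern, str(e)), "pattern did not match message"
        return e
    raise AssertionError("expected exception was not raised")


def boom():
    raise FakeError("Cannot uninstall mpich: still needed")


err = raises_with_match(FakeError, r"Cannot uninstall", boom)
assert isinstance(err, FakeError)
```

Because `match` is a search, not a full match, a short anchor like `"Cannot uninstall"` is enough; case and wording must still agree with the actual message, which is why the diff also adjusts some patterns.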
63
lib/spack/spack/test/main.py
Normal file
@@ -0,0 +1,63 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import os

import llnl.util.filesystem as fs

import spack.paths
from spack.main import get_version, main


def test_get_version_no_match_git(tmpdir, working_env):
    git = str(tmpdir.join("git"))
    with open(git, "w") as f:
        f.write("""#!/bin/sh
echo v0.13.3
""")
    fs.set_executable(git)

    os.environ["PATH"] = str(tmpdir)
    assert spack.spack_version == get_version()


def test_get_version_match_git(tmpdir, working_env):
    git = str(tmpdir.join("git"))
    with open(git, "w") as f:
        f.write("""#!/bin/sh
echo v0.13.3-912-g3519a1762
""")
    fs.set_executable(git)

    os.environ["PATH"] = str(tmpdir)
    assert "0.13.3-912-3519a1762" == get_version()


def test_get_version_no_repo(tmpdir, monkeypatch):
    monkeypatch.setattr(spack.paths, "prefix", str(tmpdir))
    assert spack.spack_version == get_version()


def test_get_version_no_git(tmpdir, working_env):
    os.environ["PATH"] = str(tmpdir)
    assert spack.spack_version == get_version()


def test_main_calls_get_version(tmpdir, capsys, working_env):
    os.environ["PATH"] = str(tmpdir)
    main(["-V"])
    assert spack.spack_version == capsys.readouterr()[0].strip()


def test_get_version_bad_git(tmpdir, working_env):
    bad_git = str(tmpdir.join("git"))
    with open(bad_git, "w") as f:
        f.write("""#!/bin/sh
exit 1
""")
    fs.set_executable(bad_git)

    os.environ["PATH"] = str(tmpdir)
    assert spack.spack_version == get_version()
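These version tests all use the same trick: write a fake `git` script into a temporary directory, mark it executable, and put that directory first on `PATH`, so any code that shells out to `git` runs the stub instead of the real binary. A self-contained sketch of the technique (assumes a POSIX shell, like the tests above; the stub and its output are illustrative):

```python
import os
import stat
import subprocess
import tempfile

# Drop a fake 'git' into a temp directory; it ignores its arguments
# and always prints a fixed version string.
tmpdir = tempfile.mkdtemp()
fake_git = os.path.join(tmpdir, "git")
with open(fake_git, "w") as f:
    f.write("#!/bin/sh\necho v0.13.3\n")
os.chmod(fake_git, os.stat(fake_git).st_mode | stat.S_IXUSR)

# Prepend the temp directory to PATH for the child process only,
# rather than mutating os.environ globally.
env = dict(os.environ,
           PATH=tmpdir + os.pathsep + os.environ.get("PATH", ""))
out = subprocess.run(["git", "describe"], env=env,
                     capture_output=True, text=True).stdout.strip()
assert out == "v0.13.3"
```

The tests above go further and *replace* `PATH` entirely (`os.environ["PATH"] = str(tmpdir)`), which also covers the "no git at all" case when the stub is absent; the `working_env` fixture restores the environment afterwards.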
@@ -214,7 +214,7 @@ def test_buildcache(mock_archive, tmpdir):
    stage.destroy()

    # Remove cached binary specs since we deleted the mirror
    bindist._cached_specs = None
    bindist._cached_specs = set()


def test_relocate_text(tmpdir):
@@ -927,8 +927,11 @@ def test_stage_create_replace_path(tmp_build_stage_dir):
    assert os.path.isdir(stage.path)


def test_cannot_access():
def test_cannot_access(capsys):
    """Ensure can_access dies with the expected error."""
    with pytest.raises(SystemExit, matches='Insufficient permissions'):
    with pytest.raises(SystemExit):
        # It's far more portable to use a non-existent filename.
        spack.stage.ensure_access('/no/such/file')

    captured = capsys.readouterr()
    assert 'Insufficient permissions' in str(captured)