* Separate Apple Clang from LLVM Clang
Apple Clang is a compiler in its own right. All places
referring to the "-apple" suffix have been updated.
* Hack to use a dash in 'apple-clang'
To be able to use autodoc from Sphinx, we need
a valid Python name for the module that contains
Apple's Clang code.
* Updated packages to account for the existence of apple-clang
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added unit test for Xcode-related functions
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This change also adds a code path through the spack ci pipelines
infrastructure that supports PR testing on the Spack repository.
Gitlab pipelines that run for a PR (either its creation or a push
to the PR branch) will only verify that the packages in the environment
build without error. When the PR branch is merged to develop,
another pipeline will run which pushes the generated binaries
to the binary mirror.
Packages built with the lmod `core_compiler` are placed in `Core`.
Other packages may also belong in `Core`. For example, python may be built with a proprietary compiler for performance, but still belong in the `Core` directory.
With this PR, the lmod config can include a `core_specs` list. Any package that satisfies a spec in that list is placed in `Core`, regardless of its compiler or dependencies.
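A sketch of what this looks like in `modules.yaml` (the compiler and package names are illustrative):
```yaml
modules:
  lmod:
    core_compilers:
    - 'gcc@4.8.5'
    # Any spec matching an entry below lands in `Core`,
    # regardless of its compiler or dependencies
    core_specs:
    - 'python'
    - 'cmake'
```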
This improves the documentation for `spack external find` in several ways:
* Provide a code example of implementing `determine_spec_details` for a package
* Explain how to define executables to look for (e.g., that they are treated as regular expressions and so can pull in unexpected files).
* Add the "why" for a couple of constraints (i.e. explain that this logic only works for build/run deps because it examines `PATH` for executables)
* Spread the docs between build customization and packaging sections
* Add cross-references
* Add a label so that `spack external find` is linked from the command reference.
Add a `spack external find` command that tries to populate
`packages.yaml` with external packages from the user's `$PATH`. This
focuses on finding build dependencies. Currently, support has only been
added for `cmake`.
For a package to be discoverable with `spack external find`, it must define:
* an `executables` class attribute containing a list of
regular expressions that match executable names.
* a `determine_spec_details(prefix, exes_in_prefix)` method
Spack will call `determine_spec_details()` once for each prefix where
executables are found, passing in the path to the prefix and the path to
all found executables. The package is responsible for invoking the
executables and figuring out what types of installations are in the
prefix, and returning one or more specs (each with version, variants,
or whatever else the user decides to include in the spec).
The found specs and prefixes will be added to the user's `packages.yaml`
file. Providing the `--not-buildable` option will mark all generated
entries in `packages.yaml` as `buildable: False`.
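As a hedged sketch of the two requirements above (the regex, version parsing, and class body are illustrative, not the exact `cmake` implementation that was merged):
```python
import os
import re

from spack import *


class Cmake(Package):
    """CMake build-system generator."""

    homepage = "https://cmake.org"

    # Regular expressions matched against the names of executables in PATH
    executables = ['cmake']

    @classmethod
    def determine_spec_details(cls, prefix, exes_in_prefix):
        # Find the cmake binary among the executables located in this prefix
        exe = next((x for x in exes_in_prefix
                    if os.path.basename(x) == 'cmake'), None)
        if not exe:
            return None
        # Invoke it to extract a version, then build a spec from the result
        output = Executable(exe)('--version', output=str, error=str)
        match = re.search(r'cmake version (\S+)', output)
        if match:
            return Spec('cmake@{0}'.format(match.group(1)))
```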
Since #9481, Python's `None` is not permitted as a value for
multi-valued (MV) variants; the string 'none' is used instead.
This adds the same fix to the amgx and lammps packages.
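For illustration, a multi-valued variant declaration now looks like this (the variant name and values are hypothetical):
```python
variant(
    'cuda_arch',
    default='none',  # the string 'none', not Python's None
    values=('none', '35', '60', '70'),
    multi=True,
    description='CUDA architectures to build for',
)
```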
To specify an environment for a command, the user can specify
`spack -e <env>`. The documentation incorrectly specified `-E` (which
is actually used to ignore any implicit use of environments).
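For example:
```console
$ spack -e myenv install    # run `install` in the environment 'myenv'
$ spack -E find             # ignore any implicit use of environments
```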
Since #16132, we've consolidated the setting of FORCE_UNSAFE_CONFIGURE
in `autotools.py`, so we no longer need to set it in packages like
`coreutils`, in our commands, or in our container recipes.
- [x] Remove FORCE_UNSAFE_CONFIGURE from packages
- [x] Remove FORCE_UNSAFE_CONFIGURE from container recipes
- [x] Remove FORCE_UNSAFE_CONFIGURE from `spack ci` command
* Moved link to the right place in the docs
* Fixed a few minor issues in extensions docs
Fixed a typo, added a subsubsection for better
navigation, reworded "modules in Python" as
"Python packages"
Currently, to force Spack to use an external MPI, you have to specify `buildable: False`
for every MPI provider in Spack in your packages.yaml file. This is both tedious and
fragile, as new MPI providers can be added and break your workflow when you do a
git pull.
This PR allows you to specify an entire virtual dependency as non-buildable, and
specify particular implementations to be built:
```yaml
packages:
  all:
    providers:
      mpi: [mpich]
  mpi:
    buildable: false
    paths:
      mpich@3.2 %gcc@7.3.0: /usr/packages/mpich-3.2-gcc-7.3.0
```
will force all Spack builds to use the specified `mpich` install.
* try extend path to solve PyQt5.sip not found issue
* disable private sip installation in sippackage class
* undo manual PyQt5 dir creation in py-sip site-packages dir
* fix typo
* fix typo
* also apply fix to PyQt4
* tidy up
* flake8 and tidy up
* tidy and undo hardcoding of python_include_dir
* replace hardcoded python inc dir
* fix minor issues
* rethink include dir variable name
* improve style
* add new versions
* implement new sip setup to qsci installation
* set sip-incdir correctly for the new setup
* setup extend_path thing before qsci python bindings
* take care of conflict
* flake8
* also extend for PyQt4
* improve style
* improve style
* SipPackage build sys should depend on py-sip
* consolidate extend_path fixes into SipPackage
* fix typo
* fix bugs
* flake8
* revert sip doc to pre-resource setup
* import os module
* flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Add a `define_from_variant` helper function to CMake-based Spack
packages to convert package variants into CMake arguments. For
example:
```python
args.append('-DFOO=%s' % ('ON' if '+foo' in self.spec else 'OFF'))
```
can be replaced with:
```python
args.append(self.define_from_variant('foo'))
```
The following conversions are handled automatically:
* Flag variants will be converted to CMake booleans
* Multivalued variants will be converted to semicolon-separated strings
* Other variant values are converted to CMake string arguments
This also adds a `define` helper method to convert any variable to
a CMake argument. It has the same conversion rules as
`define_from_variant` (but operates directly on values rather than
requiring the user to supply the name of a package variant).
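For instance (the strings in the comments indicate the conversion rules; exact formatting may differ):
```python
# Booleans become CMake ON/OFF flags
args.append(self.define('ENABLE_FOO', True))        # e.g. -DENABLE_FOO:BOOL=ON
# Lists become semicolon-separated CMake strings
args.append(self.define('FOO_VALUES', ['a', 'b']))  # e.g. -DFOO_VALUES:STRING=a;b
```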
Add an optional `submodules_delete` field to Git versions in Spack
packages that allows them to remove specific submodules.
For example, the nervanagpu submodule has become unavailable for the
PyTorch project (see https://github.com/pytorch/pytorch/issues/19457).
Removing this submodule allows 0.4.1 to build.
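A sketch of the new field in a `version` directive (the tag and submodule path follow the PyTorch example above):
```python
version('0.4.1', tag='v0.4.1', submodules=True,
        submodules_delete=['third_party/nervanagpu'])
```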
This PR adds a new command to Spack:
```console
$ spack containerize -h
usage: spack containerize [-h] [--config CONFIG]

creates recipes to build images for different container runtimes

optional arguments:
  -h, --help       show this help message and exit
  --config CONFIG  configuration for the container recipe that will be generated
```
which takes an environment with an additional `container` section:
```yaml
spack:
  specs:
  - gromacs build_type=Release
  - mpich
  - fftw precision=float

  packages:
    all:
      target: [broadwell]

  container:
    # Select the format of the recipe e.g. docker,
    # singularity or anything else that is currently supported
    format: docker

    # Select from a valid list of images
    base:
      image: "ubuntu:18.04"
      spack: prerelease

    # Additional system packages that are needed at runtime
    os_packages:
    - libgomp1
```
and turns it into a `Dockerfile` or a Singularity definition file, for instance:
```Dockerfile
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:prerelease as builder

# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&&  (echo "spack:" \
&&   echo "  specs:" \
&&   echo "  - gromacs build_type=Release" \
&&   echo "  - mpich" \
&&   echo "  - fftw precision=float" \
&&   echo "  packages:" \
&&   echo "    all:" \
&&   echo "      target:" \
&&   echo "      - broadwell" \
&&   echo "  config:" \
&&   echo "    install_tree: /opt/software" \
&&   echo "  concretization: together" \
&&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml

# Install the software, remove unnecessary deps and strip executables
RUN cd /opt/spack-environment && spack install && spack autoremove -y
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
    xargs file -i | \
    grep 'charset=binary' | \
    grep 'x-executable\|x-archive\|x-sharedlib' | \
    awk -F: '{print $1}' | xargs strip -s

# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
    spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh

# Bare OS image to run the installed executables
FROM ubuntu:18.04

COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh

RUN apt-get -yqq update && apt-get -yqq upgrade \
 && apt-get -yqq install libgomp1 \
 && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
```
* Unified environment modifications in config files
fixes #13357
This commit factors all the code involved in the
validation (schema) and parsing of environment modifications
from configuration files into a single place. The factored-out
code is then used for module files and compiler configuration.
Attributes were separated by dashes in `compilers.yaml` files and
by underscores in `modules.yaml` files. This PR unifies the syntax
on attributes separated by underscores.
Unit testing of environment modifications in compilers
has been refactored and simplified.
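A sketch of the unified syntax in `compilers.yaml` (the compiler entry and paths are illustrative); `prepend_path` was previously written `prepend-path` in this file:
```yaml
compilers:
- compiler:
    spec: gcc@7.3.0
    operating_system: ubuntu18.04
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    environment:
      set:
        MY_VAR: my_value
      prepend_path:  # underscore, matching modules.yaml
        LD_LIBRARY_PATH: /path/to/lib
```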
Previously the `spack load` command was a wrapper around `module load`. This required some bootstrapping of modules to make `spack load` work properly.
With this PR, the `spack` shell function handles the environment modifications necessary to add packages to your user environment. This removes the dependence on environment modules or lmod and removes the requirement to bootstrap spack (beyond using the setup-env scripts).
Included in this PR is support for macOS when using Apple's System Integrity Protection (SIP), which is enabled by default in modern macOS versions. SIP clears the `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH` variables on process startup for executables that live in `/usr` (but not `/usr/local`, `/System`, `/bin`, and `/sbin`, among other system locations). Spack cannot know the `LD_LIBRARY_PATH` of the calling process when executed using `/bin/sh` and `/usr/bin/python`. The `spack` shell function now manually forwards these two variables, if they are present, as `SPACK_<VAR>` and recovers those values on startup.
- [x] spack load/unload no longer delegate to modules
- [x] refactor user_environment modification calculations
- [x] update documentation for spack load/unload
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
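For example, after sourcing the setup script, loading and unloading work without any module system (the package name is illustrative):
```console
$ . share/spack/setup-env.sh
$ spack load cmake      # PATH and friends are modified directly
$ spack unload cmake
```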
Rework Spack's continuous integration workflow to be environment-based.
- Add the `spack ci` command, which replaces the many scripts in `bin/`
- `spack ci` decouples the CI workflow from the spack instance:
- CI is defined in a spack environment
- environment is in its own (single) git repository, separate from Spack
- spack instance used to run the pipeline is up to the user
- A new `gitlab-ci` section in environments allows users to configure how
specs in the environment should be mapped to runners
- Compilers can be bootstrapped in the new pipeline workflow
- Add extensive documentation on pipelines (see `pipelines.rst` for further details)
- Add extensive tests for pipeline code
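A hedged sketch of the new section in a `spack.yaml` (the spec, runner tags, and image are illustrative):
```yaml
spack:
  specs:
  - readline
  gitlab-ci:
    mappings:
    - match:
      - os=ubuntu18.04
      runner-attributes:
        tags: [spack-kube]
        image: spack/ubuntu-bionic
```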
* Spack can uninstall unused specs
fixes #4382
Added an option to `spack uninstall` that removes all unused specs, i.e.
build dependencies or transitive dependencies that are left
in the store after the specs that pulled them in have been removed.
* Moved the functionality to its own command
The command has been named 'spack autoremove' to follow the naming used
for the same functionality by other widely known package managers, e.g.
yum and apt.
* Speed-up autoremoving specs by not locking and re-reading the scratch DB
* Make autoremove work directly on Spack's store
* Added unit tests for the new command
* Display a terser output to the user
* Renamed the "autoremove" command "gc"
Following discussion there's more consensus around
the latter name.
* Preserve root specs in env contexts
* Instead of preserving specs, restrict gc to the active environment
* Added docs
* Added a unit test for gc within an environment
* Updated copyright to 2020
* Updated documentation according to review
Rephrased a couple of sentences, added references to
`spack find` and dependency types.
* Updated function naming and docstrings
* Simplified computation of unused specs
Since the new approach uses private attributes of the DB
it has been coded as a method of that class rather than a
freestanding function.
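Typical usage (behavior within an environment follows the restriction described above):
```console
$ spack gc             # remove unused specs from the store
$ spack -e myenv gc    # restrict gc to the environment 'myenv'
```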
Previously, `spack test` automatically passed all of its arguments to
`pytest -k` if no options were provided, and to `pytest` if they were.
`spack test -l` also provided a list of test filenames, but they didn't
really let you completely narrow down which tests you wanted to run.
Instead of trying to do our own weird thing, this passes `spack test`
args directly to `pytest`, and omits the implicit `-k`. This means we
can now run, e.g.:
```console
$ spack test spec_syntax.py::TestSpecSyntax::test_ambiguous
```
This wasn't possible before, because we'd pass the fully qualified name
to `pytest -k` and get an error.
Because `pytest` doesn't have the greatest ability to list tests, I've
tweaked the `-l`/`--list`, `-L`/`--list-long`, and `-N`/`--list-names`
options to `spack test` so that they help you understand the names
better. You can combine these options with `-k` or other arguments to do
pretty powerful searches.
This one makes it easy to get a list of names so you can run tests in
different orders (something I find useful for debugging `pytest` issues):
```console
$ spack test --list-names -k "spec and concretize"
cmd/env.py::test_concretize_user_specs_together
concretize.py::TestConcretize::test_conflicts_in_spec
concretize.py::TestConcretize::test_find_spec_children
concretize.py::TestConcretize::test_find_spec_none
concretize.py::TestConcretize::test_find_spec_parents
concretize.py::TestConcretize::test_find_spec_self
concretize.py::TestConcretize::test_find_spec_sibling
concretize.py::TestConcretize::test_no_matching_compiler_specs
concretize.py::TestConcretize::test_simultaneous_concretization_of_specs
spec_dag.py::TestSpecDag::test_concretize_deptypes
spec_dag.py::TestSpecDag::test_copy_concretized
```
You can combine any list option with keywords:
```console
$ spack test --list -k microarchitecture
llnl/util/cpu.py modules/lmod.py
```
```console
$ spack test --list-long -k microarchitecture
llnl/util/cpu.py::
    test_generic_microarchitecture
modules/lmod.py::TestLmod::
    test_only_generic_microarchitectures_in_root
```
Or just list specific files:
```console
$ spack test --list-long cmd/test.py
cmd/test.py::
    test_list                       test_list_names_with_pytest_arg
    test_list_long                  test_list_with_keywords
    test_list_long_with_pytest_arg  test_list_with_pytest_arg
    test_list_names
```
Hopefully this stuff will help with debugging test issues.
- [x] make `spack test` send args directly to `pytest` instead of trying
to do fancy things.
- [x] rework `--list`, `--list-long`, and add `--list-names` to make
searching for tests easier.
- [x] make it possible to mix Spack's list args with `pytest` args
(they're just fancy parsing around `pytest --collect-only`)
- [x] add docs
- [x] add tests
- [x] update spack completion
Users can now list mirrors of the main URL in packages.
- [x] Instead of just a single `url` attribute, users can provide a list (`urls`) in the package, and these will be tried in order by the fetch strategy.
- [x] To handle one of the most common mirror cases, define a `GNUMirrorPackage` mixin to handle all the standard GNU mirrors. GNU packages can set `gnu_mirror_path` to define the path within a mirror, and the mixin handles setting up all the requisite GNU mirror URLs.
- [x] update all GNU packages in `builtin` to use the `GNUMirrorPackage` mixin.
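A hedged sketch of a package using the mixin (the package, path, and version are illustrative):
```python
from spack import *


class Hello(AutotoolsPackage, GNUMirrorPackage):
    """GNU Hello, as an illustrative example."""

    homepage = "https://www.gnu.org/software/hello/"

    # Path within any GNU mirror; the mixin expands this into the full
    # list of standard GNU mirror URLs, tried in order by the fetcher.
    gnu_mirror_path = "hello/hello-2.10.tar.gz"

    version('2.10')  # checksum omitted in this sketch
```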
I have, more than once, tried to install the list of things needed
to build the docs, only to discover that the list doesn't use Spack's
package names. I'm tired of facepalming....
While I was there, I touched up the prose about activating the new
Python packages; activating a Python package doesn't add anything to
your PYTHONPATH, it links things into a directory that's *already* on
your PYTHONPATH. Note that this all presupposes that you're using
that same Python....
Extensions have been available for a while, and the overall design
seems solid enough to keep supporting extensions
without losing backward compatibility.