Relative paths in views have been broken since #17608 or earlier.
- [x] Fix by passing the base path of the environment into the `ViewDescriptor`.
Relative paths are calculated from this path.
* add tutorial setup script to share/spack
* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
- now works on t2.micro, t2.small, and m instances
- apt-get needs retries around it to work
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
A bug was introduced in #13100 where ChildErrors would be redundantly
printed when raised during a build. We should eventually revisit error
handling in builds and figure out what the right separation of
responsibilities is for distributed builds, but for now just skip
printing.
- [x] SpackErrors were designed to be printed by the forked process, not
by the parent, so check if they've already been printed.
- [x] update tests
Fixes #17299
Cray Shasta systems appear to use an unmodified SLES or other Linux operating system on the backend (like Cray "Cluster" systems and unlike Cray "XC40" systems that use CNL).
This updates the CNL version detection to properly note that this is the underlying OS instead of CNL, and to delegate to LinuxDistro.
* environment-views: fix bug where missing recipe/repo breaks env commands
When a recipe or a repo has been removed from Spack and an environment
is active, it causes the view activation to crash Spack before any
commands can be executed. Further, the error message is not at all clear
in explaining the issue.
This forces view regeneration to always start from scratch to avoid the
missing package recipes, and defaults add_view=False in main for views activated
by the `spack -e` option.
* add messages to env status and deactivate
Warn users that a view may be corrupt when deactivating an environment
or checking its status while active. Updated message for activate.
* tests for view checking
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* switch from bool to int debug levels
* Added debug options and changed lock logging to use more detailed values
* Limit installer and timestamp PIDs to standard debug output
* Reduced verbosity of fetch/stage/install output, changing most to debug level 1
* Combine lock log methods; change build process install to debug
* Changed binary cache install messages to extraction messages
* bugfix: make compiler preferences slightly saner
This fixes two issues with the way we currently select compilers.
If multiple compilers have the same "id" (os/arch/compiler/version), we
currently prefer them by picking the one with the most supported
languages. This can have some surprising effects:
* If you have no `gfortran` but you have `gfortran-8`, you can detect
`clang` that has no configured C compiler -- just `f77` and `f90`. This
happens frequently on macOS with homebrew. The bug is due to some
kludginess about the way we detect mixed `clang`/`gfortran`.
* We can prefer suffixed versions of compilers to non-suffixed versions,
which means we may select `clang-gpu` over `clang` at LLNL. But,
`clang-gpu` is not actually clang, and it can break builds. We should
prefer `clang` if it's available.
- [x] prefer compilers that have C compilers and prefer no name variation
to variation.
* tests: add test for which()
Apple's gcc is really clang. We previously ignored it by default but
there was a regression in #17110.
Originally we checked for all clang versions with this, but I know of
none other than `gcc` on macOS that actually does this, so limiting to
`apple-clang` should be ok.
- [x] Fix check for `apple-clang` in `gcc.py` to use version detection
from `spack.compilers.apple_clang`
The `spack-build-env.txt` file may contain many secrets, but the obvious one is the private signing key in `SPACK_SIGNING_KEY`. This file is nonetheless uploaded as a build artifact to gitlab. For anyone running CI on a public version of Gitlab this is a major security problem. Even for private Gitlab instances it can be very problematic.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* llvm-flang Only build offload code if cuda enabled
The current version executes `cmake(*args)` always as part of the post install. If device offload is not part of the build, this results in referencing `args` without it being set and the error:
```
==> Error: UnboundLocalError: local variable 'args' referenced before assignment
```
Looking at the previous version of `llvm-package.py`, this whole routine appears to be required only for offload, so indent `cmake/make/install` to be under the `if`.
* Update package.py
Add comment
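A minimal package.py-style sketch of the guarded post-install step described above; the variant name, build directory, and cmake arguments are assumptions rather than the package's exact code:
```
@run_after('install')
def post_install(self):
    # only configure/build/install the offload pieces when CUDA is enabled,
    # so `args` is never referenced without being set
    if '+cuda' not in self.spec:
        return
    args = ['-DCMAKE_BUILD_TYPE=Release']  # placeholder arguments
    with working_dir('offload-build', create=True):
        cmake('..', *args)
        make()
        make('install')
```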
* macOS build tests
- Run on PRs that modify the YAML file of the workflow
- Don't clone Spack, since we are in the Spack repo now
* Try to add opengl to configuration to build jupyter
* fixup
Spack did not support usage of the `--config-scope` option in
combination with an environment: In `lib/spack/spack/main.py`,
`spack.config.command_line_scopes` is set equal to any config scopes
passed by the `--config-scope` option. However, this is done after
activating an environment. In the process of activating an environment,
the `spack.config.config` singleton is instantiated, so later setting of
`spack.config.command_line_scopes` is ignored.
This commit sets command line scopes before activating an environment to
ensure that they are included in the configuration.
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
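A hedged sketch of the ordering fix described above (simplified; the function and argument names are assumptions, not the exact code in `lib/spack/spack/main.py`):
```
import spack.cmd
import spack.config
import spack.environment as ev


def _setup_scopes_then_environment(args):
    # set command-line scopes *before* activating any environment, because
    # activation instantiates the spack.config.config singleton
    spack.config.command_line_scopes = args.config_scopes or []
    env = spack.cmd.find_environment(args)
    if env:
        ev.activate(env)
```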
* Add new versions of texlive and poppler.
* Add new versions of harfbuzz, which also relocated its source location to GitHub.
* Update var/spack/repos/builtin/packages/harfbuzz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Restore deleted url line in harfbuzz.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Addition of Chainmap to satisfy Maestro dependency.
* Additional versions and dependencies for Maestro.
* Updated URL to point to pypi.
* Updates to chainmap hashes.
* Updates to pull version from PyPI.
* Corrections to flake8 errors.
* Stricter restrictions on Python versioning.
Maestro actually supports Python 3.5 and later.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Only install chainmap for Python2 versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removal of the setuptools Python conditional.
* Removal of version constraints on setuptools.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
GCC 4.8.5 on rhel6:
```
utext.cpp:572:5: error: 'max_align_t' in namespace 'std' does not name a
type
std::max_align_t extension;
^
utext.cpp: In function 'UText* utext_setup_67(UText*, int32_t,
UErrorCode*)':
utext.cpp:587:73: error: 'max_align_t' is not a member of 'std'
spaceRequired = sizeof(ExtendedUText) + extraSpace -
sizeof(std::max_align_t);
^
utext.cpp:587:73: note: suggested alternative:
In file included from
/projects/spack/opt/spack/gcc-4.4.7/gcc/6ln2t7b/include/c++/4.8.5/cstddef:42:0,
from utext.cpp:19:
/projects/spack/opt/spack/gcc-4.4.7/gcc/6ln2t7b/lib/gcc/x86_64-unknown-linux-gnu/4.8.5/include/stddef.h:
425:3: note: 'max_align_t'
} max_align_t;
^
utext.cpp:598:57: error: 'struct ExtendedUText' has no member named
'extension'
ut->pExtra = &((ExtendedUText *)ut)->extension;
^
g++ ... loadednormalizer2impl.cpp
g++ ... chariter.cpp
```
* Initial version of PySCF.
* Add master branch to xcfun library
* PySCF only compatible with specific commit of xcfun library
* Update var/spack/repos/builtin/packages/py-pyscf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyscf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Revert "PySCF only compatible with specific commit of xcfun library"
This reverts commit 8296005400.
* Revert "Add master branch to xcfun library"
This reverts commit f2b6998931.
* Issue a conflict for the xcfun library version rather than relying on a random commit.
* Add version xcfun 2.0.0a2 which is needed by PySCF.
* Remove xcfun conflict and express dependency more explicitly. Add comment as to why this is necessary.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* wannier90: add versions 3.0.0 and 3.1.0 and 'shared' variant
Added versions 3.0.0 and 3.1.0
Added shared variant
Added url_for_version function as versions less than 3 are from the
wannier.org site and versions 3 and up are from github.com
Added the MPI libraries to the list of libs substituted into the make.sys file
in place of @LIBS
Made it possible to build a shared object version of the library for versions
< 3 by filtering the src/Makefile.2 file (based off of the patch from a src rpm
from RHEL for version 2.0.1)
Create a modules directory in the install prefix root directory and copy the
Fortran .mod files there.
Set the MPIFC variable to the Spack Fortran MPI compiler wrapper.
* abinit: added 'wannier90' variant which enables building abinit with wannier90
Added wannier90 variant
Made abinit depend on the shared object ('shared') variant of
wannier90 if the wannier90 variant is selected
Add configure args for wannier90 libs, includes, and binaries and to
set MPIFC
set the dft-flavor to wannier90 when wannier90 is enabled and only
set the dft flavor to 'atompaw+libxc' if wannier90 is not selected
* Update var/spack/repos/builtin/packages/abinit/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* incorporated bbecker's suggestion for making the strings less ugly!
* incorporated bbecker's suggestion to fix the logic for picking which
"DFT flavor" configure argument to pass.
If the wannier variant is enabled, it passes --with-dft-flavor=wannier90
to configure, otherwise it passes --with-dft-flavor=atompaw+libxc to configure
* Changed to using plain strings
* Fixed version tests
* incorporated @adamjstewart's fix for testing if the major version is > 2
* incorporated @adamjstewart's fix to check if mpi is enabled and
only set the MPIFC variable if it is.
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Only set MPIFC if '+mpi' is set
* incorporated fixes from @adamjstewart including:
- using the string=True argument to filter_file (and removed the unneeded
escapes)
- changing the url to the github location
- fixing the version checks
- building a libwannier.dylib on darwin
* incorporated fixes suggested by @adamjstewart including:
- using the string=True argument to filter_file and cleaned up the escapes
- only pass the MPIFC argument to configure when '+mpi' is set
- changed the url to the github site for Wannier90
- fixed the version checks
- build a 'libwannier.dylib' file when building the shared variant on darwin
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* moved a configure argument from its own '+mpi' check to under the lower one
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Cleaned up syntax as suggested by @adamjstewart
It looks *so much better* now! Thanks!
* removed unneeded import of 'find' from 'llnl.util.filesystem' package
as suggested by @adamjstewart
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* incorporated changes from @adamjstewart
changed check to "if '@:2 +shared' in spec:" instead of a nested check of '@:2' and
'+shared'
removed unneeded joins used in filter_file and spliced the list of objs directly into
the filter_file call
used the dso_suffix instead of testing for darwin to determine the name of the
shared library
* removed whitespace from blank line
* fixed bug with '../../wannier90.x: .*' not being treated as a regexp. Thanks Adam!
* fixed missing whitespace when modifying Makefile.2
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
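A hedged sketch of the url_for_version split described above; the URL patterns are illustrative, not copied verbatim from the package:
```
def url_for_version(self, version):
    # versions 3 and up are hosted on github.com, older releases on wannier.org
    if version >= Version('3.0.0'):
        return 'https://github.com/wannier-developers/wannier90/archive/v{0}.tar.gz'.format(version)
    return 'http://www.wannier.org/code/wannier90-{0}.tar.gz'.format(version)
```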
* New package: OpenLoops
* install() for openloops
* Working OpenLoops recipe
* Flake-8
* Only copy collection file if required; add clarification to num_jobs
* Add __future__ import just in case
* Fix missing space
* Remove __future__ import
* Changes from review, pt. 1
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Replace print() with write()
* Flake-8
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* vasp: New package.
* Remove unneeded `#noqa`
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removed a completely needless tty.debug()
* Add compiler conflicts() and minute fixes
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add apcomp package
* add maintainers
* flake8
* Update var/spack/repos/builtin/packages/apcomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* review suggestions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* icu4c: Add new versions for older releases.
The old URLs for versions 60.1, 58.2 and 57.1 do not work anymore, so add the versions available on GitHub.
The old versions are kept for reference (cf. #15896).
* icu4c: Add versions 66.1 and 67.1.
* icu4c: Fix compilation of versions 58 and 59 with recent glibc.
This matches the current latest version of protobuf in Spack.
Generally the version of py-protobuf and protobuf should match,
but this constraint is not currently recorded in py-protobuf.
For normal users, `-o` or `--no-same-owner` (GNU extension) is
the default behavior, but for the root user, `tar` attempts to preserve
the ownership from the tarball.
This makes `tar` use `-o` all the time. This should improve untarring
files owned by users not available in rootless Docker builds.
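A minimal sketch of the behavior described above, assuming Spack's `Executable`-style invocation (the real call sits somewhere in Spack's decompression/staging code):
```
from spack.util.executable import which

tar = which('tar', required=True)
# always pass '-o' (--no-same-owner) so a root user does not try to restore
# the ownership recorded in the tarball
tar('-oxf', 'archive.tar.gz')
```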
* gdk-pixbuf: Add new stable versions.
* gdk-pixbuf: Add a missing dependency on libx11.
Also add a variant disabled by default to make it optional since it is considered deprecated
(cf. 3362e94c25).
* Added new versions to magics and began to set not-so-optional netcdf dependency
* Added enforced netcdf dependency
* Fix also works for version 4.1.0
The error message was not updated when the behavior of Spack environments
was changed to not automatically activate the local environment in #17258.
The previous error message no longer makes sense.
When Spack installs a package, it stores repository package.py files
for it and all of its dependencies - any package with a Spack metadata
directory in its installation prefix.
It turns out this was too broad: this ends up including external
packages installed by Spack (e.g. installed by another Spack instance).
Currently Spack doesn't store the namespace properly for such packages,
so even though the package file could be fetched from the external,
Spack is unable to locate it.
This commit avoids the issue by skipping any attempt to locate and copy
from the package repository of externals, regardless of whether they
have a Spack repo directory.
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.
Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.
This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.
- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
* fix binutils deptype for gcc
binutils needs to be a run dependency of gcc
* Fix gcc+binutils build on RHEL7+
static-libstdc++ is not available with system gcc.
Anyway, as it is for bootstrapping, we do not really mind depending on
a shared libstdc++.
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
Spack was attempting to calculate abspath on the located config.guess
path even when it was not found (None); this commit skips the abspath
calculation when config.guess is not found.
* bbcp: Update the URLs to use HTTPS.
The HTTP URLs do not work anymore.
* bbcp: Add missing libnsl dependency.
* bbcp: Rename the git-based version to match the branch name.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pysam: add LDFLAGS to curl
* Update var/spack/repos/builtin/packages/py-pysam/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: vbfnlo
* Add new package: vbfnlo
* Add recipe for looptools
* Add patch for looptools
* LoopTools: patch not needed (fixed by developers without changing version)
* Remove patch file as well
* Update package.py
* Update package.py
* Fix vbfnlo recipe for old version
Co-authored-by: iarspider <iarpsider@gmail.com>
* new package: ligra
* setup run environment
* tidy up
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`gcc` 9 and above have more warnings that break the `flatcc` build by default, because `-Werror` is enabled. This loosens the build up so that we can build with more compilers in Spack.
- [x] Add `-DFLATCC_ALLOW_WERROR=OFF` to `flatcc` CMake arguments
Co-authored-by: Frank Willmore <willmore@anl.gov>
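A minimal sketch of the CMake argument addition named in the checklist above (the real package may pass additional arguments):
```
def cmake_args(self):
    # relax -Werror so the extra warnings from gcc 9+ do not fail the build
    return ['-DFLATCC_ALLOW_WERROR=OFF']
```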
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty, which does not have a patchelf package. The script will skip building on macOS
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build'), relying instead on a
test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
fixes #17396
This prevents the class attribute from being inherited and
saves current maintainers from becoming the default
maintainers of every CUDA package.
We got rid of `master` after #17377, but users still want a way to get
the latest stable release without knowing its number.
We've added a `releases/latest` tag to replace what was once `master`.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Fixes #16478
This allows an uninstall to proceed even when encountering pre-uninstall
hook failures if the user chooses the --force option for the uninstall.
This also prevents post-uninstall hook failures from raising an exception,
which would terminate a sequence of uninstalls. This isn't likely essential
for #16478, but I think overall it will improve the user experience: if
the post-uninstall hook fails, there isn't much point in terminating a
sequence of spec uninstalls because at the point where the post-uninstall
hook is run, the spec has already been removed from the database (so it
will never have another chance to run).
Notes:
* When doing spack uninstall -a, certain pre/post-uninstall hooks aren't
important to run, but this isn't easy to track with the current model.
For example: if you are uninstalling a package and its extension, you
do not have to do the activation check for the extension.
* This doesn't handle the uninstallation of specs that are not in the DB,
so it may leave "dangling" specs in the installation prefix
This PR creates a new spack package for
mumax: GPU accelerated micromagnetic simulator.
This uses the current beta version because:
- the latest stable release is somewhat dated (~2018)
- the beta is the only version that supports recent GPU kernels
- [x] Remove references to `master` branch
- [x] Document how release branches are structured
- [x] Document how to make a major release
- [x] Document how to make a point release
- [x] Document how to do work in our release projects
* Add Rivet and YODA
* Add patches
* Flake-8
* Set level for Rivet patches
* Syntax fix
* Fix dependencies of Rivet
* Update var/spack/repos/builtin/packages/rivet/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The latest openblas version, 0.3.10, changed how Fortran libraries
are detected, and this broke Fujitsu compiler support.
This (new) openblas patch addresses that issue.
* dftbplus: New package.
* dftbplus: Addresses @adamjstewart's comments on PR #15191
* dftbplus: Fixes format() calls that slipped in previous commit.
* dftbplus: Appease flake8.
* dftbplus: Change 'url' and misc. fixes.
* Add a resource to do the job of './utils/get_opt_externals'
Also:
* Add url_for_version function
* Add Java to PATH for run environment
* Update `install` method to handle old and new version
Co-authored-by: lu64bag3 <gerald.mathias@lrz.de>
* examl +
* examl style fix
* examl flake8 fix
* Update var/spack/repos/builtin/packages/examl/package.py
using `working_dir`
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Move flake8 tests on Github Actions
* Move shell test to Github Actions
* Moved documentation build to Github Actions
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
* hbase: refine url, java and version
* Update var/spack/repos/builtin/packages/hbase/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Activate environment in container file
This PR will ensure that the container recipes will build the spack
environment by first activating the environment.
* Deactivate environment before environment collection
For Singularity, the environment must be deactivated before running the
command to collect the environment variables. This is because the
environment collection uses `spack env activate`.
* share/spack/setup-env.fish file to setup environment in fish shell
* setup-env.fish testing script
* Update share/spack/setup-env.fish
Co-Authored-By: Elsa Gonsiorowski, PhD <gonsie@me.com>
* Update share/spack/qa/setup-env-test.fish
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* updates completions using `spack commands --update-completion`
* added stderr-nocaret warning
* added fish shell tests to CI system
Co-authored-by: becker33 <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elsa Gonsiorowski, PhD <gonsie@me.com>
* [py-mdanalysis] new version and added dependencies
Original commit message:
Author: Andrew Elble <aweits@rit.edu>
Date: Thu Nov 14 08:35:14 2019 -0500
mdanalysis
* [py-mdanalysis] python is type build/run
* [py-mdanalysis] updated numpy version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] updated biopython version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] updated py-griddataformats version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] gsd only required after version 1.17.0 and requires gsd@1.4.0
* [py-mdanalysis] only requires mmtf-python after version 0.16.0 and requires version 1.0.0
* [py-mdanalysis] has required py-joblib since version 0.16
* [py-mdanalysis] updated py-scipy version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] updated py-matplotlib version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] has required py-mock since version 0.18.0
* [py-mdanalysis] py-scikit-learn only required after version 0.16.0 and only for +analysis variant
* [py-mdanalysis] Reordered and reformatted for readability
* [py-mdanalysis] flake8 fixes
* [py-mdanalysis] proactively adding version 1.0.0 while I'm here since major release
* [py-mdanalysis] fixing some forgotten colons
* [ruby] fixing path to gcc such that users can use gem to install native gems to their home directory
* [ruby] working on making flake8 happier
* [ruby] Line can't really be split cleanly. Enhancing flake8's calm.
ya learn something new every day...
* [ruby] line break where requested
* [ruby] make raw string
* [ruby] only running for x86_64-linux; everything else is untested
* [ruby] finding rbconfig.rb in a cross platform manner
* new package: GraphBlast
* polish
* add cuda_arch setup
* flake8
* the package requires cuda variant and dependency
* add comments
* define cuda_arch
* implement multiple and custom cuda arches
* tidy up, improve
* flake8
* improve style
* add variant description
* use patch method, add new version for latest commit building since master now fails
* remove gcc conflict, tidy up
* also indicate build range for boost
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Sinan81 <Sinan81@github>
py-mpi4py installs its header files at a difficult-to-predict location:
$prefix/lib/python-x.y/site-packages/mpi4py/include
With the new `headers` properties, dependent packages now have an easy
way to obtain this location:
spec['py-mpi4py'].headers.directories[0]
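A hedged sketch of how a dependent package might consume this property; the CMake variable name is illustrative:
```
def cmake_args(self):
    mpi4py_include = self.spec['py-mpi4py'].headers.directories[0]
    return ['-DMPI4PY_INCLUDE_DIR={0}'.format(mpi4py_include)]
```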
* cp2k: variant tuning `lmax` was broken
- `spack install cp2k lmax=6` now works
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: coin3d
* Update package.py
* Flake-8
* Update var/spack/repos/builtin/packages/coin3d/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add link-time dependencies
* Add configure flags for boost; remove version 4.0.0 (doesn't compile)
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cray: detect frontend compilers automatically
This commit permits detecting frontend compilers
automatically, with the exception of cce.
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* New package: fjcontrib + new variants for fastjet
* Flake-8
* Flake-8 once more
* Update package.py
* Allow choosing which plugins to build
Build all plugins by default.
* Flake-8
* Always build all plugins
* Update package.py
Co-authored-by: iarspider <iarpsider@gmail.com>
Running `spack containerize` with the example `spack.yaml` file fails
with an error that ends like so:
```
[george.hartzell@172-16-193-97 spack-explore-docker]$ spack containerize
[...]
File "/local_scratch/hartzell/tmp/spack-explore-docker/lib/spack/external/ruamel/yaml/scanner.py", line 165, in need_more_tokens
self.stale_possible_simple_keys()
File "/local_scratch/hartzell/tmp/spack-explore-docker/lib/spack/external/ruamel/yaml/scanner.py", line 309, in stale_possible_simple_keys
"could not find expected ':'", self.get_mark())
ruamel.yaml.scanner.ScannerError: while scanning a simple key
in "/local_scratch/hartzell/tmp/spack-explore-docker/spack.yaml", line 26, column 1
could not find expected ':'
in "/local_scratch/hartzell/tmp/spack-explore-docker/spack.yaml", line 28, column 5
```
Indenting the block string fixes the problem for me.
CentOS 7,
```
$ spack --version
0.14.2-1529-ec58f28c2
```
The Geant4 cmake check requires Qt5OpenGL_FOUND, so we must require
the Qt5 +opengl variant. If not, the cmake phase falls through to Qt4
and fails due to a missing Qt4::QtGui target.
In Geant4InterfaceOptions.cmake:
```
if(Qt5Core_FOUND
AND Qt5Gui_FOUND
AND Qt5Widgets_FOUND
AND Qt5OpenGL_FOUND
AND Qt5PrintSupport_FOUND)
```
Ref: https://github.com/Geant4/geant4/blob/master/cmake/Modules/Geant4InterfaceOptions.cmake#L90
(5baee230e93612916bcea11ebf822756cfa7282c, Import Geant4 10.6.0 source tree)
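A hedged sketch of the constraint implied above; the variant spelling (`+qt`) and version range are assumptions about the geant4 package:
```
# require an OpenGL-enabled Qt 5 so that Qt5OpenGL_FOUND is satisfied
depends_on('qt@5.0:+opengl', when='+qt')
```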
* redis: add config file from source code
* Update var/spack/repos/builtin/packages/redis/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* WarpX: Development Branch
Update the name of our development branch.
* WarpX version: develop keyword
development is not a "newest"-like keyword, but `master`/`develop`/`dev` are.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Renamed: develop version
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This is a general CMake CUDA language hint to use the CXX
compiler as the host compiler for NVCC. Seems like a good
default since we do not express the CUDA compiler in Spack
otherwise yet (e.g. no `self.compiler.cuda` or
`self.compiler.cudahostcxx`).
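A hedged sketch of the kind of hint described above for a CMake-based package; the exact placement and compiler spelling in Spack's build system may differ:
```
def cmake_args(self):
    # point CMake's CUDA language support at the C++ compiler as nvcc's host compiler
    return ['-DCMAKE_CUDA_HOST_COMPILER={0}'.format(self.compiler.cxx)]
```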
* env: no automatic activation
* Ensure ci rebuild jobs activate the environment (no longer automagic)
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* Start moving toward a json buildcache index
* Add spec and database index schemas
* Add a schema for buildcache spec.yaml files
* Provide a mode for database class to generate buildcache index
* Update db and ci tests to validate object w/ new schema
* Remove unused temporary upload-s3 command
* Use database class to generate buildcache index
* Do not generate index with each buildcache creation
* Make buildcache index mode into a couple of constructor args to Database class
* Use keyword args for _createtarball
* Parse new json index when we get specs from buildcache
Now that only one index file per mirror needs to be fetched in
order to have all the concrete specs for binaries available on the
mirror, we can just fetch and refresh the cached specs every time
instead of needing to use the '-f' flag to force re-reading.
* First fix for SPACK_DEPENDENCIES problem when doing setup
* Get rid of transitive include path in setup.
* Export SPACK_INCLUDE_DIRS into spconfig.py
* add buildcache create test
* add functionality and test to create buildcache from environment
* use env.concretized_user_specs rather than env.roots to get concretized specs, as suggested in review from becker33
* Allow `spack remove -f` and `spack uninstall` to work on matrices
Allow Environment.remove(force=True) to remove the concrete spec from the environment
even when the user spec cannot be removed because it is in a matrix.
* Ascent: ~python default
Packages that build optional python bindings do not build them by default in Spack:
https://spack.readthedocs.io/en/latest/packaging_guide.html#variant-names
This reduces long dependency trees and build times, e.g. for apps just using C/C++/Fortran bindings of a library.
* Conduit: ~python default
Packages that build optional python bindings do not build them by
default in Spack:
https://spack.readthedocs.io/en/latest/packaging_guide.html#variant-names
This reduces long dependency trees and build times, e.g. for apps
just using C/C++/Fortran bindings of a library.
* Separate Apple Clang from LLVM Clang
Apple Clang is a compiler of its own. All places
referring to "-apple" suffix have been updated.
* Hack to use a dash in 'apple-clang'
To be able to use autodoc from Sphinx we need
a valid Python name for the module that contains
Apple's Clang code.
* Updated packages to account for the existence of apple-clang
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added unit test for XCode related functions
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* short-circuit is_activated check when the extendee is installed upstream
* add test for checking activation status of packages with an extendee installed upstream
spack config add <value>: add the nested value <value> to the specified configuration scope
spack config remove/rm: remove the specified configuration from the relevant scope
The rose library uses the `strtoflt128` and `quadmath_snprintf`
functions. In order to successfully link the rose library, chill must
also link the GCC libquadmath library to resolve the two functions. This
patch changes the chill build to include this library.
Chill will also not compile unless headers from the gmp and isl
libraries are found in the include path. Two patches - one each for gmp
and isl - modify the chill build process to add options to specify those
paths. These options follow the same pattern as the BOOSTHOME
and ROSEHOME options that already exist in the chill build process.
Because of the addition of GMPHOME and ISLHOME options, build
requirements for gmp and isl are also added.
* Update package.py
* edit conflict when adding package 'meam'
The USER-MEAMC fully replaces the MEAM package, which has been removed from LAMMPS after the 12 December 2018 version.
* gromacs: fix fftw dependency
Only depend on fftw+mpi when gromacs is built with mpi,
and depend on fftw~mpi otherwise.
* gromacs: fix cmake dependency
The master branch depends on cmake 3.11 (as specified in CMakeLists.txt).
The cmake dependency is also bumped to 3.11 when Fujitsu compilers are used,
in order to fix OpenMP detection.
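A hedged sketch of the dependency constraints described above (the `when=` spellings are assumptions):
```
depends_on('fftw+mpi', when='+mpi')
depends_on('fftw~mpi', when='~mpi')
depends_on('cmake@3.11:', when='@master', type='build')
depends_on('cmake@3.11:', when='%fj', type='build')
```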
* Some minor fixes to set_permissions() in file_permissions.py
The set_permissions() routine claims to prevent users from creating
world writable suid binaries. However, it seems to only be checking
for/preventing group writable suid binaries.
This patch modifies the routine to check for both world and group
writable suid binaries, and complain appropriately.
* permissions.py: Add test to check blocks world writable SUID files
The original test_chmod_rejects_group_writable_suid tested
that the set_permissions() function in
lib/spack/spack/util/file_permissions.py
would raise an exception if changed permission on a file with
both SUID and SGID plus sticky bits is chmod-ed to g+rwx and o+rwx.
I have modified it so that it more narrowly tests that a file with SUID
(and no SGID or sticky bit) set is chmod-ed to g+w.
I have added a second test test_chmod_rejects_world_writable_suid
that checks that exception is raised if an SUID file is chmod-ed
to o+w
* file_permissions.py: Raise an exception when trying to make an SGID file world writable
Updated set_permissions() in file_permissions.py to also raise
an exception when trying to make an SGID file world writable, and
added a corresponding unit test as well.
* Remove debugging prints from permissions.py
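A hedged sketch of the kind of check described above, not the exact implementation in lib/spack/spack/util/file_permissions.py:
```
import os
import stat


class InvalidPermissionsError(Exception):
    pass


def check_new_permissions(path, perms):
    # refuse to make an SUID file group or world writable, or an SGID file
    # world writable
    mode = os.stat(path).st_mode
    if mode & stat.S_ISUID and perms & (stat.S_IWGRP | stat.S_IWOTH):
        raise InvalidPermissionsError('SUID file would become group/world writable: ' + path)
    if mode & stat.S_ISGID and perms & stat.S_IWOTH:
        raise InvalidPermissionsError('SGID file would become world writable: ' + path)
```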
* Module index should not be unconditionally overwritten
Uncovered after we switched our CI to generate modules for packages
one-by-one rather than in bulk. This overwrote a complete module index
with an index with a single entry, and broke our downstream Spack
instances that needed the upstream module index.
I get the following error message, if I do not use editline from the system.
```
>> 3090 Undefined symbols for architecture x86_64:
3091 "_tgetent", referenced from:
3092 _terminal_set in libedit.a(terminal.c.o)
3093 "_tgetflag", referenced from:
3094 _terminal_set in libedit.a(terminal.c.o)
3095 "_tgetnum", referenced from:
3096 _terminal_set in libedit.a(terminal.c.o)
...
3110 _terminal_insertwrite in libedit.a(terminal.c.o)
3111 _terminal_clear_EOL in libedit.a(terminal.c.o)
3112 _terminal_clear_screen in libedit.a(terminal.c.o)
3113 _terminal_beep in libedit.a(terminal.c.o)
3114 ...
3115 ld: symbol(s) not found for architecture x86_64
```
* Added unit tests to Github Actions
* Set user e-mail and name for git tests to succeed
* Simplify setup.sh logic
* Replicate Travis script on Github Actions
* Update flags since '.' is not allowed
* Added badge, simplified workflow
* Remove pinning of coverage
* Remove unit tests run on Github Actions from Travis
Bug fix release:
4.0.4 -- June, 2020
-----------------------
- Fix a memory patcher issue intercepting shmat and shmdt. This was
observed on RHEL 8.x ppc64le (see README for more info).
- Fix an illegal access issue caught using gcc's address sanitizer.
Thanks to Georg Geiser for reporting.
- Add checks to avoid conflicts with a libevent library shipped with LSF.
- Switch to linking against libevent_core rather than libevent, if present.
- Add improved support for UCX 1.9 and later.
- Fix an ABI compatibility issue with the Fortran 2008 bindings.
Thanks to Alastair McKinstry for reporting.
- Fix an issue with rpath of /usr/lib64 when building OMPI on
systems with Lustre. Thanks to David Shrader for reporting.
- Fix a memory leak occurring with certain MPI RMA operations.
- Fix an issue with ORTE's mapping of MPI processes to resources.
Thanks to Alex Margolin for reporting and providing a fix.
- Correct a problem with incorrect error codes being returned
by OMPI MPI_T functions.
- Fix an issue with debugger tools not being able to attach
to mpirun more than once. Thanks to Gregory Lee for reporting.
- Fix an issue with the Fortran compiler wrappers when using
NAG compilers. Thanks to Peter Brady for reporting.
- Fix an issue with the ORTE ssh based process launcher at scale.
Thanks to Benjamín Hernández for reporting.
- Address an issue when using shared MPI I/O operations. OMPIO will
now successfully return from the file open statement but will
raise an error if the file system does not supported shared I/O
operations. Thanks to Romain Hild for reporting.
- Fix an issue with MPI_WIN_DETACH. Thanks to Thomas Naughton for reporting.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* Fix how Conduit detects that the MPI compiler is the same as the
CC compiler, and be more careful when setting the MPI compilers to
the Cray PE system compilers.
* Remove unnecessary push of the MPI compilers to the C compilers for Hydrogen.
* Added version 8.2.0, added dependency for documentation build, added variants for documentation
* Renamed variant '+man' to '+html'
Co-authored-by: Alexander Knieps <a.knieps@fz-juelich.de>
- Parallel HDF5 isn't required -- the comment seems to be about a
transitive dependency with pnetcdf.
- Boost usage should respect the variant, not automatically be reenabled
when choosing DTK.
Versions of py-tensorflow between versions 1.1 and 1.14 need a patch to
avoid an import error on the cloud package even if built without support
for the cloud package.
* bazel: Update for use with Fujitsu compiler
* bazel: Fix for use with Fujitsu compiler
* bazel: Fix flake8 error
* bazel: add conflicts setting for use with Fujitsu compiler
* fix flake8 error
* fix flake8 error
In Python 3.8, the reserved "tp_print" slot was changed from a function
pointer to a number, which broke the Python wrapping code in vtk@8
(causing "cannot convert 'std::nullptr_t' to 'Py_ssize_t'" errors in
various places). This is fixed in vtk@9.0.0.
This patch:
1) adds vtk@9.0.0
2) updates depends_on constraints to only use python@3.8: for vtk@9:
vtk@:8 depends on python@2, and vtk@8.0.1:8.9.9 depends on python@:3.7
3) Adds CMake flag VTK_PYTHON_VERSION=3 when using python@3 with vtk@9
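A hedged sketch of the constraints from items 2) and 3) above, written as they might appear inside the vtk package class; the exact ranges and variant spellings may differ:
```
depends_on('python@3.8:', when='@9:+python', type=('build', 'run'))
depends_on('python@:3.7', when='@8.0.1:8.9.9+python', type=('build', 'run'))

def cmake_args(self):
    args = []
    # build the Python wrapping against Python 3 for vtk 9
    if self.spec.satisfies('@9: +python ^python@3:'):
        args.append('-DVTK_PYTHON_VERSION=3')
    return args
```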
* python: adding a distutils fix to improve build compatibility for C++ extension modules (e.g. py-matplotlib)
* python: added C/C++ distutils patches for python@3.6:3.8
Allow Spack to build with ROOT as an external dependency by setting
LD_LIBRARY_PATH: given that the external package was not built by
Spack, dependents would not be able to locate libraries using RPATHs
when running ROOT binaries.
Specified Python to be v2.7 only, as Python3 support is not currently
implemented in chill.
Update chill dependency versions for the following libraries to the
specific versions:
* rose: v0.9.13.0
* bison: v3.4.2
Both rose and iegenlib are build time dependencies, but are also run
time dependencies. Added 'run' to the dependency type for both.
cuda: from 10.1 onward, the installer will crash if /tmp/cuda-installer.log
exists
Try to help if user owns the file, otherwise try to provide useful
info. Clean up the file post-install to try to avoid the whole issue.
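A hedged sketch of the post-install cleanup described above (not the package's exact code):
```
import os

import llnl.util.tty as tty

log_file = '/tmp/cuda-installer.log'
try:
    if os.path.exists(log_file):
        os.remove(log_file)
except OSError:
    tty.warn('Could not remove {0}; future CUDA installs may fail'.format(log_file))
```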
The release number in the README had not been updated since we did the
relicense to Apache-2.0 OR MIT in v0.12.0. LLNL-CODE-811652 is Spack's
new LLNL release number.
* save edits
* tidy up
* Update var/spack/repos/builtin/packages/py-lmfit/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add python version constraints
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@github>
* Add pygelf Python package
* Update ReFrame package version
* Address styling remarks
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Address PR remarks
* Remove setuptools runtime dependency
* Address PR remarks
* Address PR remarks
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
bazel uses gcc's -MF option to write dependencies to a
file. Post-compilation, bazel reads this file and makes some
determinations.
"Since gcc is given only relative paths on the command line,
non-system include paths here should never be absolute. If they
are, it's probably due to a non-hermetic #include, & we should stop
the build with an error."
Spack directly injects absolute paths, which appear in this file and
cause bazel to fail the build despite the fact that compilation
succeeded.
This patch disables this failure mode by default, and allows for it
to be turned back on by using the '~nodepfail' variant.
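A hedged sketch of the variant wiring described above; the patch file name is illustrative:
```
variant('nodepfail', default=True,
        description='Disable failing on absolute include paths in -MF dependency files')

# apply the patch that disables the failure mode only when the variant is enabled
patch('bazel_nodepfail.patch', when='+nodepfail')
```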
Whenever attempting to use any ncurses functionality within cscope, a
page fault would result within the ncurses library.
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7fad3cf in termattrs_sp () from .../lib/libncursesw.so.6
(gdb) bt
#0 0x00007ffff7fad3cf in termattrs_sp () from .../lib/libncursesw.so.6
#1 0x00007ffff7faa794 in _nc_setupscreen_sp () from .../lib/libncursesw.so.6
#2 0x00007ffff7fa614c in newterm_sp () from .../lib/libncursesw.so.6
#3 0x00007ffff7fa65b9 in newterm () from .../lib/libncursesw.so.6
#4 0x00007ffff7fa2970 in initscr () from .../lib/libncursesw.so.6
#5 0x0000000000403dc2 in main (argc=<optimized out>, argv=0x7fffffffcea8) at main.c:574
This is due to a conflict between libtinfo.so and libtinfow.so. Both are
linked into cscope:
$ ldd $(which cscope)
/bin/bash: .../lib/libtinfo.so.6: no version information available (required by /bin/bash)
linux-vdso.so.1 (0x00007fff5dbcb000)
libncursesw.so.6 => .../lib/libncursesw.so.6 (0x00007f435cc69000)
libtinfo.so.6 => .../lib/libtinfo.so.6 (0x00007f435cc2c000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f435ca29000)
libtinfow.so.6 => .../lib/libtinfow.so.6 (0x00007f435c9e8000)
/lib64/ld-linux-x86-64.so.2 (0x00007f435cca7000)
Specifically linking libtinfow.so instead of libtinfo.so resolves the
issue.
All instances of '...' above represent the path to the installed ncurses
for Spack.
* Changed the 'include' config section to use 'substitute_path_variables' to allow for Spack config variables to be used (e.g. $spack).
* Fixed a bug with 'include' section path expansion and added a test case for 'include' paths with embedded config variables.
* Adding a package for wcs.
* Turning on sbml for wcs.
* The cpp flag needs to be available for wcs.
* Wcs needs SBML to properly define the namespace.
* Flake8 fixes.
* Fixing the help string with the description.
* Changing cpp to use the new variant syntax.
* Fixing flake8 errors.
* Forgot to delete one last fixme comment.
* Spack "develop" needs to link to repo "devel"
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Robert Blake <rob.c.blake.3@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Cray: fix Blue Waters support
* pkg-config env vars needed on Blue Waters
* cray platform: fix support for user-built MPI on cray machines
* reintroduce cray environment cleaning behind cnl version guard
* cray platform: fix support for user-built MPI on cray machines
Co-authored-by: Gregory <becker33@llnl.gov>
* Add new package: py-devlib
* Update var/spack/repos/builtin/packages/py-devlib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add depends
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [hsf-cmaketools] add package
* fix formatting
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [hsf-cmaketools] remove cmake_prefix_path which is set already by spack
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
gcc 9.3.0 and glibc 2.31 (found in the base install of Ubuntu 20.04)
cause the gcc package to error during build with the error:
"size of array 'assertion_failed__####' is negative"
Previous to this fix, the error was resolved for v8.1.0 <= gcc <= v9.2.0
via two patches.
This fix backports those patches for v5.3.0 <= gcc <= v7.4.0
Potentially these patches need to be backported to versions of gcc
before v5.3.0, but other compile issues need to be resolved for earlier
versions of gcc first.
Fixes #16968
* intel-tbb: Fix for #16938 add custom libs method
Override the libs method to look for libraries of the form libtbb*
(instead of the inherited one, which looks for libintel-tbb*); see the sketch below
* Fixing pre-existing flake8 issues
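A hedged sketch of the libs override mentioned above (not the package's exact code):
```
@property
def libs(self):
    # the installed libraries are named libtbb*, not libintel-tbb*
    return find_libraries('libtbb*', root=self.prefix, recursive=True)
```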
* py-flake8: add version 3.8.2
* This version depends on different versions of py-pycodestyle
and py-pyflakes
* When built for python@:3.7, this depends on the
py-importlib-metadata backport library
* py-pycodestyle: add version 2.6.0
* py-pyflakes: add version 2.2.0
Builds can be stopped before the final install phase due to user requests. Those builds
should not be registered as installed in the database.
We had code intended to handle this but:
1. It caught the wrong type of exception
2. We were catching these exceptions to suppress them at a lower level in the stack
This PR allows the StopIteration to propagate through a ChildError, and catches it
properly. Also added to an existing test to prevent regression.
This fixes a fork bomb in `spack versions`. Recursive generation of pools
to scrape URLs in `_spider` was creating large numbers of processes.
Instead of recursively creating process pools, we now use a single
`ThreadPool` with a concurrency limit.
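A minimal sketch of that approach, assuming a hypothetical per-URL worker function:
```
from multiprocessing.pool import ThreadPool


def spider_page(url):
    # placeholder worker: in Spack this would fetch and parse one page
    return url


def spider_all(urls, concurrency=16):
    # one bounded thread pool instead of recursively created process pools
    pool = ThreadPool(processes=concurrency)
    try:
        return pool.map(spider_page, urls)
    finally:
        pool.terminate()
        pool.join()
```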
More on the issue: having ~10 users running `spack versions` at the same
time on front-end nodes caused a kernel lockup due to the high number
of sockets opened (sys-admins report ~210k distributed over 3 nodes).
Users were internal, so they had ulimit -n set to ~70k.
The forking behavior could be observed by just running:
$ spack versions boost
and checking the number of processes spawned. Number of processes
per se was not the issue, but each one of them opens a socket
which can stress `iptables`.
In the original issue the kernel watchdog was reporting:
Message from syslogd@login03 at May 19 12:01:30 ...
kernel:Watchdog CPU:110 Hard LOCKUP
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
Stratimikos is an optional dependency for our project. It depends on
Thyra, and thyra has subpackages that should be enabled based on
tpetra/epetra/epetraext.
* gnuplot: Fix for #16928
Dependency for --with-wx flag mistyped (should be wxwidgets)
* Revert "gnuplot: Fix for #16928"
This reverts commit 2b85814e5c.
* gnuplot: Fix for #16928
Dependency spec for --with-wx flag mistyped (should be wxwidgets, not
wx)
* add an --exclude-file option to 'spack mirror create' which allows a user to specify a file of specs to exclude when creating a mirror. This is anticipated to be useful especially when using the '--all' option
* allow specifying number of versions when mirroring all packages
* when mirroring all specs within an environment, include dependencies of root specs
* add '--exclude-specs' option to allow user to specify that specs should be excluded on the command line
* add test for excluding specs
fixes #12527
Mention that specs can be uninstalled by hash also in
the help message. Reference `spack gc` in case people
are looking for ways to clean the store from build time
dependencies.
Use "spec" instead of "package" to avoid ambiguity in
the error message.
* Adding a module for sbml.
* Adding support for all the languages.
* Update var/spack/repos/builtin/packages/sbml/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/sbml/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Robert Blake <rob.c.blake.3@gmail.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Unify tests for compiler command in the same file
Tests for the "spack compiler" command were previously
scattered among different files.
* Tests should use mutable_config, since they modify the compiler list
Because of the way abstract variants are implemented, the following
spec matrix does not work as intended:
```
matrix:
- [foo]
- [bar=a, bar=b]
exclude:
- bar=a
```
because abstract variants always satisfy any variant of the same
name, regardless of values.
This PR converts abstract variants to whatever their appropriate
type is before running satisfaction checks for the excludes clause
in a matrix.
fixes #16841
Now that the version number of GCC reached double digits, an update
to the regex is needed to recognize gcc-10 as an executable to be
inspected when searching for compilers.
The current package does not work. Also, several ocaml versions, such
as 4.09.0, do not compile. Updated to the new ocaml version 4.10.0,
which is now a prerequisite.
* Add new version of py-astroid
* Add new version of py-lazy-object-proxy
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Update both main and data packages. Data versions have not changed
in the new version.
Provide a new python variant to build Python bindings for 10.6.2 and
newer. Add dependencies on python and Boost+python required by the variant.
* make_link_relative: added docstring
* make_elf_binaries_relative: added docstring, unit tests
* raise_if_not_relocatable: added docstring, added unit test for exceptional case
* relocate_links: removed unused arguments, added docstring and comments
Also fixed a possible bug that was issuing a spurious
warning when a file was relocated successfully
* relocate_text: added docstring and comments, renamed arguments
* relocate_text_bin: added docstring and comments, renamed arguments, unit tests
- add future-proofing for wmake rules locations:
Accept wmake/rules/{ARCH}{COMP} or wmake/rules/{ARCH}/{COMP}
- compiler option is now '-spack' instead of 'RpathOpt'
which now seems to be a bit harsh on the eyes.
Now have compilations such as 'linux64GccDPInt32-spack',
which is moderately easier to read.
- add OpenFOAM 1912, patch 200506
STYLE: adjust for new flake8 indentation rules
* openmpi: get rid of implicit system dependencies
* Python 2 compatibility.
* Rename pbspro to openpbs and revert packages.yaml.
* Remove virtual package 'sendmail'.
Problem: when calling `static_to_shared_library` on the `cray` arch, it
produces a non-sensical compiler command with no input files. For
example, when installing lua@5.2.4, it produced:
'gcc -lm -ldl -o /big-long-spack-path/liblua.so.5.2.4'
Solution: do the same thing on `cray` that is done for `linux`
fixes#16725
The dap, jna, pnetcdf, netcdf4, and ncgen4 variants added in #16047
are _not_ supported by the configure script for netcdf-cxx4 package
(These appear to be configure args for netcdf-c package).
* account for schema validation errors where the associated instance doesn't have a line number
* fix unrelated flake error (but it must be fixed because this PR touches this file and the flake rules have been updated since the last edit to this file)
Fujitsu-MPI wrapper commands aren't recognized by cmake's 'FindMPI'
module. If we are using the Fujitsu compiler and Fujitsu MPI,
specify the MPI path information explicitly.
Flux requires `build` for python and many of the python packages because
it builds python bindings. Beyond the bindings, the Flux front-end
commands now use python too, hence the `run` type. Finally, Flux's
`pymod` module is linked against the python interpreter, so the package
requires a `link` dependency on python too.
* IWYU: fix 0.14 build
The CMake patch used for 0.13 hadn't been applied to the master when
0.14 was released, and this version of IWYU requires C++14 or higher.
* Flake8
```
'/var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/s3j/spack-stage/spack-stage-valgrind-3.15.0-mtir7ubjz7mqmjbb7bogze2qm35hl4ze/spack-src/configure' '--prefix=/ornldev/code/spack/opt/spack/clang-11.0.0-apple/valgrind/mtir7ub' '--enable-only64bit' '--build=amd64-darwin'
1 error found in build log:
43 checking host system type... x86_64-pc-darwin
44 checking for a supported CPU... ok (x86_64)
45 checking for a 64-bit only build... yes
46 checking for a 32-bit only build... no
47 checking for a supported OS... ok (darwin)
48 checking for the kernel version... unsupported (18.7.0)
>> 49 configure: error: Valgrind works on Darwin 10.x, 11.x, 12.x, 13.x, 14.x, 15.x, 16.x and 17.x (Mac OS X 10.6/7/8/9/10/11 and macOS 10.12/13)
```
* ffr: add a flag to use fixed format, in which one line of source code may be up to 255 characters long, when building with the Fujitsu compiler.
* ffr: changed to elif.
The dev branch of UnifyFS now depends on the latest release of
GOTCHA, and will depend on future releases.
This updates our spackage to depend on the correct version of GOTCHA
depending on the version of UnifyFS being installed.
* Create VMD recipe
This is a new recipe to install VMD on Spack-managed hosts.
* Fix lint errors.
* Use plain Package
As per peer-review:
- Use Package to build
- Use configure to create a Makefile
- Use install to copy files to prefix directory
* Move VMD package to correct path, duh...
* Restructure description so first short paragraph can be used by module files.
* Add an empty line as suggested by peer-review. That's how you separate paragraphs.
* Remove extra spaces.
* Use setup_build_environment since that's where you're supposed to export OS variables. Thanks to peer-review for spotting this.
* Add Cubist (#16069)
* Add Cubist
* enhance recipe
* Not using OS module anymore
* remove white space
* Fix build shell
* make Flake8 happy
* use bash shell for build
* Convert it To MakefilePackage as per peer-review
* dbcsr: expose all options, check openblas feats (#16034)
* dbcsr: expose all options, check openblas feats
* dbcsr: use Ninja to build, ensure serialized tests
* dbcsr: add myself as maintainer
* MPark.Variant: GCC 7.3.1 Conflict (#16081)
* MPark.Variant: GCC 7.3.1 Conflict
Due to an ICE in this specific patch-release of GCC, compile
errors in downstream packages should be avoided with a clean
conflict.
* Fix superfluous spaces
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Fix typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Move VMD package to correct path, duh...
* Add an empty line as suggested by peer-review. That's how you separate paragraphs.
* New matlab versions (#16086)
* Add new version 1.1.1 (#16087)
* New package bonniepp added (#16091)
* openbabel: fix compilation errors (#16090)
- Disable maeparser as it is broken with CMake
- Added missing dependencies
* singularity: updated maintainer list (#16093)
* New version xrootd-4.11.3 (#16092)
* I added Gaussian 16. I also execute bsd/install to fix scripts instead of filtering them.
* revert VMD so only Gaussian is in my PR.
* I added myself as a package maintainer.
Co-authored-by: asmaahassan90 <31959389+asmaahassan90@users.noreply.github.com>
Co-authored-by: Tiziano Müller <tiziano.mueller@chem.uzh.ch>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Amjad Kotobi <amjadkotbi@gmail.com>
Co-authored-by: athanasio <athanasio@users.noreply.github.com>
Co-authored-by: Carlos Arango Gutierrez <arangogutierrez@gmail.com>
Allows `all` to be configured non-buildable in packages.yaml.
The following config would only allow zlib to be built by Spack; all other packages would have to be found as externals.
```
packages:
all:
buildable: False
zlib:
buildable: True
```
The openssl build process can use the wrong perl for
various reasons, including:
* Wrong value in PERL env var
* The build process first looks for `perl5`, which the
spack system does not provide, but some other
distributions provide it. That way, the build process can
end up using the wrong perl.
Stop all of these problems by explicitly setting PERL to
the dependency to be used.
* Updated versions of COSMA.
* Added an empty line for formatting.
* Switched to sha256.
* Renamed gpu variant to cuda. Extending the CudaPackage base class.
This change also adds a code path through the spack ci pipelines
infrastructure which supports PR testing on the Spack repository.
Gitlab pipelines run as a result of a PR (either creation or pushing
to a PR branch) will only verify that the packages in the environment
build without error. When the PR branch is merged to develop,
another pipeline will run which results in the generated binaries
getting pushed to the binary mirror.
Providing only $padding or ${padding} results in an attempt to
substitute a padding of maximum system path length, while leaving
room for the parts of the install path spack generates. Providing
$padding-<len> or ${padding-<len>} simply substitutes padding of
the specified length.
* Fixing the build directory for cardioid.
* These imports are no longer needed due to deletions.
Co-authored-by: Robert Blake <rob.c.blake.3@gmail.com>
* [maloc] created template
* [maloc] added some build dependencies
* [maloc] added homepage and description
* [maloc] added doc variant in an attempt to disable documentation. Still builds doc though... at least when there is a system doxygen found. Having doxygen as a dependency here can bring python in and cause issues for dependents, even when listed as only a build dependency. Maybe this is a bug?
* [maloc] flake8 and fixme removal
* [maloc] shortened description
The pkg-config file of newer versions of at-spi2-core includes
dependencies for xtst, recordproto, inputproto and fixesproto, so they
have to be available at runtime as well.
Packages built with lmod core_compiler are placed in `Core`.
Other packages may belong in `Core`. For example, python may be built with a proprietary compiler for performance, but belong in the `Core` directory.
With this PR, lmod config can include a `core_specs` list. Any package that satisfies a spec in that list is placed in `Core`, regardless of its compiler or dependencies.
* Adding variant for python interface in HELICS
* Append python interface to pythonpath
* cleaning up blank lines
* Update var/spack/repos/builtin/packages/helics/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Clarify comments about QMCPACK-to-QE converter.
* Allow hdf5=serial with QE 6.4.1 + qmcpack, but apply filter_file.
* Amend comments about the use of the filter_file.
This improves the documentation for `spack external find` in several ways:
* Provide a code example of implementing `determine_spec_details` for a package
* Explain how to define executables to look for (and also e.g. that they are treated as regular expressions and so can pull in unexpected files).
* Add the "why" for a couple of constraints (i.e. explain that this logic only works for build/run deps because it examines `PATH` for executables)
* Spread the docs between build customization and packaging sections
* Add cross-references
* Add a label so that `spack external find` is linked from the command reference.
Modifications:
- [x] Travis now uses `bionic` as a default (`xenial` used for Python 3.5, `trusty` for Python 2.6)
- [x] Shell unit tests have been factored into their own run
- [x] `kcov` is built only for tests that upload coverage results
Overall with this we shave 3-4 mins. on each run and add an additional run of about 3 min. For some reason `kcov` 38 fails forwarding output when used with Python unit tests, so I used v34 for that and v38 (latest) for shell testing. Previously we were using v25.
- [x] `z3` needs a dependency on `py-setuptools`
- [x] `z3` has a run dependency on `python`, so we might as well make
building with the python bindings default
`clingo` has some undocumented dependencies:
- `bison`, for which the default macos version is too old
- `re2c`, which isn't always available
Also, the python installation was not set up properly. Clingo by default
does an install in `~/.local`, so we need to disable that and add a few
other options to get things right.
- [x] add `bison` dependency
- [x] add `re2c` dependency
- [x] make `doxygen` dependency optional (and patch if needed)
- [x] add options to fix `python` build
- [x] make python build optional but on by default
* Add pmi support (required by ucx, ofi, and gni backends)
* Add support for ucx backend
* Add dependency on MPI for pmi=simplepmi, slurmpmi, or slurmpmi2
* Remove charmpp as an MPI provider since the changes in this PR can
add MPI as a dependency (mentioned previously)
* Install into transport_protocol-OS-arch subdirectory to match
default charmpp installation behavior (which helps dependents find it)
SKX includes AVX-512 extensions that consumer Skylake processors do
not have. Therefore, map Skylake to the prior arch to work on
these systems. Skylake-X processors will still map as the
skylake_avx512 spack arch and get the correct optimizations.
* Add maintainers.
Add variants for building with default earlier api versions.
* Update var/spack/repos/builtin/packages/hdf5/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add naromero77 as a maintainer for QMCPACK Spack package.
* Add QMCPACK 3.9.2
* Remove QE-to-QMCPACK wave function converter from QMCPACK Spack package. Already been moved to QE Spack package.
* Fix dependency of geant4 (amends #16497)
* Update geant4-data dependencies
* Reviewer comments (part 1/2)
* Reviewer comments (part 2/2)
* Update var/spack/repos/builtin/packages/geant4-data/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- add docstrings and make parameter names consistent in `relocate.py`
- Make `replace_prefix_*` and other functions private (as they are implementation details)
- remove unused function _replace_prefix_nullterm()
- Add unit tests for `relocate.py` functions
- add patchelf to Travis and use it during tests
- add hello_world fixture with a compiled binary, so we can test relocation
After migrating to `travis-ci.com`, we saw I/O issues in our tests --
tests that relied on `capfd` and `capsys` were failing. We've also seen
this in GitHub actions, and it's kept us from switching to them so far.
Turns out that the issue is that using streams like `sys.stdout` as
default arguments doesn't play well with `pytest` and output redirection,
as `pytest` changes the values of `sys.stdout` and `sys.stderr`. if these
values are evaluated before output redirection (as they are when used as
default arg values), output won't be captured properly later.
- [x] replace all stream default arg values with `None`, and only assign stream
values inside functions.
- [x] fix tests we didn't notice were relying on this erroneous behavior
This adds the `url` alternative `urls` to `package.all_urls`. With
this addition, one can again find new versions with
`spack versions <package>` for packages whose `urls` are populated
from a mixin mirror.
Example: `util-macros` from x.org mixin.
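For reference, a rough sketch of what such a package looks like (the class body and the second, mirror URL are illustrative):
```python
from spack import *


class UtilMacros(AutotoolsPackage):
    """The X.Org util-macros package (sketch)."""

    homepage = "https://www.x.org/"
    # all entries below are scanned by 'spack versions' via package.all_urls
    urls = [
        "https://www.x.org/archive/individual/util/util-macros-1.19.2.tar.bz2",
        "https://mirror.example.org/xorg/util/util-macros-1.19.2.tar.bz2",
    ]
```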
* Non-interactive mode for spack checksum; allow passing 'package@version' to spack checksum
* Flake8 fixes
* Update checksum.py
Fix typo
* Update spack-completion script
* Automatically set non-interactive mode if more than one version passed
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add documentation and update spack-completion
* Flake8
* Rename option
* Update spack-completion
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update checksum.py
* Update stage.py
* Update create.py
Use batch mode when adding a new package
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This fixes some errors with setting up test configuration. These
errors do not cause current Spack tests to fail but do create
red herring issues elsewhere (see #15666). Fixing these errors
leads to more errors in tests that depended on the original
misconfigured state, so those are also addressed here.
This is an update to #16003 which accounts for some unit tests with
conflicting config/mutable_config fixtures. These conflicts were
not exposed until the mutable_config fixture was fixed. Details are
included below. The change which builds on #16003 is prefixed with
"(new)".
* For tests that use the real Spack package repository, the config
needs to avoid using MPI providers that are not intended to be
installed by Spack. Without this, it is possible that Spack tests
which concretize the MPI virtual will end up trying to use an
implementation that it shouldn't (e.g. one that is always
provided externally). See #15666 for an example.
* The mutable_config test fixture was not initializing the scope
roots to the right directories (so the resulting config was empty).
* The current_host fixture in the concretize.py tests was using the
config fixture rather than mutable_config, and was polluting the
config cache for other tests.
* One test in concretize.py was clearing a nonexistent cache
(PackagePrefs._packages_config_cache). This reference has been
removed.
* The test 'test_preferred_compilers' was depending on cross
test config pollution to succeed. The initial spec before
concretization has been updated to be explicit about
the desired result.
* (new) For tests that use install_mockery and mutable_config,
replace install_mockery with a separate install_mockery_mutable_config
fixture that is exactly the same as install_mockery but uses the
mutable_config fixture to avoid conflicts.
* Adapt to the latest Acts developments
A long time ago, the Acts project (whose name was then capitalized ACTS) used
to maintain multiple software repositories:
- The heart of the tracking toolkit was located in the `acts-core` repository
- Fast simulation extensions were located in the `acts-fatras` repository
- Advanced usage examples were located in the `acts-framework` repository
This multi-repository organization, however, has been a source of constant
pain, which is why the various projects were gradually merged into a single
mono-repo, called `acts`. Today, with the integration of `acts-framework`,
this merging process is reaching completion.
The present pull request adapts the Acts package to this evolution by...
- Renaming the package to `acts`, reflecting the new repository name
- Renaming the `test` variant to `unit_tests`, reflecting current CMake naming
- Adding the new build variants that were inherited from `acts-framework`
- Acknowledging the change of semantics of the `examples` variant, and only
supporting the new ones (as the former variant was almost unused)
- Liberally using alphabetical order to make the package code more readable
- Recording a large number of conflicts, some of which are introduced by the
merging of `acts-framework` and some of which already existed before
- Using the new capitalization of "Acts"
* Add acts v0.23
* Update dd4hep version requirement
* Add acts v0.22.1 bugfix
Fixed#15884.
Spack asks every package linked into an environment to tell us how
environment variables should be modified when a spack environment is
activated. As part of this, specs in an environment are symlinked into
the environment's view (see #13249), and the package calculates
environment modifications with *the default view as the prefix*.
All of this works nicely for pointing the user's environment at the view
*if* every package is successfully linked. Unfortunately, right now we
only track what specs "should" be in a view, not which specs actually
are. So we end up calculating environment modifications on things that
aren't linked into the view, and the exception isn't caught, so lots of
spack commands end up failing.
This fixes the issue by ignoring and warning about specs where
calculating environment modifications fails. So we can still keep using
Spack even if the current environment is incomplete.
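A rough sketch of that guard, with hypothetical helper names (`specs_in_view`, `modifications_for`); not the actual implementation:
```python
import llnl.util.tty as tty
from spack.util.environment import EnvironmentModifications


def collect_env_modifications(specs_in_view, modifications_for):
    """Gather env modifications, skipping specs whose packages error out."""
    env_mods = EnvironmentModifications()
    for spec in specs_in_view:
        try:
            env_mods.extend(modifications_for(spec))
        except Exception as e:
            # the spec may not actually be linked into the view, or its
            # setup_run_environment() may be buggy; warn and keep going
            tty.warn('could not get environment settings for {0}: {1}'
                     .format(spec.name, e))
            continue
    return env_mods
```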
We should probably also just avoid computing env modifications *entirely*
for unlinked packages, but right now that is a slow operation (requires a
lot of YAML parsing). We should revisit that when we have some better
state management for views, but the fix adopted here will still be
necessary, as we want spack commands to be resilient to other types of
bugs in `setup_run_environment()` and friends. That code is in packages
and we have to assume it could be buggy when we call it outside of builds
(as it might fail more than just the build).
* openfoam: support building with the Fujitsu compiler.
* openfoam: add rules for Fujitsu compiler (on linuxARM64)
- the Fujitsu compiler is a clang derivative, so use a modified
version of the clang rules if upstream does not supply anything
If using mpirun, the R sessions can be started with a wrapper script
that helps set up the R session cluster. Put this wrapper in the PATH so
it is easily accessible.
Add a `spack external find` command that tries to populate
`packages.yaml` with external packages from the user's `$PATH`. This
focuses on finding build dependencies. Currently, support has only been
added for `cmake`.
For a package to be discoverable with `spack external find`, it must define:
* an `executables` class attribute containing a list of
regular expressions that match executable names.
* a `determine_spec_details(prefix, specs_in_prefix)` method
Spack will call `determine_spec_details()` once for each prefix where
executables are found, passing in the path to the prefix and the path to
all found executables. The package is responsible for invoking the
executables and figuring out what type of installation(s) are in the
prefix, and returning one or more specs (each with version, variants or
whatever else the user decides to include in the spec).
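To make the contract concrete, here is a minimal sketch of a package defining both hooks; the regex and version parsing are illustrative, not the actual cmake package, and the imports are spelled out here only for clarity (real package files just use `from spack import *`):
```python
import os
import re

from spack.package import Package
from spack.spec import Spec
from spack.util.executable import Executable


class Cmake(Package):
    """Sketch of the two hooks used by 'spack external find'."""

    # regular expressions matched against executable names found on PATH
    executables = [r'^cmake$']

    @classmethod
    def determine_spec_details(cls, prefix, specs_in_prefix):
        # the second argument holds the paths of all matching executables
        # found under 'prefix'; ask one of them for its version
        exe = next(p for p in specs_in_prefix if os.path.basename(p) == 'cmake')
        output = Executable(exe)('--version', output=str, error=str)
        match = re.search(r'cmake.*version\s+(\S+)', output)
        if match:
            return Spec('cmake@{0}'.format(match.group(1)))
```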
The found specs and prefixes will be added to the user's `packages.yaml`
file. Providing the `--not-buildable` option will mark all generated
entries in `packages.yaml` as `buildable: False`
Cray has two machine types. "XC" machines are the larger
machines more common in HPC, but "Cluster" machines are
also cropping up at some HPC sites. Cluster machines run
a slightly different form of the CrayPE programming environment,
and often come without default modules loaded. Cluster
machines also run different versions of some software, and run
a linux distro on the backend nodes instead of running Compute
Node Linux (CNL).
Below are the changes made to support "Cluster" machines in
Spack. Some of these changes are semi-related general upkeep
of the cray platform.
* cray platform: detect properly after module purge
* cray platform: support machines running OSs other than CNL
Make Cray backend OS delegate to LinuxDistro when no cle_release file
favor backend over frontend OS when name clashes
* cray platform: target detection uses multiple strategies
This commit improves the robustness of target
detection on Cray by trying multiple strategies.
The first one that produces results wins. If
nothing is found only the generic family of the
frontend host is used as a target.
* cray-libsci: add package from NERSC
* build_env: unload cray-libsci module when not explicitly needed
cray-libsci is a package in Spack. The cray PrgEnv
modules load it implicitly when we set up the compiler.
We now unload it after setting up the compiler and
only reload it when requested via external package.
* util/module_cmd: more robust module parsing
Cray modules have documentation inside the module
that is visible to the `module show` command.
Spack module parsing is now robust to documentation
inside modules.
* cce compiler: uses clang flags for versions >= 9.0
* build_env: push CRAY_LD_LIBRARY_PATH into everything
Some Cray modules add paths to CRAY_LD_LIBRARY_PATH
instead of LD_LIBRARY_PATH. This has performance benefits
at load time, but leads to Spack builds not finding their
dependencies from external modules.
Spack now prepends CRAY_LD_LIBRARY_PATH to
LD_LIBRARY_PATH before beginning the build.
* mvapich2: setup cray compilers when on cray
previously, mpich was the only mpi implementation to support
cray systems (because it is the MPI on Cray XC systems).
Cray cluster systems use mvapich2, which now supports cray
compiler wrappers.
* build_env: clean pkgconf from environment
Cray modules silently add pkgconf to the user environment
This can break builds that do not use pkgconf.
Now we remove it from the environment and add it again if it
is in the spec.
* cray platform: cheat modules for rome/zen2 module on naples/zen node
Cray modules for naples/zen architecture currently specify
rome/zen2. For now, we detect this and return zen for modules
named `craype-x86-rome`.
* compiler: compiler default versions
When detecting compiler default versions for target/compiler
compatibility checks, Spack previously ran the compiler without
setting up its environment. Now we setup a temporary environment
to run the compiler with its modules to detect its version.
* compilers/cce: improve logic to determine C/C++ std flags
* tests: fix existing tests to play nicely with new cray support
* tests: test new functionality
Some new functionality can only be tested on a cray system.
Add tests for what can be tested on a linux system.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Since #9481 Python's None is not permitted as a value for
MV variants. The string 'none' is used instead.
Add the same fix for the amgx and lammps packages
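As a sketch, the directive in such a package ends up looking like this (the variant name and values are illustrative, not taken from amgx or lammps):
```python
from spack import *


class ExamplePackage(Package):
    # Since #9481, Python's None is rejected for multi-valued variants;
    # the string 'none' is the explicit "no value" choice instead.
    variant('backends', default='none', multi=True,
            values=('none', 'classical', 'aggregation'),
            description='Optional backends to enable')
```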
* CMake: fix https://github.com/spack/spack/issues/16453 with a patch addressing both libhugetlbfs and icpc warnings on Cray XC40 systems
* Including CMake v3.17.2 in the patched versions
* libspatialite
* flake8
* Added proper version constrain
* Update var/spack/repos/builtin/packages/libspatialite/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/libspatialite/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Otherwise, if I run `xenv` after `spack load py-xenv` it fails with:
```
Traceback (most recent call last):
File "/home/vavolkl/spack/opt/spack/linux-centos7-broadwell/gcc-8.3.0/py-xenv-develop_2018-12-20-lqbxakapsepqo5w3sjhhokj5o7c5jei2/bin/xenv", line 6, in <module>
from pkg_resources import load_entry_point
ModuleNotFoundError: No module named 'pkg_resources'
```
* [gaudi] fixes and patches
* Update var/spack/repos/builtin/packages/gaudi/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/gaudi/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [gaudi] add older versions and fold +tests into +optional
* [gaudi] set run environment
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
If spack is checked out in a git worktree (see [1]), all git-related
commands fail because the `spack_is_git_repo()`-check is not thorough
enough.
When developing in a feature branch in a separate worktree, this is
annoying as all unittests regarding git-related spack commands fail,
cluttering the test results with false-positives.
[1]: https://git-scm.com/docs/git-worktree
Change-Id: I94b573a2c0e058e9ccc169e7ee6561626fbb06fd
We can control the shared/static build of CMake and the default in
Spack is to build shared libraries. The old, uncontrolled default
of this package is a static build.
* Revise description of patch variant.
* Add qmcpack variant. Apply QE-to-QMCPACK wave function converter patch.
* Clean-up, document, and re-organize.
* ELPA patches did not need the when=`+patch` variant.
* Need to be more precise here with QE version numbers.
* satisfies seems to be necessary here in order to get correct behaviour.
* Buglet with zlib link line.
* Update var/spack/repos/builtin/packages/quantum-espresso/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/quantum-espresso/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix for QE-to-QMCPACK wave function converter w.r.t. QE 6.3. Also adjust comments to reflect changes in code.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* dev-build: --drop-in <shell>
Add a `--drop-in <shell>` option to `spack dev-build`.
This option will automatically run a
`spack build-env <spec> -- <shell>` at the end of a `dev-build`, e.g.
to quickly drop-and-devel into a build phase of a package.
Example usage:
```
spack dev-build --before cmake --drop-in bash openpmd-api@develop
```
* build_env: drop in unit test
Co-authored-by: Greg Becker <becker33@llnl.gov>
mfem variant hypre is now rolled into variant mpi - so update spec accordingly
mfem@4.0.1-xsdk+superlu-dist is broken and unsupported - so disable it
with the addition of py-petsc4py@3.13.0 the concretizer gets confused and is not picking py-petsc4py@3.12.0 as a compatible dependency with petsc@3.12, so manually specify it.
Also, depends_on('py-libensemble@0.5.2+petsc4py ^py-petsc4py@3.12.0') causes the concretizer to hang forever
Generally speaking, errors that are encountered when attempting to load
command extensions now terminate the running Spack instance.
* Added new exceptions `spack.cmd.PythonNameError` and
`spack.cmd.CommandNameError`.
* New functions `spack.cmd.require_python_name(pname)` and
`spack.cmd.require_cmd_name(cname)` check that `pname` and `cname`
respectively meet requirements, throwing the appropriate error if not.
* `spack.cmd.get_module()` uses `require_cmd_name()` and passes through
exceptions from module load attempts.
* `spack.cmd.get_command()` uses `require_cmd_name()` and invokes
`get_module()` with the correct command-name form rather than the
previous (incorrect) Python name.
* Added new exceptions `spack.extensions.CommandNotFoundError` and
`spack.extensions.ExtensionNamingError`.
* `_extension_regexp` has a new leading underscore to indicate expected
privacy.
* `spack.extensions.extension_name()` raises an `ExtensionNamingError`
rather than using `tty.warn()`.
* `spack.extensions.load_command_extension()` checks command source
existence early and bails out if missing. Also, exceptions raised by
`load_module_from_file()` are passed through.
* `spack.extensions.get_module()` raises `CommandNotFoundError` as
appropriate.
* Spack `main()` allows `parser.add_command()` exceptions to cause
program end.
Tests:
* More common boilerplate has been pulled out into fixtures including
`sys.modules` dictionary cleanup and resource-managed creation of a
simple command extension with specified contents in the source file
for a single named command.
* "Hello, World!" test now uses a command named `hello-world` instead of
`hello` in order to verify correct handling of commands with hyphens.
* New tests for:
* Missing (or misnamed) command.
* Badly-named extension.
* Verification that errors encountered during import of a command are
propagated upward.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Add more checksummed versions
* Remove all versions not supported by autotools build method
* Add old build system for older versions
* Add suggested changes
* add amgx package
* add amgx variants for mkl and magma support
* fix typo in cmake option
* flake8 fix formatting
* Apply suggestions from code review - use mkl virtual provider
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review - fix copypasta
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add additional variants to netcdf-fortran
* Fix duplicate variant
* Clean up variants based on review feedback
* Additional variant changes
* Convert jna variant to single line
* Fix proper version constraints for jna variant
* cp2k: prettify arch-file, call pkg-config directly
this allows re-using the arch-file without having to load the complete
Spack environment, for example after a dev-build
* cp2k: use consistency check instead of blas lib enum
this makes using other BLAS/LAPACK implementations possible without
explicitly adding support for them
* cp2k: add basic support for Cray and XL Compilers, correct Intel fp mode
* cp2k: add myself as maintainer
* cp2k: use "master" to denote the git version
* cp2k: use spack_cc/fc/cxx when possible, set CXX explicitly
* cp2k: set __MKL when using the MKL, not just the Intel compiler
* cp2k: drop self. when referencing spec where possible
* cp2k: add forgotten elpa+openmp dep
* cp2k: set C++14 for recent versions
* Add new version: "Abseil LTS branch, Feb 2020, Patch 1"
* Build shared libraries by default with new version
* Older versions do not support building shared libraries
* Update bat and make the url dynamic
- Now, depending on the version, it will calculate
the url
- This also fixes a weird issue that was reported
on Darwin: back when I reported that Rust wasn't
linking properly on Darwin (#15887), in the comment
by hartzell, I was also experiencing this issue
* Remove unnecessary stuff
- Removes the need for LLVM
- Removes the need for version calculation.
* Keep the versions on 1 line
* Pass flake8 tests
* Add new package exa
* Format and fix a silly typo
* Fix SHA256 SUM and make URL calculation dynamic
* Remove unnecessary URL calculation
* Update package.py
* Keep the version on 1 line
* Pass flake8 checks
* Add ShengBTE
Adding a new package: ShengBTE. I tried adding it as a MakefilePackage and using build_directory = 'Src', but it was as if build_directory gets ignored and make complains about the target not being found. And when using the make function here instead of os.system, I get errors that config.f90 is not found, even though it is available under Src.
* more enhancements
fix lint; use mkl spec; use build_directory variable
* one more fix
* Use Makefile template
* Update var/spack/repos/builtin/packages/shengbte/package.py
use mkl instead of intel-mkl as a dependency
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/shengbte/package.py
update recipe as suggested by reviewer
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* enhance recipe
remove white space; changes as suggested by reviewer
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixed SSL pathing for older versions of Python (i.e. @:3.6.999).
* Fixed an issue where the 'python~ssl' variant wasn't properly being respected.
* Improved the '~ssl' patch by making it functional instead of diff-based (enables 3.X.Y patches).
* Fixed comment formatting to satisfy 'flake8' style requirements.
On Power architectures cuDNN will install in a target directory. This
sets cuDNN_ROOT to point to the subdirectory to help dependents use
this install.
The singularity info should actually suggest where you might find the info about the post-install steps.
Co-authored-by: george.hartzell <george.hartzell@sana.com>
This PR introduces trivial refactoring in:
- `get_existing_elf_rpaths`
- `get_relative_elf_rpaths`
- `get_normalized_elf_rpaths`
- `set_placeholder`
mainly to be more consistent with practices used in other
parts of the code and to simplify functions locally. It also
adds or reworks unit tests for these functions and extends
their docstrings.
Co-authored-by: Patrick Gartung <gartung@fnal.gov>
Co-authored-by: Peter J. Scheibel <scheibel1@llnl.gov>
Building the `py-jupyter` stack on macOS with AppleClang breaks on
the `py-qtconsole` -> `py-qtconsole` -> `qt +opengl` package build
environment setup with:
```
==> Error: AttributeError: Query of package 'mesa' for 'libs' failed
...
==> Error: Failed to install qt due to ChildError: AttributeError: Query of package 'mesa' for 'libs' failed
```
This tries to add more library targets built by `mesa` to avoid this.
* LLVM: Python Dependency
Effort to expose the linked python library when building LLVM.
This might fix the forward propagation of libintl that comes
with the static python library build on darwin.
* LLDB Py: Remove Ignored Old Flags
Changed in LLVM 10.0+
Packages in Spack are classes, and we need to be able to execute class
methods on mock packages. The previous design used instances of a single
MockPackage class; this version gives each package its own class that can
spider depenencies. This allows us to implement class methods like
`possible_dependencies()` on mock packages.
This design change moves mock package creation into the
`MockPackageMultiRepo`, and mock packages now *must* be created from a
repo. This is required for us to mock `possible_dependencies()`, which
needs to be able to get dependency packages from the package repo.
Changes include:
* `MockPackage` is now `MockPackageBase`
* `MockPackageBase` instances must now be created with
`MockPackageMultiRepo.add_package()`
* add `possible_dependencies()` method to `MockPackageBase`
* refactor tests to use new code structure
* move package mocking infrastructure into `spack.util.mock_package`,
as it's becoming a more sophisticated class and it gets lost in `conftest.py`
The variants table in `spack info` is cramped, as the *widest* it can be
is 80 columns. And that's actually only sort of true -- the padding
calculation is off, so it still wraps on terminals of size 80 because it
comes out *slightly* wider.
This change looks at the terminal size and calculates the width of the
description column based on it. On larger terminals, the output looks
much nicer, and on small terminals, the output no longer wraps.
Here's an example for `spack info qmcpack` with 110 columns.
Before:
Name [Default]       Allowed values       Description
==================== ==================== ==============================
afqmc [off]          on, off              Install with AFQMC support.
                                          NOTE that if used in
                                          combination with CUDA, only
                                          AFQMC will have CUDA.
build_type [Release] Debug, Release,      The build type to build
                     RelWithDebInfo
complex [off]        on, off              Build the complex (general
                                          twist/k-point) version
cuda [off]           on, off              Build with CUDA
After:
Name [Default]       Allowed values       Description
==================== ==================== ========================================================
afqmc [off]          on, off              Install with AFQMC support. NOTE that if used in
                                          combination with CUDA, only AFQMC will have CUDA.
build_type [Release] Debug, Release,      The build type to build
                     RelWithDebInfo
complex [off]        on, off              Build the complex (general twist/k-point) version
cuda [off]           on, off              Build with CUDA
This allows horovod to be built with frameworks=pytorch,tensorflow.
I tracked down the crash I observed in #15719, where loading torch
before tensorflow would cause a crash in:
google::protobuf::internal::(anonymous
namespace)::InitSCC_DFS(google::protobuf::internal::SCCInfoBase*)
The solution is to make tensorflow compile against the protobuf
version Spack provides, instead of allowing it to use its own.
It's likely we'll want to go after some of the others
that are listed in third_party/systemlibs/syslibs_configure.bzl
in the future.
* Update var/spack/repos/builtin/packages/py-sqlparse/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package py-sqlparse
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The major building blocks in many software stacks:
- CPython
- CMake (libuv)
do not build on macOS with GCC. The main problem is that some macOS
framework includes pull in objective-c code and that code does get
misinterpreted as (invalid) C by GCC by default.
Last month VTK-m released its latest version, `v1.5.1`. This new
release only contains bugfixes related to compiler errors/warnings.
- Depends on CMake >= 3.12
- Set VTKm_NO_ASSERT=ON by default
- add maintainers
Signed-off-by: Vicente Adolfo Bolea Sanchez <vicente.bolea@kitware.com>
Update compiler config with bootstrapped compiler when it was already installed and added config defaults to code so mutable_config test fixture works.
hwloc depends on MPI when netloc is enabled. Note that OpenMPI depends on
netloc, so hwloc cannot use OpenMPI as the MPI provider when netloc is
enabled (this would result in a cyclic dependency).
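In package terms this is just a conditional dependency, roughly (the variant name is assumed):
```python
from spack import *


class Hwloc(AutotoolsPackage):
    variant('netloc', default=False, description='Enable netloc support')

    # netloc needs MPI, and OpenMPI cannot be the provider here because
    # OpenMPI itself depends on netloc (a cyclic dependency otherwise)
    depends_on('mpi', when='+netloc')
```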
To specify an environment for a command, the user can specify
"spack -e <env>". The documentation incorrectly specified "-E" (which
is actually used to ignore any implicit use of environments).
While building _visit_, I ran into an undefined symbol at link time. I tracked
the missing dependency to _libsm_ needing to know about _libuuid_ at link time.
* phist: add int64 variant and resulting conflicts and dependencies
* phist: use Trilinos TPLs as soon as they are in the spec, not just if +trilinos is explicitly set
and remove a redundant depends-statement
* phist: use int as gotype for Trilinos dependency if ~int64
* phist: new version 1.9.0
* phist: remove trailing whitespace
* phist: updated checksum (version tag was moved)
* Fix: Flex Reconfigure
Teach the `flex` package how to reconfigure itself when needed.
Fix#11551
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* Autoreconf: only when actually desired
Co-authored-by: Andrew W Elble <aweits@rit.edu>
If the Spack compiler wrapper encounters any "-isystem" option, then
when adding include directories for Spack dependencies, Spack will
use "-isystem" instead of "-I". This prevents Spack-generated "-I"
options from overriding the "-isystem" options generated by the build
system. To ensure that build-system "-isystem" directories are
searched first, Spack places all of its inserted "-isystem"
directories after.
The new ordering of -isystem includes is:
* -isystem from build system (not system directories)
* -isystem from Spack
* -isystem from build system (for directories like /usr/include)
The prior order of "-I" arguments is preserved (although as of this
commit Spack no longer generates -I if -isystem is detected):
* -I from build system (not system directories)
* -I from Spack (only if there are no "-isystem" options)
* -I from build system (for directories like /usr/include)
* New package: sumo
This PR adds the sumo package, as well as the fox package as a
dependency. It also updates and adds some fixes for openscenegraph.
For fox, the patch is for the development version. That patch should not
be necessary in future versions as it has been applied upstream. The
stable version is 1.6.57 and is marked as preferred. This is the version
needed for sumo.
Added dependencies for openscenegraph as well as set constraints on qt
versions.
* Update var/spack/repos/builtin/packages/sumo/package.py
I had intended to set this version constraint but somehow did not.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add dependency types to sumo recipe
- googletest: 'test'
- swig: 'build'
- java: 'build', 'run'
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since #16132, we've consolidated the setting of FORCE_UNSAFE_CONFIGURE to
`autotools.py`, so we don't need to use it in packages like `coreutils`,
in our commands, or in our container recipes.
- [x] Remove FORCE_UNSAFE_CONFIGURE from packages
- [x] Remove FORCE_UNSAFE_CONFIGURE from container recipes
- [x] Remove FORCE_UNSAFE_CONFIGURE from `spack ci` command
* [mfem] A few updates: add 'strumpack' variant; add 'zlib'
variant (same as 'gzstream'); fix optimization flag
for v4.0.
* [mfem] flake8 fix
* [mfem] Add version 4.1
* [mfem] Add/tweak some 'conflicts' directives.
* [gslib] Add new release versions + 'develop' version.
* [petsc] Restrict hdf5 version to <= 1.10.99 since 1.12.0 fails
* [metis] Use the original metis url for v4.0.3.
* [conduit] Remove restrictions to the used hdf5 variant to allow
building with other packages that use hdf5, e.g. petsc.
* [mfem] Few updates:
* Replace the 'gzstream' variant with 'zlib' variant.
* Do not add system library paths with -L flags.
* Allow '+pumi+shared' variant.
* Update the 'test_builds.sh' script.
* [occa] Add version 1.0.9.
* [mfem] Some OCCA and RAJA updates.
* [gslib] Fix the build for new versions of the library.
* [mfem] Add 'gslib' variant for GSLIB.
* [mfem] Add 'cuda' variant.
* [mfem] Add 'libceed' variant + a few more tweaks.
* [mfem] Add 'umpire' variant.
* [ceed] Add a draft for v3.0. Not tested. Just made sure that
concretization works for 'ceed' and 'ceed+cuda'.
* [nek] Fix Nek5000/NekCEM
* [nek] Add Nek5000-v19 & polishing Nek packages
* [flake8] Fix flake8 failure
* petsc: use of HDF5 does not care about +hl+fortran
* [petsc] Temporarily allow any hypre version with petsc@develop.
[ceed] Remove the requirement for hypre@develop.
* [libceed] Do not explicitly set NVCCFLAGS for v0.5 and later.
* [laghos] Add version 3.0, pointing to dev branch for now.
Do not set CXX at the make command line.
Simplify the dependency directives a little.
[ceed] Use laghos v3.0 for ceed v3.0.0.
* [laghos] Keep the injection of CXX in the makefile for laghos
versions <= 2.0.
* [nekcem] Recover hash-versions used by older versions of the
'ceed' package.
* [occa] Disable hip autodetection because it fails on some machines.
* [laghos] Update v3.0 with the actual release source.
* [suite-sparse] Explicitly add the c11 flag to CFLAGS.
* Update package.py (#15749)
* [magma] Add forgotten specification of the 'cuda_arch' variant.
* [ceed] Use magma v2.5.3 for ceed v3.0.
* libceed-0.6
* mfem: depend on libceed 0.6:, not 0.6.0:
* [libceed] Add 'magma' variant -- enable MAGMA backend.
* [ceed] In v3.0, use '+magma' variant of libceed when cuda is enabled.
* Initial package for Remhos (needs to be updated with actual sha256)
* Adding Remhos to CEED-3.0, for now @develop
* petsc: add 3.13.0 (using petsc-lite) and 3.12.5
* ceed: update to petsc@3.13.0:3.13.99
* Temporary fix
* [nekcem] Add hash-version for ceed v3.0.
* [nek5000] Simplify source urls.
* [nektools] Use the same sources and versions as in nek5000.
* [ceed] Update Nek-related package versions.
* libceed: add v0.6 portability fix
* libceed: better v0.6 portability fix
* Adding Remhos 1.0 release in CEED-3.0
* Updating hash for Remhos-1.0
* [petsc] Add cuda variant.
* [libceed] Flake8 fix.
* [petsc] Add cuda variant.
* [ceed] Fix the OCCA version to 1.0.9. Enable petsc+cuda when
compiling ceed@3.0.0+cuda.
* nek5000: fix python 2.7+ syntax
* [laghos] Fix testing.
* [remhos] Fix testing.
* [remhos] For testing use the 'tests' target instead of 'test'.
* Add/update the maintainers for ceed, libceed, mfem, laghos, and remhos.
* [ceed] Remove unnecessary dependencies.
* libceed: activate AVX when supported
Co-authored-by: Thilina Rathnayake <thilinarmtb@gmail.com>
Co-authored-by: Jed Brown <jed@jedbrown.org>
Co-authored-by: Stan Tomov <tomov@eecs.utk.edu>
Co-authored-by: Tzanio <tzanio@llnl.gov>
This commit sets the `FORCE_UNSAFE_CONFIGURE` environment variable to 1 in autotools builds.
We see a lot of builds popping up and complaining about `FORCE_UNSAFE_CONFIGURE`. This behavior is not actually part of `autoconf` per se. It comes from this patch to `mknod.m4`, which is used by a lot of autoconf builds:
* https://lists.gnu.org/archive/html/bug-gnulib/2010-07/msg00282.html
Which originated from this problem that someone had on AIX:
* https://lists.gnu.org/archive/html/bug-gnulib/2010-07/msg00279.html
The gist of the problem seems to be that they want to check whether `mknod` can do something as root, but instead of checking whether they're running as root and using `su` or something to test this, they just made it harder to run `configure` as root.
This seems very ad hoc and this is one of many checks that are run as root in `configure`. Many of them run before this check, so it's not clear that the `FORCE_UNSAFE_CONFIGURE` thing is even preventing bad things from happening.
So:
1. This only happens in `autotools` builds, so we should go ahead and put it into `autotools.py` instead of in the global build environment, and
2. The variable does too little and provides a false sense of security in the first place, so we'll just disable it and avoid the nuisance. If we really feel strongly about this we can put some warnings in Spack about running as root, but at the top level, not in the middle of an already running script like `configure`.
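A sketch of where the variable would be set under this approach (the exact hook inside `autotools.py` is an assumption, not the actual change):
```python
from spack.package import PackageBase


class AutotoolsPackage(PackageBase):
    def setup_build_environment(self, env):
        # let 'configure' proceed when the build runs as root (e.g. in
        # containers) instead of failing in gnulib's mknod.m4 check
        env.set('FORCE_UNSAFE_CONFIGURE', '1')
```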
* Throw an error at spack install invocation instead of most of the way through the build process when cuda_arch is unspecified.
* Clean-up of CMake booleans. No actual change.
* Use CMake variables for hwloc and libelf installation directories and avoid injecting extra flags into CMAKE_CXX_FLAGS
* Conflict should only exist for +cuda variant.
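One way to express that early failure, as a sketch (the conflict form and message are assumptions):
```python
from spack import *


class ExampleCudaPackage(CMakePackage, CudaPackage):
    # CudaPackage's cuda_arch defaults to 'none', so fail at 'spack install'
    # time rather than most of the way through the build
    conflicts('cuda_arch=none', when='+cuda',
              msg='a CUDA architecture must be specified, e.g. cuda_arch=70')
```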
* Biopandas python package
* Update var/spack/repos/builtin/packages/py-biopandas/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-biopandas/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* remove scipy dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
At least the v2.0.2.0 tar ball contains compiled object files etc, which
cause the build to fail on other architectures (ppc64le in particular), so
this patch adds a `make clean` after configuring first.
* SourceForge: Mirror Mixin
Add a mixin class for direct `CNAME`s to sourceforge mirrors.
Since the main gateway servers are often down, this could reduce
timeouts and fetch errors for sourceforge.net hosted software.
* SourceForge: unspectacular mirror replacement
add mirrors to all sourceforge packages with trivial
download logic.
tested fetch of latest version of each of these packages
with various mirrors before committing.
* SourceForge: xz
the author homepage is chronically overrun and this is the official
upload with many mirrors.
* macOS: Fix emacs Linking
Fix linking issue of emacs on macOS (clang and gcc).
Applies the same work-around as conda-forge:
b051f6c928/recipe/build.sh
Homebrew avoids this by linking against the system ncurses lib:
https://github.com/Homebrew/homebrew-core/blob/master/Formula/emacs.rb
* ncurses: fix outdated variant comment
this comment was built on the assumption that gnutls
triggers a termlib dependency in emacs. that's not the
case, ncurses itself depends on termlib when built with
this feature.
libpng still has its sourceforge page but is actively being
developed on github.
since the sourceforge urls are too often down (as seen in
my nightly CI/CD tests), just switch the download source to
GitHub instead.
`DYLD_LIBRARY_PATH` can frequently break builtin macOS software when
pointed at Spack libraries. This is because it takes *higher* precedence
than the default library search paths, which are used by system software.
`DYLD_FALLBACK_LIBRARY_PATH`, on the other hand, takes lower precedence.
At first glance, this might seem bad, because the software installed by
Spack in an environment needs to find *its* libraries, and it should not
use the defaults. However, Spack's installations are always `RPATH`'d,
so they do not have this problem.
`DYLD_FALLBACK_LIBRARY_PATH` is thus useful for things built in an
environment that need to use Spack's libraries, that don't set *their*
RPATHs correctly for whatever reason. We now prefer it to
`DYLD_LIBRARY_PATH` in modules and in environments because it helps a
little bit, and it is much less intrusive.
This commit removes the DYLD_LIBRARY_PATH variable from the default
modules.yaml for darwin. The rationale behind deleting this
environment variable is that paths in this environment variable take
precedence over the default locations of libraries (usually the
install path of the library), which can lead to linking errors in some
circumstances. For example, executables intended to link with Apple's
system BLAS and LAPACK will instead link to a spack-installed
implementation (e.g., OpenBLAS), causing runtime errors.
These errors are resolved by instead relying on paths set in
DYLD_FALLBACK_LIBRARY_PATH, which is lower in precedence than default
locations of libraries.
provided (#15662).
Prior to this fix, the checked Spec object would not be populated, and
concretization would fail.
Co-authored-by: Marc Allen <mrcall@amazon.com>
Mesa links against libtinfo so needs to depend on ncurses. It also needs
a little help finding the library directory so an LDFLAGS configure
option is added.
* MPark.Variant: GCC 7.3.1 Conflict
Due to an ICE in this specific patch-release of GCC, compile
errors in downstream packages should be avoided with a clean
conflict.
* Fix superfluous spaces
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Fix typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add Cubist
* enhance recipe
* Not using OS module anymore
* remove white space
* Fix build shell
* make Flake8 happy
* use bash shell for build
* Convert it To MakefilePackage as per peer-review
* Update var/spack/repos/builtin/packages/neo4j/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package: neo4j
* refine neo4j package
* fix flake8 warning
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* update: memsurfer with python3
* flake8 compliance
* Update var/spack/repos/builtin/packages/memsurfer/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/memsurfer/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/memsurfer/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* removed build_type preferences at adamjstewart's suggestion
* Added build/run dependency on python3.7
as suggested by adam stewart
* more flake8 horror!
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`spack test` has a spurious '[+] ' in the output:
```
lib/spack/spack/test/install.py .........[+] ......
```
Output is properly suppressed:
```
lib/spack/spack/test/install.py ...............
```
* sonLib package as required by the HAL toolkit
* cleanup
* Update var/spack/repos/builtin/packages/sonlib/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/sonlib/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Makes the following changes:
* (Fixes#15620) tty configuration was failing when stdout was
redirected. The implementation now creates a pseudo terminal for
stdin and checks stdout properly, so redirections of stdin/out/err
should be handled now.
* Handles terminal configuration when the Spack process moves between
the foreground and background (possibly multiple times) during a
build.
* Spack adjusts terminal settings to allow users to enable/disable
build process output to the terminal using a "v" toggle; abnormal
exit cases (like CTRL-C) could leave the terminal in an unusable
state. This is addressed here with a special-case handler which
restores terminal settings.
Significantly extend testing of process output logger:
* New PseudoShell object for setting up a master and child process
and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
object
* Added `uniq` function which takes a list of elements and replaces
any consecutive sequence of duplicate elements with a single
instance (e.g. "112211" -> "121")
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The performance improvements done in #14693 where leaving the DB in an inconsistent state when specs were removed from it. This PR updates the DB internal state whenever the DB is written to a file.
Note that we still cannot properly enumerate installed dependents, so there is a TODO in this code. Fixing that will require the dependents dictionaries in specs to be re-keyed (either by hash, or not keyed at all -- a list would do). See #11983 for details.
Reading the database repeatedly can be quite slow. We need a way to speed
up Spack when it reads the DB multiple times, but the DB has not been
modified between reads (which is nearly all the time).
- [x] Add a file containing a unique uuid that is regenerated at database
write time. Use this uuid to suppress re-parsing the database
contents if we know a previous uuid and the uuid has not changed.
- [x] Fix mutable_database fixture so that it resets the last seen
verifier when it resets.
- [x] Enable not rereading the database immediately after a write. Make
the tests reset the last seen verifier in between tests that use the
database fixture.
- [x] make presence of uuid module optional
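A rough sketch of the verifier idea (file name, attribute names, and structure are assumptions, not Spack's actual Database code):
```python
import os
import uuid


class Database(object):
    def __init__(self, root):
        self._index_path = os.path.join(root, 'index.json')
        self._verifier_path = os.path.join(root, 'index_verifier')  # hypothetical
        self._last_seen_verifier = ''
        self._data = {}

    def _write(self):
        # ... dump self._data as JSON to self._index_path ...
        # then regenerate the token so readers know the DB changed
        with open(self._verifier_path, 'w') as f:
            f.write(str(uuid.uuid4()))

    def _read(self):
        token = ''
        if os.path.exists(self._verifier_path):
            with open(self._verifier_path) as f:
                token = f.read()
        if token and token == self._last_seen_verifier:
            return  # unchanged since our last read: skip re-parsing
        self._last_seen_verifier = token
        # ... parse the JSON at self._index_path into self._data ...
```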
Removed the code that was converting the old index.yaml format into
index.json. Since the change happened in #2189 it should be
considered safe to drop this (untested) code.
If a user invoked "spack env activate example-henv", Spack would
mistakenly interpret the "-h" from "example-henv" as the "-h" option.
This commit allows users to create and activate environments with
"-h" in the name.
This issue existed for bash shell support as well as csh support, and
this commit addresses both, along with some other unrelated csh
support issues.
* only override spec prefix for non-external packages
* add test that environment shell modifications respect explicitly-specified prefixes for external packages
* add clarifying comment
spack.util.environment_after_sourcing_files compares the local
environment against a shell environment after having sourced a
file; but this ends up including the default shell profile and
rc, which might differ from the local environment.
To change this, compare against the default shell environment,
expressed here as 'source /dev/null'.
According to my nightly CI/CD tests, x.org is another large provider
of software in common build chains that is often down.
Added a hand-selected amount of mirrors that is well up-to-sync.
Tested with `util-macros` that has a quite "recent" patch release.
Other packages to follow in an individual PR.
Makes the following changes:
* (Fixes#15620) tty configuration was failing when stdout was
redirected. The implementation now creates a pseudo terminal for
stdin and checks stdout properly, so redirections of stdin/out/err
should be handled now.
* Handles terminal configuration when the Spack process moves between
the foreground and background (possibly multiple times) during a
build.
* Spack adjusts terminal settings to allow users to enable/disable
build process output to the terminal using a "v" toggle; abnormal
exit cases (like CTRL-C) could leave the terminal in an unusable
state. This is addressed here with a special-case handler which
restores terminal settings.
Significantly extend testing of process output logger:
* New PseudoShell object for setting up a master and child process
and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
object
* Added `uniq` function which takes a list of elements and replaces
any consecutive sequence of duplicate elements with a single
instance (e.g. "112211" -> "121")
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* MAINT: Charliecloud OSX error
* raise an appropriate error when attempting to build
Charliecloud on Mac OSX, since it will otherwise fail
with a more confusing configure stage link check failure
* Update var/spack/repos/builtin/packages/charliecloud/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* MAINT: PR 16049 revision
* remove an unused import
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Moved link to the right place in the docs
* Fixed a few minor issues in extensions docs
Fixed a typo, added a subsubsection for better
navigation, reworded "modules in Python" as
"Python packages"
* pfft: fix to handle 'precision' variant in fftw
pfft had been checking for +double, etc. in fftw spec, which no longer
are present (replaced by Multivalued variant precision).
* pfft: fix to handle 'precision' variant in fftw
pfft had been checking for +double, etc. in fftw spec, which no
longer are present (replaced by Multivalued variant precision).
(amended to use more idiomatic checks as suggested by @alalazo)
sourceware.org is often quite overrun and times out or results in
certificate errors.
Since libffi, bzip2, elfutils, etc. are quite fundamental in
build chains, let's add some official mirrors.
libffi, bzip2, elfutils, lvm2, valgrind: add mirrors
* Patch Mathematica
The Mathematica installer moves all files and directories from the installation directory to a backup one. The problem is that it also moves .spack to this backup location and, once it's done, does not move .spack back to where it was.
My patch copies .spack to /tmp and then moves it back right before the install call exits.
* Make lint happy
* Use Spack native copy()
As suggested in peer-review let's:
- Copy .spack to stage directory so I don't have to use random
- Use Spack native copy() to do these operations
* Use join_path to create paths
As per peer-review suggestion:
- Use join_path to create paths
- Use copy_tree since we're copying a directory that could have sub-directories
* Update package.py to include py-notebook 6.0.3 and sha
* Update package.py
* [py-notebook] updated py-tornado version requirements
* [py-notebook] reworked and reordered for readability
* [py-notebook] updated version requirement for py-jupyter-client
* [py-notebook] updated version requirements for py-jupyter-core
Co-authored-by: ehdeec <ehdeec@rit.edu>
* new package: BART
This PR adds the BART (Berkeley Advanced Reconstruction Toolset)
package.
Despite the presence of CMake files, this package builds with a
Makefile. It looks like the project is moving away from cmake. The patch
for MKL has been committed upstream so should only be necessary for this
version of BART. The Makefile patch is meant for working with Spack and
would not be useful upstream. The bart scripts are still set up to use
bart with the subcommands being individual binaries. This patches them
to use the single binary with built-in subcommands and assumes that
Spack is providing the TOOLBOX environment variable and setting PATH.
* Update var/spack/repos/builtin/packages/bart/package.py
Yes, '==' makes more sense for a single string.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* The python dependencies are run time only.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New patch release SLEPc 3.13.1
* Update var/spack/repos/builtin/packages/slepc/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The performance improvements done in #14693 were leaving the DB in an inconsistent state when specs were removed from it. This PR updates the DB's internal state whenever the DB is written to a file.
Note that we still cannot properly enumerate installed dependents, so there is a TODO in this code. Fixing that will require the dependents dictionaries in specs to be re-keyed (either by hash, or not keyed at all -- a list would do). See #11983 for details.
* new package: py-youtube-dl + fixes for dependencies
This PR adds the py-youtube-dl program. In addition, there are a couple
of dependency packages that needed to be updated.
* ffmpeg
This is needed by py-youtube-dl. However, the spack ffmpeg recipe does
not include a lot of options, specifically, a dependency on openssl for
working with the https protocol.
- Added updated version.
- Added variants for the different licensing options.
- Added "meta" variants for X and drawtext. These turn on/off several
options.
- Set variants and dependencies for many options. The defaults are based
on the configuration settings in ffmpeg.
- Set dependencies that were missing or that will likely get pulled in
from the system.
* libxml2
The ffmpeg+libxml2 variant initially failed to build. The issue is that
libxml2 sets the headers property to
include_dir = self.spec.prefix.include.libxml2
The ffmpeg configure looks for prefix.include and fills in the rest.
This could probably be patched in ffmpeg but the headers property in the
libxml2 recipe is not consistent with the environment module or the
pkgconfig file, both of which set the headers path to prefix.include.
This PR sets the libxml2 headers property to
include_dir = self.spec.prefix.include
A spot check of a few libxml2 dependents did not reveal any problems
with this change; a minimal sketch of the resulting property follows this message.
* Comment out libxml2 dependency in ffmpeg
The header property issue of the spack libxml2 package will need to be
resolved in another PR before libxml2 can be enabled in ffmpeg.
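For illustration, a minimal, hypothetical sketch of what the adjusted libxml2 headers property could look like (the real recipe may differ; `find_headers` is Spack's header-collection helper):
```
# Hypothetical sketch only -- not the actual libxml2 recipe.
from spack import *


class Libxml2(AutotoolsPackage):
    """Sketch of a headers property that reports prefix.include instead of
    prefix.include.libxml2, matching the module and pkgconfig files."""

    @property
    def headers(self):
        include_dir = self.spec.prefix.include
        # Collect every header under prefix.include so dependents that
        # probe prefix.include (like ffmpeg's configure) keep working.
        return find_headers('*', root=include_dir, recursive=True)
```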
* new package: openmm
* dependency adjustments
* 1. modify dependencies
2. openmm dynamically compiles cuda kernels at runtime;
attempt to set up an environment that will work.
* Update var/spack/repos/builtin/packages/openmm/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This adds a boolean 'libtirpc' variant to the hdf package.
The default is false, which reproduces the previous behavior: rely either
on system xdr headers/library, or have hdf use its builtin xdr
lib/headers (which are 32-bit only).
If true, a dependency is added on 'libtirpc', and the LIBS and
CPPFLAGS are updated in the configure command to find the libtirpc
library and xdr.h header.
This is needed because RHEL8 (and presumably other distros, now or in the
future) removed xdr.h from the glib-headers package, which was breaking
the previous behavior of using system xdr headers.
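For illustration, a rough sketch of how such a variant could be wired up, assuming an AutotoolsPackage-style recipe; the flag values are assumptions based on the description above, not the actual hdf package:
```
# Rough sketch only -- the real hdf recipe may differ in detail.
from spack import *


class Hdf(AutotoolsPackage):
    """Sketch: optional libtirpc support for the hdf package."""

    variant('libtirpc', default=False,
            description='Use libtirpc instead of system/builtin xdr')

    depends_on('libtirpc', when='+libtirpc')

    def configure_args(self):
        args = []
        if '+libtirpc' in self.spec:
            tirpc = self.spec['libtirpc'].prefix
            # xdr.h typically lives under include/tirpc in libtirpc installs.
            args.append('CPPFLAGS=-I{0}'.format(tirpc.include.tirpc))
            args.append('LIBS=-ltirpc')
        return args
```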
* opencv: assorted fixes
1. depends on blas when +lapack
2. set cuda nvcc flags for cuda_arch
3. let cuda/contrib builds work
4. depends on hdf5 when cuda/contrib
5. depends on ant when +java
6. allow protobuf version to be different
7. let opencv recompile its protoc files.
* ant is a build-time dependency
* register +cuda~contrib as impossible (a condensed sketch of these directives follows below).
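A condensed, hypothetical sketch of the directives implied above (conditional dependencies, the conflict, and cuda_arch-driven flags); the CMake variable name is an assumption, not necessarily what the real opencv package uses:
```
# Condensed, hypothetical sketch -- not the full opencv recipe.
from spack import *


class Opencv(CMakePackage, CudaPackage):
    """Sketch of the conditional dependencies and conflicts described above."""

    variant('lapack', default=False, description='Build with LAPACK support')
    variant('contrib', default=False, description='Build contrib modules')
    variant('java', default=False, description='Build Java bindings')

    depends_on('blas', when='+lapack')
    depends_on('lapack', when='+lapack')
    depends_on('hdf5', when='+contrib+cuda')
    depends_on('ant', when='+java', type='build')

    # CUDA without the contrib modules is not a supported combination.
    conflicts('+cuda', when='~contrib')

    def cmake_args(self):
        args = []
        cuda_arch = self.spec.variants['cuda_arch'].value
        if '+cuda' in self.spec and cuda_arch[0] != 'none':
            # Hypothetical flag name; opencv's actual CMake variable may differ.
            args.append('-DCUDA_ARCH_BIN={0}'.format(' '.join(cuda_arch)))
        return args
```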
The old api is found in version 0.3.0 which uses a different release
name, so the url function was updated to properly find the older
releases. Also, this removes the boost constraint on the 0.3.0 version
which does not need it.
* meson: Add 0.54.0
This change also improves the rpath patch we are using. Instead of never
removing the rpath, we now only keep it if running within Spack to
guarantee consistent behavior of Meson.
* meson: Make ninja a hard dependency
* Update doxygen package
* Add new version releases 1.8.16 and 1.8.17
* Add mscgen as an optional dependency.
* Update doxygen package.py
Fix typo in mscgen dependency specification
* Remove whitespace for flake8
* primer3: move to github, add 2.5.0, fix 2.3.7
- The Primer3 project moved to GitHub.
- update the URL
- compare the tarballs 2.3.7 from Sourceforge and github, no
significant differences (e.g. the Sourceforge tarball contained a
couple of "tmp" files).
- update the signature for the 2.3.7 tarball.
- @2.3.7 doesn't build with gcc@8.4.0; there's a dubious pointer/int
comparison that causes an error. It was fixed upstream in newer
versions, so apply a simple patch to this version so that it remains
buildable with newer compilers. See:
- https://github.com/primer3-org/primer3/issues/2
- https://github.com/primer3-org/primer3/issues/3
- Add info for @2.5.0, which builds cleanly.
* Flake8 cleanup
* [gtk-doc] created template
* [gtk-doc] using custom url to standardize on dotted version
* [gtk-doc] added description and homepage
* [gtk-doc] added dependencies and added pdf variant
* [gtk-doc] commented out pdf variant
* [gtk-doc] cleaned up leftover fixmes
* [gtk-doc] flake8
* [gtk-doc] readded url
* [gtk-doc] python packages are build and run dependencies
* py-torch: Fix v1.4.0 by adding v1.4.1
version/tag hack so that 1.4.0 is still workable.
see pytorch/pytorch#35149
* delete/disable fbgemm support in 1.4.0
* moved conflict
* Skip collection of compiler link paths if compiler does not define a verbose flag
* modules config bug: allow user to configure a compiler without an explicit entry for loaded modules
Newer versions of glib require Meson, so this PR adds support for that
using a hybrid approach. glib@5.28: will be built using Meson, older
versions still make use of Autotools.
The most recent change to the openjdk package set expand=False for all versions
of the package. This means only the unexpanded archive will be installed, which is not correct.
* Update dependencies and support variant for Fortran Intermediate Representation.
* Add Cmake flags that toggle Fortran Intermediate Representation on/off. Exclude Flang tests for now.
* f18+fir variant needs next release of llvm or master.
* Only build tests if you pass --test to spack install
* New package: py-tensorboard
* some basic dependencies based on requirements.txt
remove the older version that doesn't build
* requested changes
* add additional dependencies
* more dependency changes
* py-onnx: only use py-typing if python < 3.5
avoids a 'Callable has no attribute __abc_registry' error. See
onnx/onnx#2199 and python/typing#573
* add version 1.6.0 while we're here.
Add new version 4.5.0 and update the quarantine variable list to also
include LD_PRELOAD in addition to LD_LIBRARY_PATH (to avoid side effects
when the former depends on the latter).
- Add info for version 0.68.3
- Add variant to package that enables the "extended" features. It
works with both of the existing versions (not sure how far back it
goes, but confirmed that it's valid for 0.53).
Tested on OS X with go@1.14.1.
* Add ACTS v0.21
* Update var/spack/repos/builtin/packages/acts-core/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* slepc: updates for @devel
- +arpack now works with int64
- +blopex add conflict with int64
- switch to using --with-arpack-lib [from --with-arpack-lib] with current slepc
- use updated blopex with current slepc
* slepc: conflict with blopex should be in all versions
* slepc: add new downloads
* slepc: add whitespace around operator
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
* New package(s): py-pydeps and py-stdlib-list
* requested changes
* Update var/spack/repos/builtin/packages/py-stdlib-list/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cryptsetup: restrict the version of automake if @2.2.1
fixes#15706
* Update var/spack/repos/builtin/packages/cryptsetup/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add capability for detecting build number for Arm compilers
* Fixing flake8 errors and updating the test_arm_version_detection function for more detailed Arm compiler version detection
* Ran flake8 locally and corrected errors
* Altering the Arm compiler version check to remove the else clause and be more consistent with other compiler version checks. Added test cases so both the 'if' and 'else' branches of the Arm compiler version check are covered
Co-authored-by: EC2 Default User <ec2-user@ip-172-31-7-135.us-east-2.compute.internal>
* Fixed building coreutils on Darwin
* Bump nano version to 4.9
* Coreutils: Add program prefix g so we don't conflict with Apple utilities
* Fix indentation
* Make format more spack like
* Removed unnecessary changes
* Merge branch 'develop' of github.com:DiegoMagdaleno/spack into develop
Fix linking libgit2 on Darwin
* Revert "Merge pull request #3 from spack/develop"
This reverts commit 58dbbdb82b, reversing
changes made to dd7a413f48.
* Revert "Revert "Merge pull request #3 from spack/develop""
This reverts commit f956aa7b13.
* Revert "Merge branch 'develop' of github.com:DiegoMagdaleno/spack into develop"
This reverts commit 50321f7986.
spack.util.environment_after_sourcing_files compares the local
environment against a shell environment after having sourced a
file; but this ends up including the default shell profile and
rc, which might differ from the local environment.
To change this, compare against the default shell environment,
expressed here as 'source /dev/null'.
* only override spec prefix for non-external packages
* add test that environment shell modifications respect explicitly-specified prefixes for external packages
* add clarifying comment
* update version: intel packages daal, ipp, mkl-dnn, mkl, mpi, parallel-studio, pin, tbb; also makes the url parameter consistent and always uses single quotes.
* Fixes a typo in one of the sha256 checksums.
* Adds version entries for new versions of Intel packages.
Co-authored-by: Robert Mijakovic <robert.mijakovic@lrz.de>
* Patching unqlite to be able to build a shared library
* Correcting a whitespace for flake8 to pass
* added comment about PR on unqlite
* extra commit to force github to merge
* Update flit package to v2.1.0 and add dependencies
* flit: comment out bash dependency
The host system should have bash available and compiling bash through
spack failed for me. I'm not sure if binutils and coreutils should
be listed as dependencies as well.
* Add new version of py-pyelftools
* py-pyelftools: add py-setuptools as a build dependency
* Address review comments
HDF5 1.12 broke backward compatibility, so we're preferring version 1.10
for now. Packages that need the new API should specify:
depends_on("hdf5@1.12:")
to be explicit. We can eventually change the preference, but at the
moment most libraries have not updated to use the new HDF5.
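As a hedged illustration, a preference like this is usually expressed with `preferred=True` on the version directive, while packages needing the new API constrain their dependency; versions and checksums below are placeholders:
```
# Illustration only -- versions and checksums are placeholders.
from spack import *


class Hdf5(AutotoolsPackage):
    """Sketch: keep 1.10 preferred until most dependents support 1.12."""

    version('1.12.0', sha256='<placeholder>')
    version('1.10.6', sha256='<placeholder>', preferred=True)


class SomeDependent(CMakePackage):
    """Sketch of a package that needs the new HDF5 API."""

    depends_on('hdf5@1.12:')
```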
* Added v3 of Laghos
Added v3 of Laghos as per
https://github.com/CEED/Laghos/blob/v3.0/README.md
* Update var/spack/repos/builtin/packages/laghos/package.py
Changed develop->master as per PR
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Made Metis Dependency Explicit
Added explicit metis dependency
* Folded @develop Laghos Deps into @3.0:
Theoretically there will be a difference between develop and 3.0: in the
future but currently there is not
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add myself as a maintainer
* This was a regression that occurred in a previous PR. Flang has been excised from LLVM for now until f18 is merged upstream.
* Libraries only needed when a GPU backend is present.
* Add version 18-08-9-1
* Add variant to allow setting the sysconfdir: See below
About sysconfdir:
slurm has a server and a client.
To use the correct communication channel, the client needs
to be able to read the correct config. This config is in
PREFIX/etc.
Let's assume one has the server part installed as a system
package. This generally is a good idea, so that the server
gets started during boot. This means, that the config is
in /etc/slurm.
If one now wants to use the client part (library!) via
spack, one has a problem: spack's slurm looks in
SPACK-PACKAGE-PREFIX/etc for the config.
There needs to be a way to let the spack installed package
use the system's config.
So add a variant to override the path during build:
sysconfdir=/etc/slurm.
This is much like what happened in #15307 for munge.
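A minimal sketch, assuming an AutotoolsPackage-style slurm recipe, of how such an override might look; the variant name and configure flag follow the description above:
```
# Minimal sketch, assuming an AutotoolsPackage-style slurm recipe.
from spack import *


class Slurm(AutotoolsPackage):
    """Sketch: allow pointing the client library at the system config."""

    variant('sysconfdir', default='PREFIX/etc', values=any,
            description='Set the sysconfdir (e.g. /etc/slurm) at build time')

    def configure_args(self):
        args = []
        sysconfdir = self.spec.variants['sysconfdir'].value
        if sysconfdir != 'PREFIX/etc':
            # Point the client at the system-installed slurm.conf.
            args.append('--sysconfdir={0}'.format(sysconfdir))
        return args
```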
If a user invoked "spack env activate example-henv", Spack would
mistakenly interpret the "-h" from "example-henv" as the "-h" option.
This commit allows users to create and activate environments with
"-h" in the name.
This issue existed for bash shell support as well as csh support, and
this commit addresses both, along with some other unrelated csh
support issues.
Currently, to force Spack to use an external MPI, you have to specify `buildable: False`
for every MPI provider in Spack in your packages.yaml file. This is both tedious and
fragile, as new MPI providers can be added and break your workflow when you do a
git pull.
This PR allows you to specify an entire virtual dependency as non-buildable, and
specify particular implementations to be built:
```
packages:
all:
providers:
mpi: [mpich]
mpi:
buildable: false
paths:
mpich@3.2 %gcc@7.3.0: /usr/packages/mpich-3.2-gcc-7.3.0
```
will force all Spack builds to use the specified `mpich` install.
* geant4: new version 10.6 plus simplifications
Add new 10.6.0 release, migrating download of source to use Geant4's
public release repo on CERN GitLab. Change versioning scheme to use
clearer and standard semantic scheme.
Update geant4-data and g4XXX data packages with new versions. Migrate
geant4-data as a BundlePackage of the g4XXX packages, installing links
to each under a single directory under share for geant4-data. Ensure
each g4XXX package exports the environment variable pointing to its
location expected by Geant4.
Remove "data" variant from Geant4 package and always use geant4-data.
Simplify cxxstd variant transport to dependencies.
* g4<DATA>: Use self to resolve correct prefix
* geant4, data: Fix flake8 errors
* g4photonevaporation: flake8 fix
* geant4: vecgeom version depends_on
Geant4 major.minor versions have specific dependencies on vecgeom
versions. Add missing vecgeom version for geant4 10.5, and match
version requirements for vecgeom in geant4 depends_on.
* geant4: c++17 patch specific for 10.4.3
* geant4: simplify geant4-data setup
* geant4: Use new define_from_variant function
* geant4: fix flake8 errors
* helics: add new package
* Remove FIXME boilerplate
* Use open @master: version range for git dependency and remove mpi fix branch version
* Add blank line after spack import
* py-onnx: depends on cmake >= 3.1
* Update var/spack/repos/builtin/packages/py-onnx/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added option to disable shared Lua library
Added option to disable generation of shared object library for lua to
avoid build issues on static only platforms
* Fixed Flake8 Issue with Lua Spackage
Fixed indentation issue with lua spackage
The current implementation of `spack-python` will leave an extra shell
around while it runs. That shell should really replace itself with
spack.
- [x] add exec to spack-python script
* Add initial attempt at intel-mpi-benchmarks package
* Add more checksummed versions
* Changes to how makefile is handled
* First working install version. Needs tuning to support building specific benchmarks
* Add variant for building specific benchmarks rather than all of them
* Minor syntax change
* New package: gdrcopy
provides the userspace libraries for gdrcopy.
* Update var/spack/repos/builtin/packages/gdrcopy/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* XIOS: add new versions
Patch has been removed because it was not applied to any previously
existing versions and it actually breaks the new versions added by this
PR.
* Sort versions from newest to oldest
This allows the llvm build to support:
* clang cuda
* libomptarget for:
* current host
* cuda
* bitcode compilation of libomptarget device runtime for inlining by
bootstrapping libomptarget
* split dwarf information support as an option for debug builds; if you need a
debug build, for the love of all that's good in the universe use this flag
* adds the dependencies necessary for shared library builds and for libomp and
libomptarget to build correctly
* new version of z3 to make it sufficient to build recent llvm
The actual change is much smaller than the diff; this is because it's been formatted with black. I realize this kinda sucks right now, but I'm hoping it will make future updates here less painful.
The gcc package.py includes patches for a sanitizer-related bug that look
like they've been fixed in gcc 8.4.0; applying them there caused `spack install` to fail.
This PR excludes the patches for gcc >= 8.4.0 and < 9.0.0.
* py-statsmodels: update to 0.102 and fix dependencies
* Update var/spack/repos/builtin/packages/py-statsmodels/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update URL for new py2neo versions
* Use pypi for py-py2neo
* Add version 4.3.0
* Update py2neo dependencies
* Apply suggestions from code review
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* NETCDF: Remove maxdims maxvars variant
I'm not sure of the correct protocol to do this, so decided to make a stab and hopefully it works or I'm told the correct way...
The `maxdims` and `maxvars` variants for the NetCDF package were, to the best of my knowledge, only ever used for the Exodus library in the SEACAS package. In versions of NetCDF prior to 4.4.0, Exodus required that the `NC_MAX_DIMS` and `NC_MAX_VARS` be increased over the default values. This requirement was removed in 4.4.0 and later.
I do not know of any way to make a variant depend on the version, and since the `maxdims` and `maxvars` variants are integer values and not boolean, every build of NetCDF will have these variants. Typically `maxdims=1024 maxvars=8192`, and the build will patch the `netcdf.h` include file for every build even though it is (almost) never needed.
The SEACAS package has a NetCDF version requirement of >4.6.2, so it no longer specifies the `maxdims` or `maxvars` variant and I could find no other package in spack that uses this variant either, so removal should not break anything *in* spack. However, there is no guarantee that some other external package doesn't use the variant, so I'm not sure of the correct way to remove the variant.
For this PR, I simply removed the variants. If there is a way to specify use of the variant tied to a specific version, I couldn't find it anywhere...
* Address review comment
Removed `is_integral` and `import numbers` since `is_integral` was the only place it was used.
* Add blank line for flake8
* magma now extends CudaPackage class, taking care of the gcc conflicts
* enforce +cuda; thus cuda is dependency via CudaPackage class
* add conflict
* use cuda_arch to set GPU_TARGET build option
* get rid of unnecessary constraint
* flake8
* impose cuda version dependency found empirically
* add variant description
* add conflict
Co-authored-by: Sinan81 <Sinan81@github>
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
* build python bindings within qscintilla package via extend_path trick
* add todo
* reflect new setup also in py-pyqt4 package
* get rid of qscintilla dependency
* also tweak qgis for the new setup
* generalize the building of python bindings
* generalize building of python bindings to all qt versions
* add qsci_api variant
* add qsci_variant for pyqt4 package as well; add comment
* pyqt dependency should build with +qsci_api variant enabled
* fix bugs
* improve style
* reflect recent changes
* flake8
* improve style
* more flake8
* more flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
* Add some comments explaining the choice of flag_handler.
* Fix QMCPACK install method.
* Add support for ppconvert. This requires a custom build method.
* Fix QMCPACK setup_run_environment. Nexus should be properly supported now.
* Cleaner way to check for intel-mkl in spec.
* Remove build method and use build_targets property instead.
* Additional fixes for the install method. Effectively restoring the original install method.
* Add the missing backslash to fix directory names.
* Update var/spack/repos/builtin/packages/qmcpack/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qmcpack/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qmcpack/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Omit these conflicts on mkl variants for now; they will hopefully be supported with the new concretizer in a couple of months.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Intel moved the repository around.
GitHub changes the prefix inside the tar according to the
repository name.
So all sha256 checksums have changed!
I verified that the tar contents for 2019.4 did not change
except for the prefix.
* TensorFlow: Clean up/simplify the installation, make sure the headers are
installed so that horovod can find them successfully. Fix the 2.0.* builds.
* Backport of 837c8b6b upstream
"Remove contrib cloud bigtable and storage ops/kernels."
Allows 2.0.* releases to build with '--config=nogcp'
* comment regarding tensorflow issue #31187
Co-authored-by: Andrew W Elble <aweits@skl-a-00.rc.rit.edu>
## Summary
This PR updates and improves the Spack package for [UPC++](https://upcxx.lbl.gov).
I'm an LBL employee and developer on the UPC++ team, as well as the maintainer of this Spack package.
### Key Improvements:
* Adding new 2020.3.0 release and support for use of develop/master branches
- Our build infrastructure underwent a major change in this release, switching from a hand-rolled Python2 script to a bash-based autoconf work-alike.
- The new build system is NOT using autotools (nor does it support some of the more esoteric autoconf options), but the user interface for common builds is similar.
* Add explicit support for an MPI optional dependency
- New `mpi` variant enables use of the MPI-based spawner (most relevant on loosely coupled clusters), and the (unofficial) mpi-conduit backend
- This variant is OFF by default, since UPC++ works fine without MPI on many systems, increasing the likelihood that first-time Spack users get a working build without needing to correctly set up MPI
* Add support for post-install testing using the test support deployed in the new build infrastructure
* Fix or workaround a few bugs observed during testing
### Status
The new package has been validated with a variety of specs across over seven different systems, including: NERSC cori, ALCF Theta, OLCF Summit, an in-house Linux cluster, and macOS laptops (Mojave and Catalina).
Expose serial/parallel build (MPI), CUDA/OpenMP backends, Clang, and
Ascent bindings.
Interestingly, `warpx +ascent` currently leads to an infinite loop in
the Spack concretizer.
Removed provider_index's use of 'import from' statements and refactored a few routines into a further subclassing of _IndexBase for implementing user-defined bindings of provider specs.
Git-based conditions database for HEP and other experiments.
Use latest release version and current master to support Linux and
macOS. Add core known dependencies and conflicts related to C++17
support.
A cxxstd variant is added to help transitive dependencies and to allow
support for newer standards in the future.
* relocate: removed import from statements
* relocate: renamed *Exception to *Error
This aims at consistency in naming with both
the standard library (ValueError, AttributeError,
etc.) and other errors in 'spack.error'.
Improved existing docstrings
* relocate: simplified search function by un-nesting conditionals
The search function that searches for patchelf has been
refactored to remove deeply nested conditionals.
Extended docstring.
* relocate: removed a condition specific to unit tests
* relocate: added test for _patchelf
Our unit tests run many times. Any unit test which actually installs
a package (which involves fetching code on the internet) is a severe
bug because it runs an installation many times (i.e. re-downloading
the same package for each version of Python that we run unit tests
for).
This reverts commit 25893f1, which added tests that install real
packages.
If the Python used by Spack does not include Setuptools, then
'spack test' will fail because Spack's vendored pytest dependency
imports and uses Setuptools in some of its functions. It turns out
that Spack doesn't use the functionality those methods enable, so
this PR removes those functions and thereby allows 'spack test' to
run without Setuptools.
For any Spack test using Spack's YAML configuration, avoid using real
Spack configuration that has been cached by other tests and Spack
startup logic. Previously this was only done for tests using
'mutable_config' (i.e. those which expected to *change* the
configuration of Spack), but in fact all tests that read Spack config
should use it.
This was an issue when running tests within an environment, because
compiler configuration ends up being queried earlier, and the user's
real config "leaks" into the cache. Outside an environment, the cache
is never set until tests touch it, so we weren't seeing this issue.
`spack test` has a spurious '[+] ' in the output:
```
lib/spack/spack/test/install.py .........[+] ......
```
Output is properly suppressed:
```
lib/spack/spack/test/install.py ...............
```
Update WarpX for recent developments:
- add openPMD I/O (default: ON)
- remove electro-static solver option (now a runtime option)
- enable tiny profiler by default
- depend on new CXX std support in make scripts for C++14
- WarpX only supports 2D and 3D in cartesian dims
* davix: add cxxstd variant
Davix is written in C++, so add this variant to allow dependents to specify
this so a consistent ABI is used.
* davix: fix flake8 errors
Reading the database repeatedly can be quite slow. We need a way to speed
up Spack when it reads the DB multiple times, but the DB has not been
modified between reads (which is nearly all the time).
- [x] Add a file containing a unique uuid that is regenerated at database
write time. Use this uuid to suppress re-parsing the database
contents if we know a previous uuid and the uuid has not changed.
- [x] Fix mutable_database fixture so that it resets the last seen
verifier when it resets.
- [x] Enable not rereading the database immediately after a write. Make
the tests reset the last seen verifier in between tests that use the
database fixture.
- [x] make presence of uuid module optional
* Add gmsh v4.5.4 with new options
This adds OpenCASCADE as an alternative to the oce package.
A new variant 'privateapi' is added to enable the gmsh private API.
* Make oce conflict with opencascade in gmsh
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded.
This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
This makes sure that a package's fetch_options are used when fetching
new versions to checksum. This allows working around problems with
slow servers or those requiring a cookie to be set.
Bug: Spack hangs on some Cray machines
Reason: The TERM environment variable is necessary to run bash -lc "echo $CRAY_CPU_TARGET", but we run that command within env -i, which wipes the environment.
Fix: Manually forward the TERM environment variable to env -i /bin/bash -lc "echo $CRAY_CPU_TARGET"
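Not Spack's actual code, but a small stand-alone illustration of the fix, forwarding TERM into the otherwise-wiped environment created by `env -i`:
```
# Illustration only -- not Spack's actual implementation.
import os
import subprocess

# 'env -i' wipes the environment, so pass TERM through explicitly;
# without it, 'bash -lc' hangs on some Cray front ends.
cmd = ['env', '-i']
if 'TERM' in os.environ:
    cmd.append('TERM=' + os.environ['TERM'])
cmd += ['/bin/bash', '-lc', 'echo $CRAY_CPU_TARGET']

target = subprocess.check_output(cmd).decode().strip()
print(target)
```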
When trying to use an upstream Spack repository, as of f2aca86 Spack
was attempting to write to the upstream DB based on a new metadata
directory added in that commit. Upstream DBs are read-only, so this
should not occur.
This adds a check to prevent Spack from writing to the upstream DB
fixes#15449
Before this PR a call to pkg.url_for_version was modifying
class attributes, producing different results for subsequent
calls and an error when the list of urls was empty.
This recovers the old behavior of replace_prefix_bin, which was
modified to work with elf binaries by prepending os.sep to the new
prefix until its length matches the old prefix.
Testing the install StopIteration exception resulted in an attribute error:
AttributeError: 'StopIteration' object has no attribute 'message'
This PR adds a unit test and resolves that error.
The new build process, introduced in #13100 , relies on a spec's dependents in addition to their dependencies. Loading a spec from a yaml file was not initializing the dependents.
- [x] populate dependents when loading from yaml
The distributed build PR (#13100) did not check the install status of dependencies when using the `--only package` option, so it would refuse to install a package, claiming it had uninstalled dependencies whether or not that was the case.
- [x] add install status checks for the `--only package` case.
- [x] add initial set of tests
This change stores packages' configure arguments during build and makes
use of them while refreshing module files. This fixes problems such as in
#10716.
* Add version 6.20.{00,02}, don't yet mark it preferred
* It needs zstd
* It needs numpy (at least for 6.20.00:6.20.03)
* Reorder python dependencies a bit
* Add mlp variant, default False
Older versions always include mlp, so no conflicts there.
* Disable tmva, because it needs mlp
* tmva needs mlp, so add conflict (a brief sketch follows below)
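As referenced above, a hedged sketch of the variant/conflict wiring this implies (not the full root recipe):
```
# Hedged sketch of the variant/conflict wiring described above.
from spack import *


class Root(CMakePackage):
    """Sketch: TMVA needs the MLP library, so forbid +tmva without +mlp."""

    variant('mlp', default=False, description='Enable multi-layer perceptron support')
    variant('tmva', default=False, description='Build TMVA')

    conflicts('+tmva', when='~mlp', msg='TMVA requires the mlp variant')
```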
* Add sources and resources for each version of Rust
* install bootstrapping compiler into stage
* Add libgit2
* Install bootstrapping compiler correctly
* implement full rust bootstrap
* Remove support for Rust pre-1.14
Also add lots of comments
* Support only Rust 1.17 and newer
* Remove < 1.23 versions of Rust
* Change the layout of rust_releases for maintainability
* Remove LLVM variant
* Address flake8 issues
* Make libgit2 curl variant default False, conflict 0.28 and newer
* Remove binutils dependency
* Add ARM64 while we're at it
* flake8
* use the 'python' routine rather than relying on the correct python to be picked up
Bug: Spack hangs on some Cray machines
Reason: The TERM environment variable is necessary to run bash -lc "echo $CRAY_CPU_TARGET", but we run that command within env -i, which wipes the environment.
Fix: Manually forward the TERM environment variable to env -i /bin/bash -lc "echo $CRAY_CPU_TARGET"
* petsc: add checksum for 3.12.4
* petsc: constrain hdf5 to <= 1.10.x
Current petsc will error when being build with hdf5 1.12, so this ensures that
a compatible hdf5 will be used. Fix suggested by @balay.
- [x] move some logic for handling virtual packages from the `spack
dependencies` command into `spack.package.possible_dependencies()`
- [x] rework possible dependencies tests so that expected and actual
output are on the left/right respectively
Hpctoolkit master was recently updated to test for and allow old
binutils <= 2.33.1 and/or new binutils 2.34. Older hpctoolkit up to
2020.03.01 will forever require :2.33.1.
Adjust the libunwind dependency for safety with the current
concretizer.
Fabtests provides runtime analysis tools and examples of libfabric.
As with other projects that are tightly version-bound, e.g.
`py-adios` and `adios`, the fact that releases stem from the same
repo does not imply they should be the same package.
Remove resources, which complicate the libfabric build, and update
the fabtests package accordingly.
When trying to use an upstream Spack repository, as of f2aca86 Spack
was attempting to write to the upstream DB based on a new metadata
directory added in that commit. Upstream DBs are read-only, so this
should not occur.
This adds a check to prevent Spack from writing to the upstream DB
* Trilinos: Add more variants
+ Provide three new variants to allow building trilinos without netcdf, matio,
or glm.
+ No change to defaults.
* Fix style issue.
* py-tuiview: Source has moved to github
* py-tuiview: Explicitly require +python on gdal dependency
* py-tuiview: Versions up to 1.1.99 are qt4 only
* py-tuiview: Add version 1.2.6, which is qt5 only
* Explicit version range on gdal dependency
* adol-c:updating sources location
* Update var/spack/repos/builtin/packages/adol-c/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* try extend path to solve PyQt5.sip not found issue
* disable private sip installation in sippackage class
* undo manual PyQt5 dir creation in py-sip site-packages dir
* fix typo
* fix typo
* also apply fix to PyQt4
* tidy up
* flake8 and tidy up
* tidy and undo hardcoding of python_include_dir
* replace hardcoded python inc dir
* fix minor issues
* rethink include dir variable name
* improve style
* add new versions
* implement new sip setup to qsci installation
* set sip-incdir correctly for the new setup
* setup extend_path thing before qsci python bindings
* take care of conflict
* flake8
* also extend for PyQt4
* improve style
* improve style
* SipPackage build sys should depend on py-sip
* consolidate extend_path fixes into SipPackage
* fix typo
* fix bugs
* flake8
* revert sip doc to pre-resource setup
* import os module
* flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
* CGNS: Version update
The CGNS library has several new versions. There was a format-changing change in 3.4.0 which was removed for 3.4.1. The change was then added again and released with a change to the major version (4.0.0). Note that 4.0.0 should be close to the functionality of 3.4.0.
* CGNS: Add shared variant
Added the `shared` variant to make the CMake build correctly pick up the RPATH settings.
* Added a package for the DiHydrogen distributed linear algebra library.
* Updated recipe to provide cuda architecture constraints.
* Addressed reviewer comments
* Fixed flake 8
* Previous qt changes broke the openspeedshop gui build. This puts back the changes that caused the breakage.
* Update the qt version to be more robust.
Co-authored-by: Galarowicz, James <jgalarowicz@newmexicoconsortium.org>
* Add patches when building with NAG
* Make libxml2 support optional. Also include conflict for
@:3.2~hydra+libxml2 since @:3.2~hydra does not require libxml2
support
* Add '--disable-silent-rules' to get more verbose output during
the build
* Implemented working file filtering to replace spack compiler wrapper with real compiler.
* Using string=True instead of re.escape. Using self.prefix.lib instead of appending /lib.
Co-authored-by: Wyatt Spear <wspear@cs.uoregon.edu>
* Add new vecgeom versions, add cuda support, automate target options
* Add ROOT, GDML, and external VecCore support to VecGeom
* Address reviewer comments
* Update vecgeom for CUDA
* Update versions
fixes#11555
Every path in CPATH is equivalent to a -I path to the compiler,
while every path in *_INCLUDE_PATH is equivalent to -isystem.
The latter avoids the noise due to warnings coming from 3rd party
libraries that a project depends on.
Added INCLUDE env variable (Intel Fortran, .mod files)
Add a `define_from_variant` helper function to CMake-based Spack
packages to convert package variants into CMake arguments. For
example:
args.append('-DFOO=%s' % ('ON' if '+foo' in self.spec else 'OFF'))
can be replaced with:
args.append(self.define_from_variant('foo'))
The following conversions are handled automatically:
* Flag variants will be converted to CMake booleans
* Multivalued variants will be converted to semicolon-separated strings
* Other variant values are converted to CMake string arguments
This also adds a 'define' helper method to convert any variable to
a CMake argument. It has the same conversion rules as
'define_from_variant' (but operates directly on values rather than
requiring the user to supply the name of a package variant).
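For instance, a hedged sketch of a `cmake_args` method using both helpers; the package and variant names below are made up:
```
# Sketch only -- package and variant names here are made up.
from spack import *


class Foo(CMakePackage):
    """Sketch: converting variants to CMake arguments with the new helpers."""

    variant('foo', default=False, description='Enable FOO support')
    variant('backends', default='serial', multi=True,
            values=('serial', 'openmp', 'cuda'),
            description='Backends to enable')

    def cmake_args(self):
        return [
            # Flag variant -> CMake boolean (-DENABLE_FOO:BOOL=ON/OFF)
            self.define_from_variant('ENABLE_FOO', 'foo'),
            # Multi-valued variant -> semicolon-separated string
            self.define_from_variant('FOO_BACKENDS', 'backends'),
            # 'define' applies the same conversions to arbitrary values
            self.define('BUILD_TESTING', self.run_tests),
        ]
```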
* Buildcache: Install into non-default directory layouts
Store a dictionary mapping of original dependency prefixes to dependency hashes
Use the loaded spec to grab the new dependency prefixes in the new directory layout.
Map the original dependency prefixes to the new dependency prefixes using the dependency hashes.
Use the dependency prefixes map to replace original rpaths with new rpaths preserving the order.
For mach-o binaries, use the dependency prefixes map to replace the dependency library entries for libraries and executables and to replace the library id for libraries.
On Linux, patchelf is used to replace the rpaths of elf binaries.
On macOS, install_name_tool is used to replace the rpaths and dependency libraries of mach-o binaries and the id of mach-o libraries.
On Linux, macholib is used to replace the dependency libraries of mach-o binaries and the id of mach-o libraries.
Binary text with padding replacement is attempted for all binaries for the following paths:
spack layout root
spack prefix
sbang script location
dependency prefixes
package prefix
Text replacement is attempted for all text files using the paths above.
Symbolic links to the absolute path of the package install prefix are replaced, all others produce warnings.
PR #15212 added a new connect_timeout option that can be overridden
using fetch_options but had to specified per-version. This adds a new
per-package variable that can be used to override fetch_options for
all versions in the package. This includes connect_timeout as well
as 'cookie' (e.g. for the jdk package).
Packages can combine package-level fetch_options with per-version
fetch_options, in which case the version fetch_options completely
override the package-level fetch_options.
This commit includes tests for the added behavior.
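A hedged sketch of how the two levels can be combined in a package; the option keys follow the description above, and the URL, versions, and checksums are placeholders:
```
# Sketch only -- URL, versions, and checksums are placeholders; the
# option keys ('connect_timeout', 'cookie') follow the description above.
from spack import *


class SlowServerPkg(Package):
    """Sketch: package-level fetch_options with a per-version override."""

    homepage = 'https://example.com'
    url = 'https://example.com/pkg-1.0.tar.gz'

    # Applies to every version unless that version defines its own.
    fetch_options = {'connect_timeout': 60}

    version('1.1', sha256='<placeholder>')
    # Per-version options completely replace the package-level ones.
    version('1.0', sha256='<placeholder>',
            fetch_options={'connect_timeout': 120, 'cookie': 'accept-license'})
```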
* Change py-merlinwf to py-merlin to match PyPi.
Change py-merlin to py-merlin-info.
Move to py-merlin_info.
Add py-merlin-info back in.
* Update dependent packages for the new merlin name.
* Remove non-working pyre and the associated packages, exchanger,
py-pythia and py-merlin-info from citcoms.
* Remove blank line.
fixes#15449
Before this PR a call to pkg.url_for_version was modifying
class attributes, producing different results for subsequent
calls and an error when the list of urls was empty.
* scr: add develop, legacy branches; version 2.0.0
squash! scr: add develop and legacy versions
* filo: package for SCR component
* spath: package for SCR component
* axl: update for versions 0.3 and 0.2
* scr: build with components
* spath: structure of +mpi if/else
* 👌 capitalization of ecp-veloc
* scr: branches are always greater than any version
* Parallel-netcdf: update package.
* Add a temporary patch for version 'develop'.
* Rename version 'develop' to 'master'.
* Drop the patch for 'master'.
includes fixes for likwid-mpirun, better support for ARM and POWER,
other bugfixes.
For full support of ARM and POWER, #14183 has to be merged, too.
Added TomTheBear as maintainer. He is the current main developer of
LIKWID.
* new package: GunRock
* Update var/spack/repos/builtin/packages/gunrock/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* improve
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@github>
* add --skip-unstable-versions option to 'spack mirror create' which skips sources/resource for packages if their version is not stable (i.e. if they are the head of a git branch rather than a fixed commit)
* '--skip-unstable-versions' should skip all VCS sources/resources, not just those which are not cachable
* New package: bat
Add package for bat:
A cat(1) clone with wings.
* Update copyright date
* Embiggen comment re build env settings
Provide a bit more explanatory text about why setup_build_environment
needs to set LLVM_CONFIG_PATH and LIBCLANG_PATH.
Co-authored-by: George Hartzell <ghartzell@audentestx.com>
* eigen: updated url to point to gitlab
fixes#13890
Eigen migrated from bitbucket to gitlab
* eigen: simplified package (no dependencies other than stdlib)
* Added TODO list for future improvements
* libunwind: remove version 2018.10.12, add stable branch
Finish cleaning up the libunwind version numbers. The 2018.10.12
snapshot number didn't fit well with spack's ordering (my bad), and
1.4-rc1 is a near identical replacement.
Add a version for the 1.4-stable branch.
Add a variant for zlib compressed symbol tables (develop branch only).
Adjust packages caliper and hpctoolkit to adapt to the changes.
Add myself as maintainer.
* Flake
* Settle on renaming 'develop' to 'master' (to match the branch name)
and name the 'v1.4-stable' branch as '1.4-head'. 'stable' or
'1.4-stable' is a better name, but '1.4-head' (an infinity version)
sorts better.
* explicitly link against libtinfo
* Update to v15.9
* fixed indentation
* fixed url definition
* added url for current version again
* fixed indentation
* moving url_version to the bottom
* Add version 0.5.14
* Add variant to allow setting the localstatedir: See below
* Add bzip2 dependency
* Add myself to maintainers (I just think I can care for
this package)
About localstatedir:
munge has a server and a client.
They communicate via unix domain sockets.
This socket is in PREFIX/var.
This package provides the client, the server, and
development part (headers, libraries).
Let's assume one has the server part installed as a system
package. This generally is a good idea, so that the server
gets started during boot. This means, that the socket is in
the system's /var.
If one now wants to use the client part (library!) via
spack, one has a problem: spack's munge looks in
SPACK-PACKAGE-PREFIX/var for the socket.
There needs to be a way to let the spack installed package
use the system's socket.
So add a variant to override the path during build:
localstatedir=/var.
Allows spack.config InternalConfigScope and Configuration.set() to
handle keys with trailing ':' to indicate replacement vs merge
behavior with respect to lower priority scopes.
Lists may now be replaced rather than merged (this behavior was
previously only available for dictionaries).
This commit adds tests for the new behavior.
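A small schematic illustration (plain data, not Spack's actual config code) of the merge-versus-replace semantics selected by the trailing ':':
```
# Schematic illustration only -- plain data, not Spack's actual config code.
# In a higher-priority scope, a key ending in ':' replaces the value from
# lower-priority scopes instead of being merged with it.
lower_scope = {'config': {'templates': ['a', 'b']}}

merge_update = {'config': {'templates': ['c']}}     # 'templates'  -> merged with ['a', 'b']
replace_update = {'config': {'templates:': ['c']}}  # 'templates:' -> result is just ['c']
```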
* Add Checksum for 1.4.1.4 Release
Add checksum for 1.4.1.4 release. Mark myself as maintainer. List develop version.
* Update var/spack/repos/builtin/packages/libhio/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
It was reported that UnifyFS had a bug with the --enable-mpi-mount
config option, which corresponds to the auto-mount variant. This bug
was fixed in the UnifyFS dev branch but remains present in the
0.9.0 version.
This adds a patch to the unifyfs package to fix the auto-mount
variant when installing with version 0.9.0.
This also removes the openssl dependency as unifyfs does not directly
depend on it. This was said to be a non-explicit dependency in #15258.
However, if it is needed, it is likely a non-explicit dependency of
one of unifyfs's dependencies and should be added there.
Fixes: #15292
* lammps: add most recent stable release
* Update var/spack/repos/builtin/packages/lammps/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Add package for ripgrep:
ripgrep is a line-oriented search tool that recursively searches
your current directory for a regex pattern. ripgrep is similar to
other popular search tools like The Silver Searcher, ack and grep.
* [opensubdiv] created stub for opensubdiv
* [opensubdiv] added homepage and description
* [opensubdiv] removed boilerplate
* [opensubdiv] working on dependencies and variants
* [opensubdiv] fixed syntax error
* [opensubdiv] defined spec
* [opensubdiv] added dev version
* [opensubdiv] building on CudaPackage
* [opensubdiv] always build with open gl and depends_on('gl')
* [opensubdiv] applying cuda flags
* [opensubdiv] worked on doc variant
* [opensubdiv] added some x11 libraries
* [opensubdiv] depends on glfw
* [opensubdiv] locating glew
* [opensubdiv] added openmp variant
* [opensubdiv] flake8 fixing
* [opensubdiv] fixed develop version name
* [opensubdiv] fixed description to not need @
Testing the install StopIteration exception resulted in an attribute error:
AttributeError: 'StopIteration' object has no attribute 'message'
This PR adds a unit test and resolves that error.
* ADIOS2: Version `master`
Rename branch version to supported, real development branch `master`.
The old name is legacy Spack when alternative development branch
names were not yet supported.
* ADIOS: Simplify via spec Variable
use the already defined local variable `spec` to shorten
lines
This recovers the old behavior of replace_prefix_bin, which was
modified to work with elf binaries by prepending os.sep to the new
prefix until its length matches the old prefix.
Since 1.9.0 is broken for Darwin, which impacts many developers, and
the fix is still an RC, let's keep the previous release as the default.
This avoids disruption for OSX developers and CI.
Removed the code that was converting the old index.yaml format into
index.json. Since the change happened in #2189 it should be
considered safe to drop this (untested) code.
Spack's fflags are meant for both f77 and fc. Therefore, they must
be passed as FFLAGS and FCFLAGS to the configure scripts of
Autotools-based packages.
* @develop needs the full git repo to use "git describe"
properly
* If not specifying the cxxstd variant, let cmake use its
default
* Improve fmt dependencies: fairlogger < 1.6.2 does not
work with fmt >= 6.
* Small other stuff
Just `depends_on('ncurses')`; pkgconfig seems to cause the right thing
to happen if the package is built +termlib or ~termlib (tested both
ways on a CentOS 7 system).
Additional details in: https://github.com/spack/spack/issues/15281
Tested by building all of the releases of tmux on a CentOS 7 box.
The distributed build PR (#13100) did not check the install status of dependencies when using the `--only package` option, so it would refuse to install a package, claiming it had uninstalled dependencies whether or not that was the case.
- [x] add install status checks for the `--only package` case.
- [x] add initial set of tests
* add preliminary afqmc support in qmcpack
* afqmc updates
* fix spack typos
* edit AFQMC to only allow 3.7 or above
* added NCCL library support for AFQMC build
* add CMAKE args for BUILD_AFQMC_WITH_NCCL
* update for just AFQMC support. No AFQMC+GPU support
* remove nccl for afqmc
* flake8 whitespace fix
* Update flag_handler for 'netcdf-fortran'.
* Refactoring.
* Enable old versions of netcdf-fortran for NAG.
* Disable parallel 'make check' for versions before 4.5.0.
* Fix shared libraries built with NAG instead of conflicting it.
* Add 'skosukhin' as a maintainer of 'netcdf-fortran'.
This PR adds some fixes for the tcsh package.
- Adds new version
- adds list_url so fetching works for current and old tarballs
- sets ncurses dependency explicitly to `ncurses+termlib`
If `+termlib` is not set then it will link against the system libtinfo.
* Quantum-Espresso: qe-6.5 fails to detect MKL for FFT
qe-6.5 fails to detect MKL for FFT if BLAS_LIBS is set due to
an unfortunate upstream change in their autoconf/configure:
- qe-6.5/install/m4/x_ac_qe_blas.m4 only sets 'have_blas'
but no 'have_mkl' if BLAS_LIBS is set (which seems to be o.k.)
- however, qe-6.5/install/m4/x_ac_qe_fft.m4 in 6.5 unfortunately
relies on x_ac_qe_blas.m4 to detect MKL and set 'have_mkl'
- qe-5.4 up to 6.4.1 had a different logic and worked fine with
BLAS_LIBS being set
However, MKL is correctly picked up by qe-6.5 for BLAS and FFT if
MKLROOT is set (which SPACK does automatically for ^intel-mkl).
Thus, do not set BLAS_LIBS when compiling qe-6.5 with intel-mkl.
* replace all '^intel-mkl' by '^mkl' to match other packages which also provide MKL
e.g. intel-parallel-studio+mkl as mentioned by @adamjstewart in #15276
* Filter problematic flag from std_cmake_args
Including '-DCMAKE_VERBOSE_MAKEFILE:BOOL=ON' in the cmake args causes
bits of cmake's output to end up in the autoconf-generated configure
script. See https://github.com/spack/spack/issues/15274 and
* Don't build in parallel (sigh)
* Constrain hackery for newer versions...
... to newer versions.
* [WIP] new package: qgis
* add maintainer
* further improvements
* reflect improvements to qscintilla installation paths
* comment out qtkeychain dependency as the package is created
* uncomment qtkeychaing dependency, since this package is now created
* a comment on webkit
* specify versions of dependencies, add variant
* fix variant description
* fix proj dependency logic
* adjust conflicts and dependencies so that one can compile qgis@2 with qt4, python2.7
* minor improvement
* fix some build errors, improve dependency specs
* qsci python bindings will be build by py-pyqt
* cmake variable QSCINTILLA_LIBRARY should point to library itself not the parent folder
* turn grass off explicitly, fix typo, turn qspatialite off explicitly
* fix typo
* specify more cmake options that don't seem to be set properly, and use spack-provided pkg-config
* fix libzip
* fix build issue with sqlite variant, add runtime dependencies
* add more runtime python package dependencies
* reflect variant name change in sqlite
* add maintainer, correct typo
* add TODO's
* add more versions
* improve style
* add latest versions
* netcdf -> netcdf-c
* add variants as shown in cmake config
* add conflict: v3.8.1 won't build if qt@5.13:
* change preferred version to latest long term release, 3.10.3
* add a zillion of build options
* improve style
* add descriptions for variants
* remove an already implemented compilation tip
* add "when" statements for optional dependencies
* make flake8 happy
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* convert conflicts to depends_on
* undo str conversion for path objects
* fix flake8 E131
* fix flake8 E128, E124
* improve style
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Caliper depends on python3.
The package needs to be told where to find it.
* More flake8 formatting edits.
* Change explicit python3 to spec['python'].command.path
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Removing defunct import for flake8
* Flake8 trailing whitespace warning.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR adds 'http://' to the homepage setting of a few packages that do
not have it set. Not having that set can cause problems with some wiki
apps when embedding the homepage value into markdown syntax.
- bowtie2
- exuberant-ctags
- perl-want
- samtools
- add version info for v1.8.0
- v1.8.0 adds a new source file, `file.c`, which needs to be included
in our hardcoded list of objects to link.
I discovered this while demonstrating "how easy it is to add a new
package to Spack", only to scratch my head when it failed `spack
install` but worked when I ran `make` in the stage dir. Finally
looked at tree/package.py and it All Became Clear.
Perhaps someone should rewrite this to use MakefilePackage, but the
Makefile starts off with a bunch of twisty turny "uncomment these
lines to run on this platform", so it might not be worth it.
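A rough sketch of the pattern under stated assumptions: the recipe really does keep a hardcoded object list, but the object names other than `file.o` and the make arguments below are placeholders, not the actual tree/package.py:
```python
from spack import *


class Tree(Package):
    """Illustrative fragment only; homepage, url, and versions omitted."""

    def install(self, spec, prefix):
        # The Makefile links a fixed list of objects, so the recipe mirrors
        # it here; names other than file.o are placeholders.
        objs = ['tree.o', 'unix.o', 'html.o', 'xml.o', 'json.o']
        if spec.satisfies('@1.8.0:'):
            objs.append('file.o')  # new source file introduced in v1.8.0

        make('OBJS={0}'.format(' '.join(objs)), 'PREFIX={0}'.format(prefix))
        make('install', 'PREFIX={0}'.format(prefix))
```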
* Added package py-versioneer
* Update python version
* Added python2 to versioneer
* Added python2 to versioneer
* Update var/spack/repos/builtin/packages/py-versioneer/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Cleaning comments
* Removed temporarily @warner as a maintainer, waiting for answer
* Removed line
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* gromacs: add v2019.6
* Update var/spack/repos/builtin/packages/gromacs/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- Tell make about the source code path
- Install actual header files, not a wrapper with wrong paths
- Add a patch to prevent compiler warnings
- Improve description
* Fixups for jupyter
This PR fixes a few things for some jupyter-related packages (a sketch of the versioned dependency pattern follows the list).
py-ipython:
- make the python depends_on statements reflect needs of different
versions
- remove an unneeded conflicts directive
py-ipywidgets:
- add new version
- set version constraints for py-widgetsnbextension
py-jupyter-console
- add new version
- set python dependencies for versions as needed
- set version constraint for py-ipython
- set version constraints for py-prompt-toolkit
py-pyqt5
- build with py-sip
py-qtconsole
- add dependency on py-pyqt5
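The version-conditional dependency pattern these changes rely on looks roughly like the sketch below; the specific version ranges are placeholders, not the constraints from this PR:
```python
from spack import *


class PyJupyterConsole(PythonPackage):
    """Illustrative fragment only; pypi/url and version() directives omitted."""

    # Newer releases drop Python 2, so the interpreter requirement is
    # expressed per package version (ranges are placeholders).
    depends_on('python@2.7:2.8,3.3:', type=('build', 'run'), when='@:5')
    depends_on('python@3.5:', type=('build', 'run'), when='@6.0.0:')

    # Dependency constraints can likewise be tied to package versions.
    depends_on('py-ipython@5.0:', type=('build', 'run'))
    depends_on('py-prompt-toolkit@2.0.0:', type=('build', 'run'), when='@6.0.0:')
```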
* Update var/spack/repos/builtin/packages/py-jupyter-console/package.py
Tweak version ranges.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-jupyter-console/package.py
Tweak version range.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Make py-pyqt5 a run dependency
Also, make formatting more consistent.
* Fix site_packages_dir
Change reference of site_packages_dir to self.site_packages_dir. Oddly,
this did not show up as a problem until I regenerated the module.
* Restore py-pyqt5 to previous state
* Explicitly set path to site_packages_dir
This change prevents an error when regenerating the py-pyqt5 module
file.
* Fix flake8 errors
* Make sure prefix is in join_path (see the sketch below)
* Fix flake8 errors
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
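A hypothetical sketch of the fix pattern, not the actual py-pyqt5 recipe: build the site-packages path explicitly and make sure the prefix is part of the `join_path` call. The directory layout and class body are assumptions.
```python
from spack import *


class PyPyqt5Sketch(Package):
    """Illustrative fragment only."""

    extends('python')

    @property
    def site_packages_dir(self):
        # e.g. lib/python3.7/site-packages (layout is an assumption)
        py_version = self.spec['python'].version.up_to(2)
        return join_path('lib', 'python{0}'.format(py_version), 'site-packages')

    def install(self, spec, prefix):
        # The prefix must be part of the joined path, not tacked on later.
        target = join_path(prefix, self.site_packages_dir)
        mkdirp(target)
        # ... actual build/install steps omitted ...
```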
* New package: py-librosa
This PR adds the py-librosa package, along with new dependency packages
and some updates of existing dependency packages.
- new package: py-audioread
- new package: py-resampy
- new package: py-soundfile
- update package: py-numba
- update package: py-llvmlite
py-numba:
- add updated version
- adjust constraints
py-llvmlite:
- add updated version
- adjust constraints
- fix version specifications for llvm
- add environment function to set PIC
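A hypothetical sketch of the "environment function to set PIC"; the hardcoded flag values are for illustration and are not necessarily what the py-llvmlite recipe does:
```python
from spack import *


class PyLlvmlite(PythonPackage):
    """Illustrative fragment only."""

    def setup_build_environment(self, env):
        # Build the bundled native pieces position-independent so they can
        # be linked into the Python extension ('-fPIC' is hardcoded here
        # purely for illustration).
        env.append_flags('CFLAGS', '-fPIC')
        env.append_flags('CXXFLAGS', '-fPIC')
```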
* Update var/spack/repos/builtin/packages/py-numba/package.py
Ah, yes, I see that `setuptools` is listed in the `install_requires` array. I missed that before.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Fix dependency
- Make py-soundfile depend on libsndfile
- Add new libsndfile package
* Add py-pytest-runner build dep
* Make numpy a variant for py-soundfile
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When building gcc7 and gcc8 on RHEL6 with Spack and installing it as
a spack-available compiler, OpenBLAS will fail to compile because GCC
generates newer instructions than RHEL6's `as` assembler knows about
(e.g. "vpermpd").
Building gcc8 with binutils succeeds, and it generates a GCC that can
then successfully build OpenBLAS. This is also expected to work for
gcc7 on RHEL6.
* Added version 0.12.0 to fix issue #15218
* Added dependencies specs with compatible versions
* Switched py-scipy dependency to variant (default False)
* Removed variant py-scipy and didn't add py-dask
* Fixed typo: missing '
* Update var/spack/repos/builtin/packages/py-pyfftw/package.py
Fixing typos from version ranges in dependencies.
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
* Update var/spack/repos/builtin/packages/py-pyfftw/package.py
Removed repeating dependency option.
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
* Update var/spack/repos/builtin/packages/py-pyfftw/package.py
Limited version of py-numpy dependency to <2.0.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding an option to build the C API for Umpire.
This is useful if you need to link to C code and you're using
a compiler suite that doesn't support Fortran.
* Also updating the versions while I'm here.
* Adding conflict: Fortran requires C.
To ease the transition and avoid confusion, default to the C bindings being
present. This shouldn't hurt anyone who is upgrading an existing
installation.
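A sketch of the change under stated assumptions; the variant names and CMake flags are illustrative, not necessarily those of the real Umpire recipe:
```python
from spack import *


class Umpire(CMakePackage):
    """Illustrative fragment only."""

    variant('c', default=True, description='Build the C API')
    variant('fortran', default=False, description='Build the Fortran API')

    # The Fortran bindings are layered on top of the C bindings.
    conflicts('+fortran', when='~c', msg='Fortran requires C bindings')

    def cmake_args(self):
        spec = self.spec
        return [
            '-DENABLE_C={0}'.format('ON' if '+c' in spec else 'OFF'),
            '-DENABLE_FORTRAN={0}'.format('ON' if '+fortran' in spec else 'OFF'),
        ]
```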
connect_timeout can be used to increase the time Spack waits for the
server to answer. This can be used to work around slow connections or
servers.
Fixes #14700
* CudaPackage: add support for Tesla K80 and older CUDA (see the sketch below)
* Flake8 fixes
* Fix cuda_arch when no arch is set
* Fine-tune cuda_arch=37,50 supported CUDA versions
* CUDA 6.5+ supports SM_37
* Add @svenevs as a maintainer
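The constraint pattern behind these changes looks roughly like this; only the sm_37 / CUDA 6.5+ pairing comes from the notes above, and the rest (variant values, the sm_50 bound) is illustrative, not the actual CudaPackage mixin:
```python
from spack import *


class CudaDemo(Package):
    """Illustrative fragment mirroring the CudaPackage constraint pattern."""

    variant('cuda', default=False, description='Build with CUDA')
    variant('cuda_arch', description='CUDA architecture', default='none',
            values=('none', '37', '50'), multi=True)

    depends_on('cuda', when='+cuda')
    # Tesla K80 is sm_37, which requires CUDA 6.5 or newer.
    depends_on('cuda@6.5:', when='cuda_arch=37')
    # The lower bound for sm_50 here is an assumption, not taken from the PR.
    depends_on('cuda@6.5:', when='cuda_arch=50')
```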
The new build process, introduced in #13100, relies on a spec's dependents in addition to its dependencies. Loading a spec from a YAML file was not initializing the dependents.
- [x] populate dependents when loading from yaml
LLVM is the only package that explicitly sets the "termlib" variant
of ncurses and it specifies +termlib. ncurses defaults to ~termlib;
if a package depends on LLVM and ncurses, there is a concretizer bug
that incorrectly detects a constraint conflict (see #267). Setting
+termlib as the default is a stopgap measure to avoid this conflict.
If other packages were to explicitly request ~termlib in the future,
the same issue would come up again (and could not be resolved by
adjusting the default of "termlib").
Setting +termlib on ncurses moves some symbols into a separate
"libtinfo". Not all packages may be able to detect libtinfo properly
so may require an update; vim, samtools, and libedit have been
updated to use ncurses+termlib (in the case of libedit, the only
necessary action was to add a newer version where the build system
was updated to check libtinfo).
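An illustrative dependent-side sketch (a placeholder package, not the actual vim/samtools/libedit changes): require `ncurses+termlib` and make sure libtinfo ends up on the link line. The configure argument is an assumption.
```python
from spack import *


class CursesConsumer(AutotoolsPackage):
    """Illustrative fragment only."""

    depends_on('ncurses+termlib')

    def configure_args(self):
        # With +termlib, terminfo symbols live in libtinfo rather than in
        # libncurses, so add it to the link line explicitly.
        return ['LIBS=-lncursesw -ltinfo']
```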
* Flake8 OK
* Update var/spack/repos/builtin/packages/py-basis-set-exchange/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-basis-set-exchange/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Added missing dependencies proposed by @adamjstewart
* Without py-versioneer
* Added py-versioneer
* Python2 for bse
* Python build error
* Update var/spack/repos/builtin/packages/py-basis-set-exchange/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Removed py-versioneer, according to @adamjstewart
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Buildcache command: add install option -o/--otherarch
This allows matching specs from other architectures, for example
installing macOS buildcaches on Linux hosts.
* spack commands --update-completion
- When qrupdate is compiled with `FFLAGS=-fdefault-integer-8`, it can be used for larger problem dimensions.
- Improved the readability of the file with the added rules.
args.specs is a list, which results in output like this:
```
eval `spack load --sh ['libxml2', 'xz']`
```
We want this instead:
```
eval `spack load --sh libxml2 xz`
```
This change stores packages' configure arguments during build and makes
use of them while refreshing module files. This fixes problems such as in
#10716.
* Update py-bx-python package
- add update to py-bx-python
- switch to pypi downloads
- set dependencies
* Update var/spack/repos/builtin/packages/py-bx-python/package.py
I had initially pulled version 0.8.6 and then updated that to 0.8.8 but missed the change in the python specs between those two versions.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Fix version 0.7.4
- set dependency on python2
- add dependency on py-python-lzo
- add py-python-lzo package
- set py-numpy dependency to correspond to latest version that works
with python2
* Add constraint for py-six dependency
* Update var/spack/repos/builtin/packages/py-bx-python/package.py
Ah, I had that `when` clause in and then took it out as it did not seem to be needed. I guess it is always better to be more explicit.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Remove py-numpy constraint
Let the concretizer catch the conflict with python2 and py-numpy
versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixes & additional variant(s) for HepMC3
* Syntax
* Restore recipe for HepMC2
* Remove FIXME
* Update package.py
* Apply suggestions from code review
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
name: "\U0001F41E Bug report"
about: Report a bug in the core of Spack (command not working as expected, etc.)
labels: "bug,triage"
---
*Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran Spack find to list all the installed packages and..."*
<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran `spack find` to list all the installed packages and ..." -->
### Steps to reproduce the issue
@@ -20,30 +17,26 @@ $ spack <command2> <spec>
### Error Message
If Spack reported an error, provide the error message. If it did not report an error
but the output appears incorrect, provide the incorrect output. If there was no error
message and no output but the result is incorrect, describe how it does not match
what you expect. To provide more information you might re-run the commands with
the additional -d/--stacktrace flags:
<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->
```console
$ spack -d --stacktrace <command1> <spec>
$ spack -d --stacktrace <command2> <spec>
...
$ spack --debug --stacktrace <command>
```
that activate the full debug output.
### Information on your system
This includes:
<!-- Please include the output of `spack debug report` -->
1. which platform you are using
2. any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.)
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
-----
### Additional information
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have searched the issues of this repo and believe this is not a duplicate
- [ ] I have run the failing commands in debug mode and reported the output
<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
Other than that, thanks for taking the time to contribute to Spack! -->
name: "\U0001F4A5 Build error"
about: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: "build-error"
---
*Thanks for taking the time to report this build failure. To proceed with the
report please:*
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
3. Remove the template instructions before posting the issue.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
---
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install <spec>  # Fill in the exact spec you are using
... # and the relevant part of the error message
$ spack install <spec>
...
```
### Platform and user environment
### Information on your system
Please report your OS here:
```commandline
$ uname -a
Linux nuvolari 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -d
Description: Ubuntu 18.04.1 LTS
```
and, if relevant, post or attach:
<!-- Please include the output of `spack debug report` -->
- `packages.yaml`
- `compilers.yaml`
to the issue
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
Sometimes the issue benefits from additional details. In these cases there are
a few things we can suggest doing. First of all, you can post the full output of:
```console
$ spack spec --install-status <spec>
...
```
to show people whether Spack installed faulty software or was not able to
build it at all.
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-out.txt]()
* [spack-build-env.txt]()
If your build didn't make it past the configure stage, Spack also has commands that can help.
You might want to look at `config.log` or any other similar file
found in the stage directory, which you can locate with:
```console
$ spack location -s <spec>
```
If `config.log` contains other settings that you think might be the cause
of the build failure, consider attaching the file to this issue.
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
Rebuilding the package with the following options:
```console
$ spack -d install -j 1 <spec>
...
```
will provide additional debug information. After the failure you will find two files in the current directory:
### General information
1. `spack-cc-<spec>.in`, which contains the command given as input
to Spack's compiler wrapper
1. `spack-cc-<spec>.out`, which contains the command used to compile / link the
failed object after Spack's compiler wrapper did its processing
You can post or attach those files to provide maintainers with more information on what
is causing the failure.
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [ ] I have uploaded the build log and environment files
- [ ] I have searched the issues of this repo and believe this is not a duplicate
about: Suggest adding a feature that is not yet in Spack
labels: feature
---
*Please add a concise summary of your suggestion here.*
<!--*Please add a concise summary of your suggestion here.*-->
### Rationale
*Is your feature request related to a problem? Please describe it!*
<!--*Is your feature request related to a problem? Please describe it!*-->
### Description
*Describe the solution you'd like and the alternatives you have considered.*
<!--*Describe the solution you'd like and the alternatives you have considered.*-->
### Additional information
*Add any other context about the feature request here.*
<!--*Add any other context about the feature request here.*-->
-----
### General information
- [ ] I have run `spack --version` and reported the version of Spack
- [ ] I have searched the issues of this repo and believe this is not a duplicate
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
<!--If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
@@ -17,22 +17,16 @@ Spack integrates with `Environment Modules
<http://lmod.readthedocs.io/en/latest/>`_ by
providing post-install hooks that generate module files and commands to manipulate them.
.. note::
If your machine does not already have a module system installed,
we advise you to use either Environment Modules or LMod. See :ref:`InstallEnvironmentModules`
for more details.
.. _shell-support:
----------------------------
Using module files via Spack
----------------------------
If you have installed a supported module system either manually or through
``spack bootstrap``, you should be able to run either ``module avail`` or
``use -l spack`` to see what module files have been installed. Here is
sample output of those programs, showing lots of installed packages:
If you have installed a supported module system you should be able to
run either ``module avail`` or ``use -l spack`` to see what module
files have been installed. Here is sample output of those programs,
showing lots of installed packages:
.. code-block:: console
@@ -93,9 +87,7 @@ Note that in the latter case it is necessary to explicitly set ``SPACK_ROOT``
before sourcing the setup file (you will get a meaningful error message
if you don't).
When ``bash`` and ``ksh`` users update their environment with ``setup-env.sh``, it will check for spack-installed environment modules and add the ``module`` command to their environment; this only occurs if the module command is not already available. You can install ``environment-modules`` with ``spack bootstrap`` as described in :ref:`InstallEnvironmentModules`.
Finally, if you want to have Spack's shell support available on the command line at
If you want to have Spack's shell support available on the command line at
any login you can put this source line in one of the files that are sourced
at startup (like ``.profile``, ``.bashrc`` or ``.cshrc``). Be aware though
that the startup time may be slightly increased because of that.
@@ -165,8 +157,6 @@ used ``gcc``. You could therefore just type:
To identify just the one built with the Intel compiler.
.. _extensions:
.. _cmd-spack-module-loads:
^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -469,14 +459,14 @@ is compiled with ``gcc@4.4.7``, with the only exception of any ``gcc``
or any ``llvm`` installation.
.. _modules-naming-scheme:
.. _modules-projections:
"""""""""""""""""""""""""""
Customize the naming scheme
"""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""
Customize the naming of modules
"""""""""""""""""""""""""""""""
The names of environment modules generated by spack are not always easy to
fully comprehend due to the long hash in the name. There are three module
configuration options to help with that. The first is a global setting to
adjust the hash length. It can be set anywhere from 0 to 32 and has a default
length of 7. This is the representation of the hash in the module file name and
@@ -510,20 +500,46 @@ version of python a set of python extensions is associated with. Likewise, the
``openblas`` string is attached to any program that has openblas in the spec,
most likely via the ``+blas`` variant specification.
The most heavyweight solution to module naming is to change the entire
naming convention for module files. This uses the projections format