Separate spack instances installing to separate install trees can fight
over the same stage directory because we do not currently unique stage
paths by instance.
- [x] add a new `$instance` substitution that gives an 8-digit hash
unique to the spack instance
- [x] make the default stage directory use `$instance`
- [x] rework `spack.util.path.substitute_config_variables()` so that
expensive operations like hashing are done lazily, not at module
load time.
* fix issue #22228 build of gdk-pixbuf
* Update var/spack/repos/builtin/packages/gdk-pixbuf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This package used to be a part of rpm, but now is being developed separately.
It will supposedly be moved to a sourceware branch (it is maintained by
redhat) but I do not know if this will happen soon. We need it in order
to change locations in binaries that are built in /tmp and then moved
elsewhere. I will ping @woodard who might be able to give us an estimate
if we should include this development repository or wait for it to be
moved elsewhere. Once this is merged, we will want to use the bootstrap
approach to install and use the library from spack.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.
* Replace URL computation in base IntelOneApiPackage class with
defining URLs in component packages (this is expected to be
simpler for now)
* Add component_dir property that all oneAPI component packages must
define. This property names a directory that should exist after
installation completes (useful for making sure the install was
successful) and also defines the search location for the
component's environment update script.
* Add needed dependencies for components (e.g. intel-oneapi-dnn
requires intel-oneapi-tbb). The compilers provided by
intel-oneapi-compilers need some components under certain
circumstances (e.g. when enabling SYCL support) but these were
omitted since the libraries should only be linked when a
dependent package requests that feature
* Remove individual setup_run_environment implementations and use
IntelOneApiPackage superclass method which sources vars.sh
(located in a subdirectory of component_dir)
* Add documentation for IntelOneApiPackge build system
Co-authored-by: Vasily Danilin <vasily.danilin@yandex.ru>
This is to help debug situations like #22383, where python3.4 is
accidentally preferred over python2. It will also help on systems where
there is no python2 available or some other issue.
* QA: reduce number of unit tests for jobs not in the matrix
* Fixup for CentOS6 dependencies
* Put correct conditions back in place
* Add dependency on changes
* Change default FFT implementation to FFTW
To account for the default changing with casacore v3.4.0, as well as the
CMake logic for getting the FFTPack implementation.
* Switch to using spec.satisfies() for Python CMake values
This PR will update the urls to not have www (not needed),
the repository user should be hpcng instead of sylabs (technically
GitHub maintains the old links but this might not be forever) and
also added 3.7.1 and 3.7.2 versions of Singularity, newly released
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Beginning with version 2.4.1, the python interpreter line changed from
"#!/usr/bin/env python" to "#!/usr/bin/env python3"
That caused the bowtie2-build and bowtie2-inspect scripts to have a
trailing '3' at the end of the interpreter line. This PR fixes that. I
also observed that older versions do not build with intel-oneapi-tbb
so added a conflicts statement for that.
PRs that change only package recipes will only run tests under "package_sanity.py" and without coverage. This should result in a huge drop the cpu-time spent in CI for most PRs.
* updated deps to get gtkplus to build
* gtk-doc requires docbook-xml 4.3
* patch gtk-doc build to find xml catalogs
* patch gtk-doc build to find xml catalogs
* patch gtk-doc build to find xml catalogs
* add new version, fix macOS build error
* reorder docbook versions from newest to oldest
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added 2 new configure patch files to build WRF 3.9.1.1 and 4.2
with aocc@3.0
* Renamed patch files used for building WRF 3.9.1.1 and 4.2 with
aocc@2.3 (mostly, this also removes -march=native from AOCCOPT
and updates LIBMVEC options for aocc@2.3)
* unit tests: mark slow tests as "maybeslow"
This commit also removes the "network" marker and
marks every "network" test as "maybeslow". Tests
marked as db are maintained, but they're not slow
anymore.
* GA: require style tests to pass before running unit-tests
* GA: make MacOS unit tests fail fast
* GA: move all unit tests into the same workflow, run style tests as a prerequisite
All the unit tests have been moved into the same workflow so that a single
run of the dorny/paths-filter action can be used to ask for coverage based
on the files that have been changed in a PR. The basic idea is that for PRs
that introduce only changes to packages coverage is not necessary, this
resulting in a faster execution of the tests.
Also, for package only PRs slow unit tests are skipped.
Finally, MacOS and linux unit tests are now conditional on style tests passing
meaning that e.g. we won't waste a MacOS worker if we know that the PR has
flake8 issues.
* Addressed review comments
* Skipping slow tests on MacOS for package only recipes
* QA: make tests on changes correct before merging
In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.
This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.
We add one special case for this: dependencies of externals.
We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.
- [x] change asp.py so that this can't happen -- we now only generate
dependency types for possible dependencies.
This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.
Given some directives in a package like:
```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```
We handled the `@2.0+variant` and `@1.9:` parts by generating generated
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:
```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
attr(Name, Arg1) : required_dependency_condition(ID, Name, Arg1);
attr(Name, Arg1, Arg2) : required_dependency_condition(ID, Name, Arg1, Arg2);
attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
dependency_condition(ID, Parent, Dependency);
node(Parent).
```
And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:
```prolog
attr(Name, Arg1, Arg2) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2).
attr(Name, Arg1, Arg2, Arg3) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```
These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.
Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:
```prolog
condition_holds(ID) :-
condition(ID);
attr(Name, A1) : condition_requirement(ID, Name, A1);
attr(Name, A1, A2) : condition_requirement(ID, Name, A1, A2);
attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).
attr(Name, A1) :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```
this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:
```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```
The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:
```prolog
dependency_holds(Package, Dependency, Type) :-
dependency_condition(ID, Package, Dependency),
dependency_type(ID, Type),
condition_holds(ID).
```
Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:
```python
def package_dependencies_rules(self, pkg, tests):
"""Translate 'depends_on' directives into ASP logic."""
for _, conditions in sorted(pkg.dependencies.items()):
for cond, dep in sorted(conditions.items()):
condition_id = self.condition(cond, dep.spec, pkg.name) # create a condition and get its id
self.gen.fact(fn.dependency_condition( # associate specifics about the dependency w/the id
condition_id, pkg.name, dep.spec.name
))
# etc.
```
- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals
LocalWords: concretizer mpi attr Arg concretize lp cairo fc fontconfig
LocalWords: virtuals def pkg cond dep fn refactor github py
* Rewrite relative dev_spec paths internally to absolute paths in case of relocation of the environment file
* Test relative paths for dev_path in environments
* Add a --keep-relative flag to spack env create
This ensures that relative paths of develop paths are not expanded to
absolute paths when initializing the environment in a different location
from the spack.yaml init file.
Currently, regardless of a spec being concrete or not, we validate its variants in `spec_clauses` (part of `SpackSolverSetup`).
This PR skips the check if the spec is concrete.
The reason we want to do this is so that the solver setup class (really, `spec_clauses`) can be used for cases when we just want the logic statements / facts (is that what they are called?) and we don't need to re-validate an already concrete spec. We can't change existing concrete specs, and we have to be able to handle them *even if they violate constraints in the current spack*. This happens in practice if we are doing the validation for a spec produced by a different spack install.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
As of OpenBLAS 0.3.13, leaving off `TARGET` by default optimizes most
code for the host system -- adding flags that cause the resulting
library to fail (SIGILL) on older systems. This change should ensure
that a "x86_64" target for example will work across deployment systems.
https://github.com/xianyi/OpenBLAS/issues/3139
This pull request will add the ability for a user to add a configuration argument on the fly, on the command line, e.g.,:
```bash
$ spack -c config:install_tree:root:/path/to/config.yaml -c packages:all:compiler:[gcc] list --help
```
The above command doesn't do anything (I'm just getting help for list) but you can imagine having another root of packages, and updating it on the fly for a command (something I'd like to do in the near future!)
I've moved the logic for config_add that used to be in spack/cmd/config.py into spack/config.py proper, and now both the main.py (where spack commands live) and spack/cmd/config.py use these functions. I only needed spack config add, so I didn't move the others. We can move the others if there are also needed in multiple places.
Was getting the following error:
```
$ spack test list
==> Error: issubclass() arg 1 must be a class
```
This PR adds a check in `has_test_method` (in case it is re-used elsewhere such as #22097) and ensures a class is passed to the method from `spack test list`.
Updated the versions for DiHydrogen and Aluminum. Added new constraints on versions of Aluminum that are used across the software stack. Cleaned up the dependency on DiHydrogen for LBANN.
* py-chainer: Add test method for ChainerMN (continued #21848, #21940)
* py-chainer: Fixed the word in the message
* py-chainer: Delete unnecessary imports
* py-chainer: Incorporation of the measures pointed out in #21940 was insufficient.
Adds several EpetraExt_BUILD_* options as well as an Amesos2_ENABLE_Basker option. Adds `none` as an option to `gotype=`, which should be among the options since 'none' is specifically handled later in the package definition.
Adds `stokhos` and `trilinoscouplings` as options in spack which already are available in CMake for Trilinos (e.g. Trilinos_ENABLE_Stokhos:BOOL=)
* py-pytest-html recipe
* added missing deps + copyright
* Update var/spack/repos/builtin/packages/py-pytest-html/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pytest-metadata/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Patch eospac's Makefile.-linux-gnu.hashes to consider only `$(notdir
$(F90))` when constructing a key to look up compiler flags in the
_F90-CPUINFO_COMP_FLAGS associative array.
This patch was accepted into eospac itself after the release of
6.4.2beta, so apply it only to 6.4.2beta and earlier releases.
- Fix faulty patch
- Only use GEARSHIFFT_BACKEND_FFTW_PTHREADS if ~openmp
- Explicitly disable float16 support
- Use correct minimum required Boost version
- Add variants for Intel MKL and ROCm rocfft
This is a workaround for an issue with how "spack install" is invoked from within "spack ci rebuild". The fact that we don't get an exception or even the actual returncode when using the object returned by spack.util.executable.which('spack') to install the target spec means we get no indication of failures about the install command itself. Instead we rely on the subsequent buildcache creation failure to fail the job.
In the past, we only had the binutils variant, which included the
bootstrapping flag. Now that we have a separate bootstrap variant, fix
the nvptx conflict accordingly.
* Add intel cluster package update2 for 2020
* add pacifica cli tools, and pager
* remove boilerplate code
* update flake8 lints
* update flake8 lint, missed one
* add a description for pager
* Shorten a line
* Remove whitespace
* check on dependencies and move urls to proper place
* Remove import package as it seems it is not required
* add requests to the uploader config
* remove blank Line
* change to build and run for packages
* add run and build to the packages
* move from url method to pypi method
* adjust requirements based on feedback from adamjstewart
* remove python 3 requirement, and add setuptools-scm
* remove dependence on python
Co-authored-by: Evan Felix <evan.felix@pnnl.gov>
Unlike the other commands of the `R CMD` interface, the `INSTALL` command
will read `Renviron` files. This can potentially break builds of r-
packages, depending on what is set in the `Renviron` file. This PR adds
the `--vanilla` flag to ensure that neither `Rprofile` nor `Renviron` files
are read during Spack builds of r- packages.
Cray added necessary functionality for CMake to support fortran preprocessing using crayftn. This patch is necessary for the current release of cmake (3.19), with this patched expected to be in the 3.20 release of Cmake. The included patch is from kitware.
see https://gitlab.kitware.com/cmake/cmake/-/merge_requests/5882
Co-authored-by: James Elliott <jjellio@sandia.govv>
This adds a `--path` option to `spack python` that shows the `python`
interpreter that Spack is using.
e.g.:
```console
$ spack python --path
/Users/gamblin2/src/spack/var/spack/environments/default/.spack-env/view/bin/python
```
This is useful for debugging, and we can ask users to run it to
understand what python Spack is picking up via preferences in `bin/spack`
and via the `SPACK_PYTHON` environment variable introduced in #21222.
`spack test list` will show you which *installed* packages can be tested
but it won't show you which packages have tests.
- [x] add `spack test list --all` to show which packages have test methods
- [x] update `has_test_method()` to handle package instances *and*
package classes.
* adding package for libabigail, which we likely will need to use it for an analysis!
* includes variant for documentation (doxygen and pysphinx are associated dependencies)
* Improve R package creation
This PR adds the `list_url` attribute to CRAN R packages when using
`spack create`. It also adds the `git` attribute to R Bioconductor
packages upon creation.
* Switch over to using cran/bioc attributes
The cran/bioc entries are set to have the '=' line up with homepage
entry, but homepage does not need to exist in the package file. If it
does not, that could affect the alignment.
* Do not have to split bioc
* Edit R package documentation
Explain Bioconductor packages and add `cran` and `bioc` attributes.
* Update lib/spack/docs/build_systems/rpackage.rst
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update lib/spack/docs/build_systems/rpackage.rst
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Simplify the cran attribute
The version can be faked so that the cran attribute is simply equal to
the CRAN package name.
* Edit the docs to reflect new `cran` attribute format
* Use the first element of self.versions() for url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Remove casacore's old version of the file with a package patch()
function, and depend on a modern CMake for the build.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
1. Add version 2021.03.01.
2. Cleanup the binutils dependencies now that 2.35.2 and 2.36 are available.
3. Require gcc 7.x or later for current 2021 version.
4. Simplify the xz depends to always require +pic.
This works around a glitch in the original concretizer.
This allows users to use relative paths for mirrors and repos and other things that may be part of a Spack environment. There are two ways to do it.
1. Relative to the file
```yaml
spack:
repos:
- local_dir/my_repository
```
Which will refer to a repository like this in the directory where `spack.yaml` lives:
```
env/
spack.yaml <-- the config file above
local_dir/
my_repository/ <-- this repository
repo.yaml
packages/
```
2. Relative to the environment
```yaml
spack:
repos:
- $env/local_dir/my_repository
```
Both of these would refer to the same directory, but they differ for included files. For example, if you had this layout:
```
env/
spack.yaml
repository/
includes/
repos.yaml
repository/
```
And this `spack.yaml`:
```yaml
spack:
include: includes/repos.yaml
```
Then, these two `repos.yaml` files are functionally different:
```yaml
repos:
- $env/repository # refers to env/repository/ above
repos:
- repository # refers to env/includes/repository/ above
```
The $env variable will not be evaluated if there is no active environment. This generally means that it should not be used outside of an environment's spack.yaml file. However, if other aspects of your workflow guarantee that there is always an active environment, it may be used in other config scopes.
For opt-in packages in Spack, its common that the `cuda` variant
is disabled by default.
This also simplifies downstream usage in multi-variants for
backends in user code.
* Allow the bootstrapping of clingo from sources
Allow python builds with system python as external
for MacOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users to set the install tree root to the bootstrap store
* clingo: documented how to bootstrap from sources
Co-authored-by: Gregory Becker <becker33@llnl.gov>
- as outlined in merge-request #21336 some clang compilers
can trigger erroneous floating point exceptions.
OpenFOAM normally traps FPE, but disable this in the etc/controlDict
for specific compilers:
change "trapFpe digit;" -> "trapFpe 0;"
Eliminate previous use of FOAM_SGIFPE env variable in favour of
using the etc/controlDict setting - cleaner and robuster.
Co-authored-by: Mark Olesen <Mark.Olesen@esi-group.com>
* py-importlib: Python 2.7 is needed to build.
added depends_on('python@2.7.0:2.7.99')
* Update var/spack/repos/builtin/packages/py-importlib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This check is performed in cmake_args rather than with a 'conflicts'
statement because matching on !clang (i.e. any compiler that is not
clang) cannot currently be done with our spec syntax.
If a user creates a wrapper for the ifx binary called ifx_orig,
this causes the ifx --version command to produce:
$ ifx --version
ifx_orig (IFORT) 2021.1 Beta 20201113
Copyright (C) 1985-2020 Intel Corporation. All rights reserved.
The regex for ifx currently expects the output to begin with
"ifx (IFORT)..." so the wrapper would not be detected as ifx. This
PR removes the need for the static "ifx" string which allows wrappers
to be detected as ifx.
In general, the Intel compiler regexes do not include the invoked
executable name (i.e., ifort, icc, icx, etc.), so this is not
expected to cause any issues.
* asciidoc: current sourceforge a2x needs python2, new github release python3
* asciidoc: current sourceforge a2x needs python2, new github release python3
* asciidoc: making python 2.3 to 2.7 able to cope with asciidoc
* asciidoc: current sourceforge a2x needs python2, new github release python3
* asciidoc: current sourceforge a2x needs python2, new github release python3
* asciidoc: current sourceforge a2x needs python2, new github release python3
* asciidoc: current sourceforge a2x needs python2, new github release python3
* Fix sensei@develop
Should work with all options but libsim.
Current releases don't work with ~catalyst
See
https://gitlab.kitware.com/sensei/sensei/-/merge_requests/240
for the fix for develop.
Current releases work only with paraview 5.7 and 5.6
See
https://gitlab.kitware.com/sensei/sensei/-/merge_requests/239
for the fix for develop (which works with 5.9)
* Fix libsim.
* Fix warnings.
* Fix python runtime.
* Many changes:
* Reworked cmake options top use the CMakePackage option helpers
* Simplified and consolidated options
* Replaced adios with adios2 variant
* Added vtkm variant (not yet working)
* paraview: Fix downstream consumers getting the wrong FindMPI
* vtk: Fix downstream consumers getting the wrong FindMPI
* Add +ascent, +adios2; remove +adios; variants off by default
* Fix catalyst python logic
* sensei: cleanup formatting
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
the specs to be fetched, even when in an environment
* now: if there is no spec(s) provided to `spack fetch` we check
if an environment is active and if yes we fetch all
uninstalled specs.
When using an external package with the old concretizer, all
dependencies of that external package were severed. This was not
performed bidirectionally though, so for an external package W with
a dependency on Z, if some other package Y depended on Z, Z could
still pull properties (e.g. compiler) from W since it was not
severed as a parent dependency.
This performs the severing bidirectionally, and adds tests to
confirm expected behavior when using config from DAG-adjacent
packages during concretization.
Allow libfuse to build without setuid binary and bump versions of both
libfuse and fuse-overlayfs.
Still doesn't solve the issue where this package tries to install things
into /etc/init.d though.
kcov CMakeLists.txt generates the "kcov" executable only if
certain dependencies are found. These dependencies are
"libbfd", "libopcodes" and "libiberty", hence the dependency
on binutils.
There clingo-cffi job has two issues to be solved:
1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/
The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".
The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).
For the time being run the tests in a container. Switch back to
PyPI whenever a new official version of clingo will be released.
This allows for quickly configuring a spack install/env to use upstream packages by default. This is particularly important when upstreaming from a set of officially supported spack installs on a production cluster. By configuring such that package preferences match the upstream, you ensure maximal reuse of existing package installations.
* n2p2: Add new package
* remove ,
* Resurrection of , and changed " to single
* changed example.command to example.co
* n2p2: Added v2.1.1
* n2p2: Changed the type of depends_on.
Since there are many variables being set I thought it would be a good idea to document them better and slightly simplify the logic for external vs not-external.
Fixes for gitlab pipelines
* Remove accidentally retained testing branch name
* Generate pipeline w/out debug mode
* Make jobs interruptible for auto-cancel pending
* Work around concretization conflicts
* Support clingo when used with cffi
Clingo recently merged in a new Python module option based on cffi.
Compatibility with this module requires a few changes to spack - it does not automatically convert strings/ints/etc to Symbol and clingo.Symbol.string throws on failure.
manually convert str/int to clingo.Symbol types
catch stringify exceptions
add job for clingo-cffi to Spack CI
switch to potassco-vendored wheel for clingo-cffi CI
on_unsat argument when cffi
* Spec.splice feature
Construct a new spec with a dependency swapped out. Currently can only swap dependencies of the same name, and can only apply to concrete specs.
This feature is not yet attached to any install functionality, but will eventually allow us to "rewire" a package to depend on a different set of dependencies.
Docstring is reformatted for git below
Splices dependency "other" into this ("target") Spec, and return the result as a concrete Spec.
If transitive, then other and its dependencies will be extrapolated to a list of Specs and spliced in accordingly.
For example, let there exist a dependency graph as follows:
T
| \
Z<-H
In this example, Spec T depends on H and Z, and H also depends on Z.
Suppose, however, that we wish to use a differently-built H, known as H'. This function will splice in the new H' in one of two ways:
1. transitively, where H' depends on the Z' it was built with, and the new T* also directly depends on this new Z', or
2. intransitively, where the new T* and H' both depend on the original Z.
Since the Spec returned by this splicing function is no longer deployed the same way it was built, any such changes are tracked by setting the build_spec to point to the corresponding dependency from the original Spec.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.
This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.
This makes a similar change to spack cd.
If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* Made DiHydrogen a required dependencies on newer versions of LBANN.
Added an explicit variant for enabling Boost-dependent callbacks.
Updated the separation for embedded Python and the Python front end
code and associated dependencies.
* Bugfix on ROCm include in DiHydrogen
Drops:
* C_INCLUDE_PATH
* CPLUS_INCLUDE_PATH
* LIBRARY_PATH
* INCLUDE
We already decided to use C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, INCLUDE over CPATH here:
https://github.com/spack/spack/pull/14749
However, none of these flags apply to Fortran on Linux. So for consistency it seems better to make the user use -I and -L flags by hand or through pkgconfig.
BlasPP by ECP SLATE will fail to install by default
(`spack install blaspp`) because:
- the default BLAS installation in Spack is OpenBLAS
- BlasPP conflicts with `threads=none` for all recent OpenBLAS releases
OpenBLAS introduced a threadsafe compile option
with 0.3.7+ aka `USE_LOCKING`:
```
61 # If you want to build a single-threaded OpenBLAS, but expect to call this
62 # from several concurrent threads in some other program, comment this in for
63 # thread safety. (This is done automatically for USE_THREAD=1 , and should not
64 # be necessary when USE_OPENMP=1)
65 # USE_LOCKING = 1
```
According to tests, with `spack install --test root blaspp`,
this exactly addresses the issues in BlasPP tests.
It also seems to be a good option to set by default for OpenBLAS and
users that do not need this safety net can always disable it.
Solve issues with newer OpenBLAS by requiring
`+locking` over none-default threading options.
* Improve error message for inconsistencies in package.py
Sometimes directives refer to variants that do not exist.
Make it such that:
1. The name of the variant
2. The name of the package which is supposed to have
such variant
3. The name of the package making this assumption
are all printed in the error message for easier debugging.
* Add unit tests
* Also removed LBANN CUDA CMake flags that are set by the
version of Hydrogen that is compiled against.
* Updated recipes to use HWLOC 2.3 with ROCm to enable
topology awareness.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* genesis: New package.
* fujitsu-ssl2: fix unit test error
* genesis: Fix for comments and add test method
* genesis: Fix for comments
* genesis: Fix for comments
* libblastrampoline: new package
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
It turns out there are certain cases where having Open MPI use an external hwloc messes up other
applications that also rely on hwloc, but a different version.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* VTK-m: No `pic` variant
A leftover conflict between `shared` and `pic` variants, the
latter is not part of the package anymore, leads to a solver
error with clingo.
This removes the outdated conflict section.
* VTK-m: Kokkos AMD GPU variant changed
Set the minimun C++ standard for LBANN, Hydrogen, and DiHydrogen to
C++17. The minumim C++ standard for Aluminum is C++14. Add new
versions for Aluminum, Hydrogen, and DiHydrogen. Added support for
high performance linkers in LBANN recipe (gold and lld). Added
variants to LBANN for enabling embedded Python support independently
from the Python front end.
* py-fenics-instant: new package for legacy fenics 2016 and 2017 versions
* Update var/spack/repos/builtin/packages/py-fenics-instant/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The signature for configure_args in the template for new
RPackage packages was incorrect (different than what is
defined and used in lib/spack/spack/build_systems/r.py)
See issue #21774
Keep spack.store.store and spack.store.db consistent in unit tests
* Remove calls to monkeypatch for spack.store.store and spack.store.db:
tests that used these called one or the other, which lead to
inconsistencies (the tests passed regardless but were fragile as a
result)
* Fixtures making use of monkeypatch with mock_store now use the
updated use_store function, which sets store.store and store.db
consistently
* subprocess_context.TestState now transfers the serializes and
restores spack.store.store (without the monkeypatch changes this
would have created inconsistencies)
Since signals are fundamentally racy, We can't bound the amount of time
that the `test_foreground_background_output` test will take to get to
'on', we can only observe that it transitions to 'on'. So instead of
using an arbitrary limit, just adjust the test to allow either 'on' or
'off' followed by 'on'.
This should eliminate the spurious errors we see in CI.
Follow-up to #17110
### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages, since `CC` was set correctly, nothing broke. But for packages using `Makefiles` that set `CC` based on `which clang`, it was using the system compilers instead of the compiler wrappers. Discovered when working on `py-xgboost@0.90`.
An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.
* add to LD_LIBRARY_PATH so that it finds libimf.so
* amrex: fix handling of CUDA arch (#20786)
* amrex: fix handling of CUDA arch
* amrex: fix style
* amrex: fix bug
* Update var/spack/repos/builtin/packages/amrex/package.py
* Update var/spack/repos/builtin/packages/amrex/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* ecp-data-vis-sdk: Combine the vis and io SDK packages (#20737)
This better enables the collective set to be deployed togethor satisfying
eachothers dependencies
* r-sf: fix dependency error (#20898)
* improve documentation for Rocm (hip amd builds) (#20812)
* improve documentation
* astyle: Fix makefile for install parameter (#20899)
* llvm-doe: added new package (#20719)
The package contains duplicated code from llvm/package.py,
will supersede solve.
* r-e1071: added v1.7-4 (#20891)
* r-diffusionmap: added v1.2.0 (#20881)
* r-covr: added v3.5.1 (#20868)
* r-class: added v7.3-17 (#20856)
* py-h5py: HDF5_DIR is needed for ~mpi too (#20905)
For the `~mpi` variant, the environment variable `HDF5_DIR` is still required. I moved this command out of the `+mpi` conditional.
* py-hovorod: fix typo on variant name in conflicts directive (#20906)
* fujitsu-fftw: Add new package (#20824)
* pocl: added v1.6 (#20932)
Made version 1.5 or lower conflicts with a64fx.
* PCL: add new package (#20933)
* r-rle: new package (#20916)
Common 'base' and 'stats' methods for 'rle' objects, aiming to make it
possible to treat them transparently as vectors.
* r-ellipsis: added v0.3.1 (#20913)
* libconfig: add build dependency on texinfo (#20930)
* r-flexmix: add v2.3-17 (#20924)
* r-fitdistrplus: add v1.1-3 (#20923)
* r-fit-models: add v0.64 (#20922)
* r-fields: add v11.6 (#20921)
* r-fftwtools: add v0.9-9 (#20920)
* r-farver: add v2.0.3 (#20919)
* r-expm: add v0.999-6 (#20918)
* cln: add build dependency on texinfo (#20928)
* r-expint: add v0.1-6 (#20917)
* r-envstats: add v2.4.0 (#20915)
* r-energy: add v1.7-7 (#20914)
* r-ellipse: add v0.4.2 (#20912)
* py-fiscalyear: add v0.3.0 (#20911)
* r-ecp: add v3.1.3 (#20910)
* r-plotmo: add v3.6.0 (#20909)
* Improve gcc detection in llvm. (#20189)
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
* hatchet: updated urls (#20908)
* py-anuga: add new package (#20782)
* libvips: added v8.10.5 (#20902)
* libzmq: add platform conditions to libbsd dependency (#20893)
* r-dtw: add v1.22-3 (#20890)
* r-dt: add v0.17 (#20889)
* r-dosnow: add v1.0.19 (#20888)
* add version 1.0.16 to r-doparallel (#20886)
* add version 1.3.7 to r-domc (#20885)
* add version 0.9-15 to r-diversitree (#20884)
* add version 1.3-3 to r-dismo (#20883)
* add version 0.6.27 to r-digest (#20882)
* add version 1.5 to r-rngtools (#20887)
* add version 1.5.8 to r-dicekriging (#20877)
* add version 1.4.2 to r-httr (#20876)
* add version 1.28 to r-desolve (#20875)
* add version 2.2-5 to r-deoptim (#20874)
* add version 0.2-3 to r-deldir (#20873)
* add version 1.0.0 to r-crul (#20870)
* add version 1.1.0.1 to r-crosstalk (#20869)
* add version 1.0-1 to r-copula (#20867)
* add version 5.0.2 to r-rcppparallel (#20866)
* add version 2.0-1 to r-compositions (#20865)
* add version 0.4.10 to r-rlang (#20796)
* add version 0.3.6 to r-vctrs (#20878)
* amrex: add ROCm support (#20809)
* add version 2.0-0 to r-colorspace (#20864)
* add version 1.3-1 to r-coin (#20863)
* add version 0.19-4 to r-coda (#20862)
* add version 1.3.7 to r-clustergeneration (#20861)
* add version 0.3-58 to r-clue (#20860)
* add version 0.7.1 to r-clipr (#20859)
* add version 2.2.0 to r-cli (#20858)
* add version 0.4-3 to r-classint (#20857)
* add version 0.1.2 to r-globaloptions (#20855)
* add version 2.3-56 to r-chron (#20854)
* add version 0.4.10 to r-checkpoint (#20853)
* add version 2.0.0 to r-checkmate (#20852)
* add version 1.18.1 to r-catools (#20850)
* add version 1.2.2.2 to r-modelmetrics (#20849)
* add version 3.0-4 to r-cardata (#20847)
* add version 1.0.1 to r-caracas (#20846)
* r-lifecycle: new package at v0.2.0 (#20845)
* add version 3.0-10 to r-car (#20844)
* add version 3.4.5 to r-processx (#20843)
* add version 1.5-12.2 to r-cairo (#20842)
* add version 0.2.3 to r-cubist (#20841)
* add version 2.6 to r-rmarkdown (#20838)
* add version 1.2.1 to r-blob (#20819)
* add version 4.0.4 to r-bit (#20818)
* add version 2.4-1 to r-bio3d (#20816)
* add version 0.4.2.3 to r-bibtex (#20815)
* add version 3.1-4 to r-bayesm (#20807)
* add version 1.2.1 to r-backports (#20806)
* add version 2.0.3 to r-argparse (#20805)
* add version 5.4-1 to r-ape (#20804)
* add version 0.8-18 to r-amap (#20803)
* r-pixmap: added new package (#20795)
* zoltan: source code location change (#20787)
* refactor path logic
* added some paths to make compilers and libs discoverable
* add to LD_LIBRARY_PATH so that it finds libimf.so
and cleanup PEP8
* refactor path logic
* adding paths to LIBRARY_PATH so compiler wrappers will find -lmpi
* added vals for CC=icx, CXX=icpx, FC=ifx to generated module
* back out changes to intel-oneapi-mpi, save for separate PR
* Update var/spack/repos/builtin/packages/intel-oneapi-compilers/package.py
path is joined in _ld_library_path()
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* set absolute paths to icx,icpx,ifx
* dang close parenthesis
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: mic84 <mrosso@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
Co-authored-by: darmac <xiaojun2@hisilicon.com>
Co-authored-by: Danny Taller <66029857+dtaller@users.noreply.github.com>
Co-authored-by: Tomoyasu Nojiri <68096132+t-nojiri@users.noreply.github.com>
Co-authored-by: Shintaro Iwasaki <siwasaki@anl.gov>
Co-authored-by: Glenn Johnson <glenn-johnson@uiowa.edu>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
Co-authored-by: Henrique Mendonça <henrique@users.noreply.github.com>
Co-authored-by: h-denpo <57649496+h-denpo@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Thomas Green <tomgreen66@hotmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
Co-authored-by: Abhinav Bhatele <bhatele@cs.umd.edu>
Co-authored-by: a-saitoh-fj <63334055+a-saitoh-fj@users.noreply.github.com>
Co-authored-by: QuellynSnead <quellyn@lanl.gov>
* sbang pushed back to callers;
star moved to util.lang
* updated unit test
* sbang test moved; local tests pass
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
fixes#20736
Before this one line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.
This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.
fixes#20611
The conflict was triggered by an invalid value of the
'scheduler' variant. This causes Spack to error when libyogrt
facts are validated by the ASP-based concretizer.
At some point in the past, the skip_patch argument was removed
from the call to package.do_install() this broke the --skip-patch
flag on the dev-build command.
Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).
This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).
fixes#20679
In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.
Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.
- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`
Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
INCONSISTENT ressults from clingo, due to ambiguous semantics like:
version_constraint_satisfies("mpi", ":1", ":3")
version_constraint_satisfies("mpi", ":3", ":1")
To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.
This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.
The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.
To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program. i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constriants are only handled on versions -- we don't
handle variants or anything else yet, but they key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.
One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:
:- virtual_node(Virtual),
version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
not version_constraint_satisfies(Virtual, V1, V2).
This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.
We can investigate adding synthetic versions for virtuals in the future,
to speed this up.
This code in `SpecBuilder.build_specs()` introduced in #20203, can loop
seemingly interminably for very large specs:
```python
set([spec.root for spec in self._specs.values()])
```
It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.
The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:
```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```
We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.
Environment yaml files should not have default values written to them.
To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).
Includes regression test.
This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:
* An inheritable IntelOneApiPackage which knows how to invoke the
installation script based on which components are requested
* For components which include headers/libraries, an inheritable
IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
icc/ifortran but these are not currently detected in this PR
We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:
```
node(Package) :- attr("node", Package).
attr("node", Package) :- node(Package).
version(Package, Version) :- attr("version", Package, Version).
attr("version", Package, Version) :- version(Package, Version).
```
We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.
This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.
The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.
This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.
There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.
`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid " info: atom does not
occur in any rule head:" warnings.
This PR addresses a number of issues related to compiler bootstrapping.
Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install as they think they have not all their
dependencies installed. This PR collects the dependents and sets them on
compiler tasks.
2. allow boostrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec. Allow them to build with less
aggressive target optimization settings.
3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly. Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.
This PR also:
- adds testing
- improves concretizer handling of target ranges
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`. The rules look like this:
```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1. :- version_satisfies("python", "2.6:").
```
So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.
We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.
We can replace rules like the ones above with facts like this:
```
version_satisfies("python", "2.6:", "2.4")
```
And ground them in `concretize.lp` with rules like this:
```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
:- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
:- version(Package, Version), version_satisfies(Package, Constraint, Version).
```
The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer from `version/2` `version_satisfies/3`.
This form is also safe for grounding -- If we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.
- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed
I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instad of through the Python interface.
So far, the command line is faster than running through Python, but I'm
working on fixing that. I found that if I do this:
```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp") # code from spack solve --show asp hdf5
control.load("display.lp")
control.ground([("base", [])])
control.solve(...)
```
It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.
So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.
Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.
This modification ensures that open-ended variants (variants accepting any string
or any integer) are projected to the finite set of values that are relevant for this
concretization.
Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
list at the very end.
The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.
- [x] move ad-hoc value handling code into spec_clauses so we do it in
one place for CLI and packages
- [x] move handling of `variant_possible_value`, etc. into
`concretize.lp`, where we can automatically infer variant existence
more concisely.
- [x] simplify/clarify some of the code for variants in `spec_clauses()`
fixes#20055
Compiler with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of spec doesn't match the "real" version of the
compiler.
This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.
fixes#20040
Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.
refers #20040
Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they were with a default value.
This prevents the solver from avoiding expected configurations
just because they contain directives like:
depends_on('pkg+foo')
and `+foo` is not the default variant value for pkg.
fixes#19981
This commit adds support for target ranges in directives,
for instance:
conflicts('+foo', when='target=x86_64:,aarch64:')
If any target in a spec body is not a known target, the following
clause will be emitted:
node_target_satisfies(Package, TargetConstraint)
when traversing the spec, and a definition of the clause will then
be printed at the end, similarly to what is done for package and
compiler versions.
fixes #20019
Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:
conflicts('%gcc@X.Y:', when='@:A.B')
where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.
refers #20079
Added docstrings to 'concretize' and 'concretized' to
document the format for tests.
Added tests for the activation of test dependencies.
refers #20040
This modification emits rules like:
provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").
for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.
Follow-up to #17110
### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages, since `CC` was set correctly, nothing broke. But for packages using `Makefiles` that set `CC` based on `which clang`, it was using the system compilers instead of the compiler wrappers. Discovered when working on `py-xgboost@0.90`.
An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.
The dependencies needed a little clean up as several dependencies are
only needed for the +X variant. This PR consolidates all of the
dependencies that actually require +X and explicitly disables them when
~X to prevent accidentally picking up system libraries.
- modified the description of the +X variant
- arranges dependencies to group them
- added missing dependency on xz
- removed unneeded dependencies
- freetype
- glib
- set dependencies when +X
- cairo
- jpeg
- libpng
- libtiff
- tcl/tk
- R uses tcl/tk together, so only tk needs to be depended on, and only
when +X
- moved tcl/tk resources to with/without-x test
- added explicit with/without settings for
- cairo
- jpeglib
- libpng
- libtiff
- tcltk
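A minimal sketch of how these directives might look in the recipe (an illustrative excerpt, not the complete package):
```python
class R(AutotoolsPackage):  # excerpt, illustrative
    variant('X', default=False, description='Enable X11 support')

    depends_on('cairo+X', when='+X')   # X-only dependencies
    depends_on('jpeg', when='+X')
    depends_on('libpng', when='+X')
    depends_on('libtiff', when='+X')
    depends_on('tk', when='+X')        # tk pulls in tcl

    def configure_args(self):
        # explicit with/without flags so configure never falls back
        # to whatever the host system happens to provide
        if '+X' in self.spec:
            return ['--with-x', '--with-cairo', '--with-jpeglib']
        return ['--without-x', '--without-cairo', '--without-jpeglib']
```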
The fixture was introduced in #19690 maybe accidentally.
It's not used in unit tests, and though it should be
mutable it seems an exact copy of its immutable version.
Before this change, in pipeline environments where runners do not have access
to persistent shared file-system storage, the only way to pass buildcaches to
dependents in later stages was by using the "enable-artifacts-buildcache" flag
in the gitlab-ci section of the spack.yaml. This change supports a second
mechanism, named "temporary-storage-url-prefix", which can be provided instead
of the "enable-artifacts-buildcache" feature, but the two cannot be used at the
same time. If this prefix is provided (only "file://" and "s3://" urls are
supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url
for a mirror where pipeline jobs will write buildcache entries for use by jobs
in subsequent stages. If this prefix is provided, a cleanup job will be
generated to run after all the rebuild jobs have finished that will delete the
contents of the temporary mirror. To support this behavior a new mirror
sub-command has been added: "spack mirror destroy" which can take either a
mirror name or url.
This change also fixes a bug in the generation of the "needs" list for each
job. Each job's "needs" list is supposed to contain only direct dependencies
for scheduling purposes, unless "enable-artifacts-buildcache" is specified;
only in that case are the needs lists supposed to contain all transitive
dependencies. This change fixes a bug that caused the needs lists to always
contain all transitive dependencies, regardless of whether or not
"enable-artifacts-buildcache" was specified.
* py-typing: new version, avoid issues with newer versions of python
https://pypi.org/project/typing/
"For package maintainers, it is preferred to use
typing;python_version<"3.5" if your package requires it to support
earlier Python versions."
* update conflict version / more message detail
* change the depends_on, leave a comment suggesting correct usage
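Translated into a Spack directive, the quoted upstream guidance would look roughly like this (version bound per the advice; dependency types assumed):
```python
# only pull in the typing backport on interpreters that actually need it,
# mirroring the upstream marker typing;python_version<"3.5"
depends_on('py-typing', when='^python@:3.4', type=('build', 'run'))
```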
The actual, documented minimum version of the cfitsio dependency,
v3.181, is now neither available for (easy) download from NASA, nor as
a Spack package. No upper bound on version number exists (at this time).
Pipelines: DAG pruning
During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors. By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline. To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument. To speed up the pipeline generation process, pass the --check-index-only argument. This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors. The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.
This change removes the final-stage-rebuild-index block from gitlab-ci section of spack.yaml. Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set. Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
Add `manual_download = True` to packages that need to do manual
downloads but do not have the `manual_download` attribute set. This
provides a message when installing these packages rather than a generic
fetch error.
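A minimal sketch of what setting the attribute looks like in a recipe (the package name and checksum are placeholders):
```python
from spack import *

class MyLicensedApp(Package):        # hypothetical package
    """Example package whose sources must be downloaded by hand."""
    homepage = "https://example.com/myapp"
    manual_download = True           # prints download instructions instead of a generic fetch error
    version('1.0', sha256='0' * 64)  # placeholder checksum
```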
Add versions 2020.12 and 2021.01. The viewer and trace viewer are now
integrated into a single program and one tar ball. Now available on
arm/aarch64 and now uses Java 11.
Update some things in hpctoolkit to prepare for a 2021.02.x release:
1. allow binutils to be built with +nls.
2. require libmonitor to be built with +dlopen.
3. allow rocm in more than just develop branch.
4. remove some conflicting setenv's in hpctoolkit module.
The SPACK_PYTHON environment variable can be set to a python interpreter to be
used by the spack command. This allows the spack command itself to use a
consistent and separate interpreter from whatever python might be used for package
building.
* add new flag when compiling mumps with %gcc@10.
* Fix style
* Try to fix formatting
* Use flag_handler approach suggested by @michaelkuhn
in the PR review.
* Delete former approach
* Another style issue
* Add another space
* More fixes
We still need mesa18 for some of our builds.
Those builds require python@2; normal mesa only works with
python@3.
* Remove the deprecation tag
* Add myself as a maintainer: I volunteer to help with this
package for the time being.
* There is only one version, no need to prefer it.
Modifications:
- Make use of SpackCommand objects wherever possible
- Deduplicated code when possible
- Moved cleaning of mirrors to fixtures
- Ensure mock configuration has a clear initialization order
* fixed install with ver 3 and python 3.0
* replaced @3 with @2.999
* [py-pyspark] added version requirements for py-py4j
* [py-pyspark] all versions require at least version 2.7 of python
* [py-pyspark] fixed comma syntax
Co-authored-by: Sid Pendelberry <sid@rit.edu>
The GROMACS package embeds references to its build tool chain.
Use the Spack utilities to make sure these references are correct
outside of the isolated Spack build environment.
`query()` calls `datetime.datetime.fromtimestamp` regardless of whether a
date query is being done. Guard this with an if statement to avoid the
unnecessary work.
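A sketch of the guard, with illustrative names (the real `query()` signature differs):
```python
import datetime

def query(records, start_date=None, end_date=None):  # illustrative signature
    for spec, timestamp in records:
        # only pay for fromtimestamp() when a date query was actually requested
        if start_date or end_date:
            when = datetime.datetime.fromtimestamp(timestamp)
            if start_date and when < start_date:
                continue
            if end_date and when > end_date:
                continue
        yield spec
```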
Constructing a spec from a name instead of setting name directly forces
from_node_dict to call Spec.parse(), which is slow. Avoid this by using a
zero-arg constructor and setting name directly.
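Roughly, the change amounts to this (a sketch, not the literal diff):
```python
from spack.spec import Spec

name = 'hdf5'
spec = Spec(name)  # before: forces a full Spec.parse() of the name

spec = Spec()      # after: zero-arg constructor, no parsing
spec.name = name   # set the attribute directly
```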
cmake was added as a runtime dependency to meson in #20449. This
introduces an unnecessary implicit cmake dependency, which increases
build time for meson considerably. cmake is only one of many methods for
finding dependencies (pkg-config, qmake etc.), which are also not
runtime dependencies of meson. Add cmake as a build dependency to mesa
instead.
This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.
This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated
The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.
Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.
Make a few adjustments to MockPackageMultiRepo, since its
docstring stated that it was supposed to mock
spack.repo.Repo while it was instead mocking spack.repo.RepoPath.
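A minimal usage sketch of the renamed contextmanager (the repository path is illustrative):
```python
import spack.repo

# accepts Repo objects or plain filesystem paths
with spack.repo.use_repositories('/path/to/mock/repo'):
    ...  # anything run here resolves packages against the swapped-in repo
```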
Some compilers, such as the NV compilers, do not recognize -isystem
dir when specified without a space.
Works: -isystem ../include
Does not work: -isystem../include
This PR updates the compiler wrapper to include the space with -isystem.
Environment views fail when the tmpdir used for view generation is
on a separate mount from the install_tree because the files cannot
be symlinked between the two. The fix is to use an alternative
tmpdir located alongside the view.
* [py-moviepy] created template
* [py-moviepy] added dependencies
* [py-moviepy] removed fixmes, added homepage and description
* [py-moviepy] updated to pypi and updated checksum
* [py-moviepy] added setuptools dependency
* [py-moviepy] more specific version limit
* [py-moviepy] added checksum for version 1.0.1
* [py-moviepy] numpy restriction not necessary here
* 3DTK: add new package
* Add missing opencv variants
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
* Fix cmake version req, add eigen dep
* Prefer trunk version
* Tell 3dtk where to find eigen
* Fix installation
* Fix installation
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
This PR fixes the case where groff fails to build if the spack install
path is really long. There are a couple of perl scripts that get built,
and used, during the build phase that will fail when the perl
interpreter line is too long. Filtering the lines will not work because
the files do not exist after the configure phase and patching after the
build phase is too late. This PR runs the scripts explicitly with the
spack perl via the $(PERL) variable in the call to the script.
* Procedure to deprecate old versions of software
* Add documentation
* Fix bug in logic
* Update tab completion
* Deprecate legacy packages
* Deprecate old mxnet as well
* More explicit docs
* py-dvc: new package
* Update var/spack/repos/builtin/packages/py-dvc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-dvc: add version dependency for py-networkx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Clarify relaxed double precision option
This is only intended for use on the Fujitsu PRIMEHPC platform
* Fix typo
* Shorten line to keep linter happy
Added conflict for macsio@1.1~mpi after investigating the source code. As of
the 1.1 tag, macsio does not properly guard out MPI commands. This is
verified as corrected in @develop.
* mxnet: convert to CMakePackage
* Package isn't installed yet, can't find libs
* Fix bug with GCC 8+ and CUDA 10 on PowerPC
* Add space
* Add patch to fix cmake cuda flags
* Space no longer needed
* Add patch to fix OpenBLAS linking
* Add missing CMake flag
* Fix env set, default to Distribution
* Add new version, patch
* added py-python-benedict recipe
* Update var/spack/repos/builtin/packages/py-python-benedict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
+ Provide optional variant `pythontools` (default False) that adds a run-time dependency on
`py-matplotlib`.
+ Latest versions require `cmake@3.18:` to support cuda features.
+ Enable a cmake option that forcibly disables qt support. Previously, draco would enable qt
support if it was available in the local build environment (outside of spack).
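A sketch of the corresponding directives (an illustrative excerpt; the cmake option name is a guess, not taken from the package):
```python
class Draco(CMakePackage):  # excerpt, illustrative
    variant('pythontools', default=False,
            description='Enable python-based plotting tools')
    depends_on('py-matplotlib', type='run', when='+pythontools')

    def cmake_args(self):
        # forcibly disable qt so a qt found in the ambient build
        # environment is never picked up
        return ['-DDRACO_ENABLE_QT:BOOL=OFF']  # hypothetical option name
```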
* graphviz: Remove ghostscript requirement when ~ghostscript
* Add doc variant and patch for 2.44.1
* Patch does not apply
* Update graphviz versions, using archives rather than git hash
* Complete implementation of doc variant
* Fix typo
This commit adds an option to the `external find`
command that allows it to search by tags. In this
way group of executables with common purposes can
be grouped under a single name and a simple command
can be used to detect all of them.
As an example introduce the 'build-tools' tag to
search for common development tools on a system
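On the package side, tagging is just a class attribute, so the whole group can then be detected together by the tag-based search described above; a sketch:
```python
class Cmake(CMakePackage):     # excerpt, illustrative
    tags = ['build-tools']     # grouped with other common development tools

    executables = ['^cmake$']  # what external detection looks for
```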
* add to LD_LIBRARY_PATH so that it finds libimf.so
* amrex: fix handling of CUDA arch (#20786)
* amrex: fix handling of CUDA arch
* amrex: fix style
* amrex: fix bug
* Update var/spack/repos/builtin/packages/amrex/package.py
* Update var/spack/repos/builtin/packages/amrex/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* ecp-data-vis-sdk: Combine the vis and io SDK packages (#20737)
This better enables the collective set to be deployed together,
satisfying each other's dependencies.
* r-sf: fix dependency error (#20898)
* improve documentation for Rocm (hip amd builds) (#20812)
* improve documentation
* astyle: Fix makefile for install parameter (#20899)
* llvm-doe: added new package (#20719)
The package contains duplicated code from llvm/package.py,
will supersede solve.
* r-e1071: added v1.7-4 (#20891)
* r-diffusionmap: added v1.2.0 (#20881)
* r-covr: added v3.5.1 (#20868)
* r-class: added v7.3-17 (#20856)
* py-h5py: HDF5_DIR is needed for ~mpi too (#20905)
For the `~mpi` variant, the environment variable `HDF5_DIR` is still required. I moved this command out of the `+mpi` conditional.
* py-hovorod: fix typo on variant name in conflicts directive (#20906)
* fujitsu-fftw: Add new package (#20824)
* pocl: added v1.6 (#20932)
Made versions 1.5 and lower conflict with a64fx.
* PCL: add new package (#20933)
* r-rle: new package (#20916)
Common 'base' and 'stats' methods for 'rle' objects, aiming to make it
possible to treat them transparently as vectors.
* r-ellipsis: added v0.3.1 (#20913)
* libconfig: add build dependency on texinfo (#20930)
* r-flexmix: add v2.3-17 (#20924)
* r-fitdistrplus: add v1.1-3 (#20923)
* r-fit-models: add v0.64 (#20922)
* r-fields: add v11.6 (#20921)
* r-fftwtools: add v0.9-9 (#20920)
* r-farver: add v2.0.3 (#20919)
* r-expm: add v0.999-6 (#20918)
* cln: add build dependency on texinfo (#20928)
* r-expint: add v0.1-6 (#20917)
* r-envstats: add v2.4.0 (#20915)
* r-energy: add v1.7-7 (#20914)
* r-ellipse: add v0.4.2 (#20912)
* py-fiscalyear: add v0.3.0 (#20911)
* r-ecp: add v3.1.3 (#20910)
* r-plotmo: add v3.6.0 (#20909)
* Improve gcc detection in llvm. (#20189)
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
* hatchet: updated urls (#20908)
* py-anuga: add new package (#20782)
* libvips: added v8.10.5 (#20902)
* libzmq: add platform conditions to libbsd dependency (#20893)
* r-dtw: add v1.22-3 (#20890)
* r-dt: add v0.17 (#20889)
* r-dosnow: add v1.0.19 (#20888)
* add version 1.0.16 to r-doparallel (#20886)
* add version 1.3.7 to r-domc (#20885)
* add version 0.9-15 to r-diversitree (#20884)
* add version 1.3-3 to r-dismo (#20883)
* add version 0.6.27 to r-digest (#20882)
* add version 1.5 to r-rngtools (#20887)
* add version 1.5.8 to r-dicekriging (#20877)
* add version 1.4.2 to r-httr (#20876)
* add version 1.28 to r-desolve (#20875)
* add version 2.2-5 to r-deoptim (#20874)
* add version 0.2-3 to r-deldir (#20873)
* add version 1.0.0 to r-crul (#20870)
* add version 1.1.0.1 to r-crosstalk (#20869)
* add version 1.0-1 to r-copula (#20867)
* add version 5.0.2 to r-rcppparallel (#20866)
* add version 2.0-1 to r-compositions (#20865)
* add version 0.4.10 to r-rlang (#20796)
* add version 0.3.6 to r-vctrs (#20878)
* amrex: add ROCm support (#20809)
* add version 2.0-0 to r-colorspace (#20864)
* add version 1.3-1 to r-coin (#20863)
* add version 0.19-4 to r-coda (#20862)
* add version 1.3.7 to r-clustergeneration (#20861)
* add version 0.3-58 to r-clue (#20860)
* add version 0.7.1 to r-clipr (#20859)
* add version 2.2.0 to r-cli (#20858)
* add version 0.4-3 to r-classint (#20857)
* add version 0.1.2 to r-globaloptions (#20855)
* add version 2.3-56 to r-chron (#20854)
* add version 0.4.10 to r-checkpoint (#20853)
* add version 2.0.0 to r-checkmate (#20852)
* add version 1.18.1 to r-catools (#20850)
* add version 1.2.2.2 to r-modelmetrics (#20849)
* add version 3.0-4 to r-cardata (#20847)
* add version 1.0.1 to r-caracas (#20846)
* r-lifecycle: new package at v0.2.0 (#20845)
* add version 3.0-10 to r-car (#20844)
* add version 3.4.5 to r-processx (#20843)
* add version 1.5-12.2 to r-cairo (#20842)
* add version 0.2.3 to r-cubist (#20841)
* add version 2.6 to r-rmarkdown (#20838)
* add version 1.2.1 to r-blob (#20819)
* add version 4.0.4 to r-bit (#20818)
* add version 2.4-1 to r-bio3d (#20816)
* add version 0.4.2.3 to r-bibtex (#20815)
* add version 3.1-4 to r-bayesm (#20807)
* add version 1.2.1 to r-backports (#20806)
* add version 2.0.3 to r-argparse (#20805)
* add version 5.4-1 to r-ape (#20804)
* add version 0.8-18 to r-amap (#20803)
* r-pixmap: added new package (#20795)
* zoltan: source code location change (#20787)
* refactor path logic
* added some paths to make compilers and libs discoverable
* add to LD_LIBRARY_PATH so that it finds libimf.so
and cleanup PEP8
* refactor path logic
* adding paths to LIBRARY_PATH so compiler wrappers will find -lmpi
* added vals for CC=icx, CXX=icpx, FC=ifx to generated module
* back out changes to intel-oneapi-mpi, save for separate PR
* Update var/spack/repos/builtin/packages/intel-oneapi-compilers/package.py
path is joined in _ld_library_path()
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* set absolute paths to icx,icpx,ifx
* dang close parenthesis
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: mic84 <mrosso@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
Co-authored-by: darmac <xiaojun2@hisilicon.com>
Co-authored-by: Danny Taller <66029857+dtaller@users.noreply.github.com>
Co-authored-by: Tomoyasu Nojiri <68096132+t-nojiri@users.noreply.github.com>
Co-authored-by: Shintaro Iwasaki <siwasaki@anl.gov>
Co-authored-by: Glenn Johnson <glenn-johnson@uiowa.edu>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
Co-authored-by: Henrique Mendonça <henrique@users.noreply.github.com>
Co-authored-by: h-denpo <57649496+h-denpo@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Thomas Green <tomgreen66@hotmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
Co-authored-by: Abhinav Bhatele <bhatele@cs.umd.edu>
Co-authored-by: a-saitoh-fj <63334055+a-saitoh-fj@users.noreply.github.com>
Co-authored-by: QuellynSnead <quellyn@lanl.gov>
The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.
Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.
* py-dictdiffer: fix offline dependencies
* Update var/spack/repos/builtin/packages/py-dictdiffer/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-flatten-dict: new recipe
* Update var/spack/repos/builtin/packages/py-flatten-dict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-flatten-dict: fix dependencies
* py-flatten-dict: fix dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo
package builds in Release mode
uses GCC options to link libstdc++ and libgcc statically
* clingo-bootstrap: apple-clang options to bootstrap statically on darwin
* clingo: fix the path of the Python interpreter
In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.
This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.
* clingo: the commit for "spack" version has been updated.
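A sketch of the `Python_EXECUTABLE` hint described in the second item above:
```python
def cmake_args(self):
    # pin CMake's Python detection to the python node of the current spec
    return ['-DPython_EXECUTABLE={0}'.format(self.spec['python'].command.path)]
```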
- add variants for build targets, language bindings, backends
- ensure selected variants are compatible with zfp version
- point to GitHub (not LLNL) tar balls
- add dependencies
- update link to homepage
- add maintainers
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Patch provided by @Billae
Avoid the following error:
File "/home/danlipsa/projects/spack/lib/spack/llnl/util/tty/log.py", line 768, in _writer_daemon
line = _retry(in_pipe.readline)()
File "/home/danlipsa/projects/spack/lib/spack/llnl/util/tty/log.py", line 830, in wrapped
return function(*args, **kwargs)
File "/usr/lib/python3.8/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x97 in position 220: invalid start byte
This PR adds:
1. A patch that fixes a bug in version 2.70
(will be fixed upstream in the next release: https://savannah.gnu.org/support/?110396).
2. A fix for the way we patch shebang in bin/autom4te.in.
For 2, we need to keep the original modification timestamp of the file.
Otherwise, we either get an empty man page for autom4te (versions 2.69 and before)
or a failure at the build time (versions 2.70 and after).
The difference has to do with the update of the missing script: https://git.savannah.gnu.org/cgit/automake.git/commit/lib/missing?id=a22717dffe37f30ef2ad2c355b68c9b3b5e4b8c7
It will take time until developers of Autotools-based packages adjust their scripts
to the new version, therefore, 2.69 is marked as preferred.
* New interface reconstruction package
* forgot to put in CMake option for Jali
* cleanup whitespace
* fix lines with more than 79 chars
* more long line cleanup
* fix typo WONTON_ENABLE_Kokkos ---> TANGRAM_ENABLE_Kokkos
* fix bugs in CMake section
* more compact cmake block
* update hash for 1.2.10 and add 1.2.11
* update recipe for Portage 3.0.0
* removing old versions - they won't build with the new recipe and the url specification doesn't work for them
* update version to 3.3.6
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* added pytest-benchmark recipe
* Update var/spack/repos/builtin/packages/py-pytest-benchmark/package.py
Added Python2 dependence.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-pytest-cpp
* Update var/spack/repos/builtin/packages/py-pytest-cpp/package.py
package is !=5.4.0, use @:5.3.999
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-pytest-timeout
* Update var/spack/repos/builtin/packages/py-pytest-timeout/package.py
Added Python2.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-openmc
* Update var/spack/repos/builtin/packages/py-openmc/package.py
specify branch when using branch names for versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
use run after fixture to install openmc lib
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
Simplify copying openmc library to py-openmc prefix using install
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
NumPy should be 1.9+
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix paren missing
* Update var/spack/repos/builtin/packages/py-openmc/package.py
fixed parens
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
use v0.11.0 in URL
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Sometimes we need to patch a file that is a dependency for some other
automatically generated file that comes in a release tarball. As a
result, make tries to regenerate the dependent file using additional
tools (e.g. help2man), which would not be needed otherwise.
In some cases, it's preferable to avoid that (e.g. see #21255). A way
to do that is to save the modification timestamps before patching and
restoring them afterwards. This PR introduces a context wrapper that
does that.
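A sketch of such a wrapper (the name and its exact location in Spack are assumptions):
```python
import contextlib
import os

@contextlib.contextmanager
def keep_modification_time(*filenames):
    # record mtimes before the body (e.g. the patching) runs ...
    times = {f: os.stat(f).st_mtime for f in filenames}
    try:
        yield
    finally:
        # ... and put them back so make does not regenerate dependent files
        for f, mtime in times.items():
            if os.path.exists(f):
                os.utime(f, (os.path.getatime(f), mtime))
```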
Python extensions use CC and LDSHARED from the sysconfig module to
build. When Spack installs Python, it replaces the Spack compiler
wrappers in these values with the underlying compilers (since these
wrappers are not useful outside of the context of running Spack).
In order to use the Spack compiler wrappers when building Python
extensions with Spack, Spack sets the LDSHARED environment variable
when running `Python.setup_py` (which overrides sysconfig). However,
many Python extensions use an alternative method to build (namely
PythonPackage.setup_py), which meant that LDSHARED was not set (and
RPATHs were not inserted for dependencies).
This commit makes the following changes:
* Sets LDSHARED in the environment: this applies to all commands
executed during the build, rather than for a single command
invocation
* Updates the logic to set LDSHARED: this replaces the compiler
executable in LDSHARED with the Spack compiler wrapper. This
means that for some externally-built instances of Python,
Spack will now switch to using the Spack wrappers when building
extensions. The behavior is expected to be the same for Spack-
built instances of Python.
* Performs similar modifications for LDCXXSHARED (to ensure RPATHs
are included for C++ codes)
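A sketch of the replacement logic, assuming `env` is Spack's environment-modification object and `compiler_wrapper` the wrapper path:
```python
import sysconfig

def set_ldshared(env, compiler_wrapper):
    """Replace the compiler executable in LDSHARED with the Spack wrapper."""
    ldshared = sysconfig.get_config_var('LDSHARED')  # e.g. "gcc -pthread -shared"
    if ldshared:
        parts = ldshared.split()
        parts[0] = compiler_wrapper           # keep the flags, swap the compiler
        env.set('LDSHARED', ' '.join(parts))  # exported for the whole build
```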
On ppc64le and aarch64, Spack tries to execute any "config.guess" and
"config.sub" scripts it finds in the source package.
However, in the libsodium tarball, these files are present but not
executable. This causes the following error when trying to install
libsodium with spack:
Error: RuntimeError: Failed to find suitable substitutes for config.sub, config.guess
Fix this by chmod-ing the scripts in the patch() function of libsodium.
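A sketch of the fix (the script paths are assumptions about the tarball layout):
```python
import os
import stat

def patch(self):
    # the tarball ships these scripts without the execute bit set
    for script in ('build-aux/config.guess', 'build-aux/config.sub'):
        mode = os.stat(script).st_mode
        os.chmod(script, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```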
* recipe: add version 6.1.1 for pytest
add recipe for new dependency py-iniconfig
* fix: 'SyntaxError: invalid syntax' during unittests
* requested changes on the pull request done
* requested changes on dep for py-pytest
* change constraint on python for importlib-metadata
* undo change on py-importlib-metadata as requested
* bug fix
* bug fix on py-wcwidth
* fix as requested
* forget @ in when param
* forget a colon
* add new versions py-pytest and py-py
* fix setuptools* version
* add rule for more-itertools
* [py-intel-openmp] created template
* [py-intel-openmp] is wheel
* [py-intel-openmp] fixed version for linux
* [py-intel-openmp] removed fixmes, added homepage and description
* [py-intel-openmp] added macos support
* [py-intel-openmp] style fix
* petsc: add a +mkl-pardiso variant
mkl_pardiso solver is distributed with intel-mkl
* petsc: depend on mkl instead of intel-mkl
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The first of my two upstream patches to mypy landed in the 0.800 tag that was released this morning, which lets us use module and package parameters with a .mypy.ini file that has a files key. This uses those parameters to check all of spack in style, but leaves the packages out for now since they are still very, very broken. If no package has been modified, the packages are not checked, but if one has they are. Includes some fixes for the log tests since they were not type checking.
Should also fix all failures related to "duplicate module named package" errors.
Hopefully the next drop of mypy will include my other patch so we can just specify the modules and packages in the config file to begin with, but for now we'll have to live with a bare mypy doing a check of the libs but not the packages.
* use module and package flags to check packages properly
* stop checking package files, use package flag for libs
The packages are not type checkable yet, need to finish out another PR
before they can be. The previous commit also didn't check the libraries
properly, this one does.
Add versions 4.12.6 and 5.0.3.
I think the preferred version was there to keep version 4,
but that's why we have Spack: people can install
whatever version they want.
And root has a properly versioned dependency.
* mumps: Fix for problematic src/makefile patch (#20590)
Minor change in src/Makefile between 5.2.0 and 5.3.3 causing patch to
break. Split into 2 patchfiles
* mumps: Additional patch for fixing #20590
This is to fix an issue wherein the build fails on Ubuntu due to undefined
symbols, despite symbols being included in other libraries referenced
on the compilation line. I believe the issue is that the inclusion
of libsmumps.so was (due to my original patch) causing
libmumps_common.so to be automatically loaded, but since libpords.so
was not also required, the error was occurring. I have added libpords.so
along with libmumps_common.so to be explicit dependencies of
libsmumps.so, etc., which seems to resolve the issue.
* ArrayFire: Add version 3.7.2.
* ArrayFire: Allow using MKL as the FFTW provider.
* ArrayFire: Ensure the libraries are properly found.
The required backend(s) can be specified in the library query.
* openssl: remove preprocessor flags incompatible with NVIDIA HPC SDK
* Update var/spack/repos/builtin/packages/openssl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Scott McMillan <smcmillan@nvidia.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-statsmodels] added version 0.12.1 and updated dependencies accordingly
* [py-statsmodels] added python requirements for new version and fixed formatting for readability
* added m4 dep to PVM recipe
* added libtirpc dep to PVM recipe
* decode str or bytestr string to unicode
* Resolved comments from @adamjstewart on setup_build_environment
* When the SCR spec specifies a resource_manager=SLURM or LSF flag, propagate the spec through to
the libyogrt scheduler=slurm or lsf
* Use libyogrt default scheduler option when the SCR spec does not specify LSF or SLURM
* updated relion for new versions
* Switched to checksum versions
* Enabled spack tracking for MKL and TBB when CPU optimizations are enabled
* Added variants to control MKL FFT and Ppatent feature
* Replaced tags with sha256 for older versions and switched to virtual packages
* py-funcy: new recipe
* Update var/spack/repos/builtin/packages/py-funcy/package.py
add build and run python dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* sbang pushed back to callers;
star moved to util.lang
* updated unit test
* sbang test moved; local tests pass
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
* fixing outdated metis link
* updated url to the official website since the previous url was a GitHub repo that is an unofficial mirror that only contains the latest version
* py-dictdiffer: new recipe
* Update var/spack/repos/builtin/packages/py-dictdiffer/package.py
add correct setuptools dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update NEURON simulator package
- update recipe to support autoconf as well as cmake
- new versions >=7.8 support cmake
- remove old variants
- added patch for latest bug fix release 7.8.2
Co-authored-by: Kumbhar Pramod Shivaji <kumbhar@bbpv1.epfl.ch>
Co-authored-by: Kumbhar Pramod Shivaji <kumbhar@bb-c02vf1h0hv2r.epfl.ch>
* NAMD: FIX build +cuda
Hi,
If I try to compile NAMD with CUDA support, it fails because it cannot find the file "{self.arch}.cuda", which is under the "arch" folder.
* NAMD: FIX mpi ~smp
Fix `spack install namd ^charmpp backend=mpi ~smp`
* ssht: New version 1.3.4
ssht changed its configuration mechanism from "home-grown" to "cmake". The previously current version 1.2b1 (a beta release) is thus unfortunately not available any more.
* ssht: Don't set build type
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Don't use CUDA for hipblas
* old versions use TRY_CUDA
* Update var/spack/repos/builtin/packages/hipblas/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add version 2.2-2 to r-gwmodel
* Update var/spack/repos/builtin/packages/r-gwmodel/package.py
Fix comma, space issue.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add version 0.3.17 to r-inline
* Drop R version constraint
A really old version of R was specified in the 0.3.14 and 0.3.15
versions of r-inline. This constraint was dropped in the 0.3.17 version.
Drop it from the spack recipe as well.
* vasp: fix build with gfortran 10
Avoid Error: Type mismatch between actual argument at (1) and actual argument at (2)
* vasp: add version 6.1.1
* vasp 6: allow building without CUDA
* Adding PiP recipe
* pip@1 recipe (it seems working)
* change install dir hierarchy
* installing PiP man pages
* add pip-glibc & pip-gdb
* fix configure option designations, fix dependency types
* fix dependency type of pip
* use AutotoolsPackage in pip recipe
* add patch for pip-glibc & pip-gdb to enable 'disable-werror'
* change glibc install directory
* add linux distro check to pip-gdb
* create process-in-process package
* use flag_handler and join_path
* add gcc version constraint, change install-test to check-installed
* fix gcc version designations on conflicts()
* add constraint of target cpu, fix flake8 warnings
* add version constraint to resource()
* Some fixes to adapt the current version
not to execute 'piplnlibs'
change documentation install command
* Update
new branch name of PiP-gdb
adapting PiP-Testsuite
* update pip-gdb github urls
* The very first commit of Process-in-Process (PiP)
details can be found at https://github.com/RIKEN-SysSoft/PiP
* Fix comment style issues
* New Package: Process-in-Process (PiP) -- 2nd trial
* fix style issue
* change inline comments style (required to have two spaces)
Co-authored-by: Daiki Matsunaga <daikim@axe.bz>
Imagemagick-7.0.8 needs to link against libltdl. Otherwise, the build will fail with:
```
2 errors found in build log:
503 checking for libltdl...
504 checking ltdl.h usability... no
505 checking ltdl.h presence... no
506 checking for ltdl.h... no
507 checking for lt_dlinit in -lltdl... no
508 checking if libltdl package is complete... no
>> 509 configure: error: in `/tmp/gpjohnsn/spack-stage/spack-stage-imagemagick-7.0.8-7-4y44gaklhhciiwjzhfpxjfwdj5q
ltjp3/spack-src':
>> 510 configure: error: libltdl is required for modules and OpenCL builds
511 See `config.log' for more details
```
* add version 3.8.2 to r-gtools
* Improve formatting of description
In case the list gets formatted as a non-list:
- added semicolons to end of list items
- replaced dashes with [#]
* add version 1.30 to r-knitr
* Fix version constraints
- r-digest
- r-formatr
The version constraints on those packages should actually be in the `when`
clause.
'date' is a C++ header library offering extensive date and time
functionality for the C++11, C++14 and C++17 standards written by Howard
Hinnant and released under the MIT license. A slightly modified version
has been accepted (along with 'tz.h') as part of C++20. This package
regroups all header files from the upstream repository by Howard Hinnant
so that other R packages can use them in their C++ code. At present, few
of the types have explicit 'Rcpp' wrapper though these may be added as
needed.
Designed to ease the application and comparison of multiple hypothesis
testing procedures for FWER, gFWER, FDR and FDX. Methods are
standardized and usable by the accompanying 'mutossGUI'.
Utility functions that enhance the 'parallel' package and support the
built-in parallel backends of the 'future' package. For example,
availableCores() gives the number of CPU cores available to your R
process as given by the operating system, 'cgroups' and Linux
containers, R options, and environment variables, including those set by
job schedulers on high-performance compute clusters. If none is set, it
will fall back to parallel::detectCores(). Another example is
makeClusterPSOCK(), which is backward compatible with
parallel::makePSOCKcluster() while doing a better job in setting up
remote cluster workers without the need for configuring the firewall to
do port-forwarding to your local computer.
Contains third-party map tile provider information from 'Leaflet.js',
<https://github.com/leaflet-extras/leaflet-providers>, to be used with
the 'leaflet' R package. Additionally, 'leaflet.providers' enables users
to retrieve up-to-date provider information between package updates.
Provides a header only, C++11 interface to R's C interface. Compared to
other approaches 'cpp11' strives to be safe against long jumps from the
C API as well as C++ exceptions, conform to normal R function semantics
and supports interaction with 'ALTREP' vectors.
Query, set, delete credentials from the 'git' credential store. Manage
'GitHub' tokens and other 'git' credentials. This package is to be used
by other packages that need to authenticate to 'GitHub' and/or other
'git' repositories.
Importance sampling from the truncated multivariate normal using the GHK
(Geweke-Hajivassiliou-Keane) simulator. Unlike Gibbs sampling which can
get stuck in one truncation sub-region depending on initial values, this
package allows truncation based on disjoint regions that are created by
truncation of absolute values. The GHK algorithm uses simple Cholesky
transformation followed by recursive simulation of univariate truncated
normals hence there are also no convergence issues. Importance sample is
returned along with sampling weights, based on which, one can calculate
integrals over truncated regions for multivariate normals.
This release also includes the HDF5 VOL plugin, so I've added an additional
function to ensure the HDF5_PLUGIN_PATH env var gets updated with the adios
install prefix.
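Something along these lines, assuming the plugin is installed under the package's lib directory:
```python
def setup_run_environment(self, env):
    # let HDF5 find the ADIOS VOL plugin shipped with this install
    env.prepend_path('HDF5_PLUGIN_PATH', self.prefix.lib)
```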
* intel-xed: add version 12.0.1
Rework the version numbers for intel-xed, now that xed has actual
releases and tags. Add releases 11.2.0 and 12.0.1. Rename 2019.03.01
to 10.2019.03 as a legacy version that fits in the new order.
Add variant +pic to compile libxed.a with PIC code so that it can be
linked into another shared library.
Add conflict for aarch64.
Add mwkrentel as maintainer.
* py-pyfiglet:new recipe
* Update var/spack/repos/builtin/packages/py-pyfiglet/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pyfiglet: use pypi url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
fixes #20736
Before this one-line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.
This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.
fixes #20611
The conflict was triggered by an invalid value of the
'scheduler' variant. This causes Spack to error when libyogrt
facts are validated by the ASP-based concretizer.
At some point in the past, the skip_patch argument was removed
from the call to package.do_install(); this broke the --skip-patch
flag on the dev-build command.
There are two issues with hip where it tries to autodetect the patch
version number from git (when installed), but it does not check if it
even is inside of a git repo. The result is we end up with a shared lib
with a trailing dash in the library suffix: `libamd64.so.x.y.z-`, which
confuses GCC. The patch tries to check if the `.git` folder exists, and
if it does not, it handles version numbering the same as when git was
not installed previously.
* opencl-c-headers: add new version 2020.12.18
* opencl-clhpp: add new version 2.0.13
* opencl-headers: now supports OpenCL 3.0 with new versions of opencl-c-headers and opencl-clhpp
* ocl-icd: add new version 2.2.14 and can now provide OpenCL 3.0
PaRSEC: the Parallel Runtime Scheduler and Execution Controller for micro-tasks on distributed heterogeneous systems.
Signed-off-by: Aurelien Bouteiller <bouteill@icl.utk.edu>
* py-tensorflow: 2.4.0 and dependency updates
* minor version updates
* fix numpy dependency
* dependency rework: compatible release issues, start to clarify cuda versions
* --incompatible_no_support_tools_in_action_inputs was removed in bazel 3.6
* adjustment to versions of cuda dependency, also make sure that
patches/filters still apply to certain release trains.
* python 3.8 and tf < 2.2 have issues
* missed py-grpcio version bump
Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).
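A sketch of pointing dependents at the wrappers via the usual MPI-provider convention (the install layout is an assumption):
```python
import os

def setup_dependent_package(self, module, dependent_spec):
    # expose wrapper paths on the spec for dependent builds
    bindir = self.prefix.bin  # assumed wrapper location
    self.spec.mpicc = os.path.join(bindir, 'mpicc')
    self.spec.mpicxx = os.path.join(bindir, 'mpicxx')
    self.spec.mpif77 = os.path.join(bindir, 'mpif77')
    self.spec.mpifc = os.path.join(bindir, 'mpifc')
```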
* eospac: add version 6.4.2beta
* eospac: clarify EOSPAC "beta" versions
Compared to 6.4.1, EOSPAC 6.4.2beta contains only one change, a fix
for an inability to read some SESAME files in ASCII format. From the
release announcement,
EOSPAC 6.4.2beta has been released for general use as the latest
(i.e., eospac6-latest) versions. This is a small patch to the
previously-released version 6.4.1, which was requested by an
affected user.
But the "beta" label can cause confusion, especially when a beta
version is the new preferred version, as is the case here. As
suggested by reviewers, add a comment clarifying EOSPAC's use of
"beta".
This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).
The C library for the current compiler should already be used by the compiler, so there is no point in returning any libs for this package.
Without this patch, if one uses this as an external package (as intended), it can inject system library paths into the build process at the wrong place.
fixes #20679
In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.
This adds a -i option to "spack python" which allows use of the
IPython interpreter; it can be used with "spack python -i ipython".
This assumes it is available in the Python instance used to run
Spack (i.e. that you can "import IPython").
* Update recipe for AOMP.
Reduced repetition with version hashes.
Expanded dependency versioning.
Reduced repetition with cmake args.
Added version 3.10.0
* Update dependency versions and remove unneeded quotes.
* Update var/spack/repos/builtin/packages/aomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update of Eccodes to 2.19.1
* PEP8
* PEP8
* PEP8-whitespace
* Update var/spack/repos/builtin/packages/eccodes/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Michael Blaschek <michael.blaschek@univie.ac.at>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.
- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`
* OpenMPI: Depends on hwloc & libevent
Both hwloc & libevent are required dependencies of Open MPI.
While they are also shipped internally, newer releases (>=4.0)
will start looking for external packages by default.
This caused build issues of Open MPI 4.0.5 with Fortran on macOS
10.15.
* Open MPI 4.0: libevent external
Internally shipped libevent just works fine for prior releases.
#20076 moved Cray-specific MPICH support from the Spack MPICH package
to a new cray-mpich Package. This broke existing package installs
using external mpich on Cray systems. This PR keeps the cray-mpich
package but restores the Cray-specific MPICH support for older
installations.
In the future this support should be removed from the Spack mpich
package and users should be directed to use cray-mpich on Cray.
Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
INCONSISTENT results from clingo, due to ambiguous semantics like:
version_constraint_satisfies("mpi", ":1", ":3")
version_constraint_satisfies("mpi", ":3", ":1")
To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.
This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.
The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.
To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program. i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constraints are only handled on versions -- we don't
handle variants or anything else yet, but the key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.
One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:
:- virtual_node(Virtual),
version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
not version_constraint_satisfies(Virtual, V1, V2).
This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.
We can investigate adding synthetic versions for virtuals in the future,
to speed this up.
This code in `SpecBuilder.build_specs()`, introduced in #20203, can loop
seemingly interminably for very large specs:
```python
set([spec.root for spec in self._specs.values()])
```
It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.
The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:
```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```
We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
`spack license update-copyright-year`
- [x] appease mypy with some additions to package.py needed
for oneapi.py
This adds a new subcommand to `spack license` that automatically updates
the copyright year in files that should have a license header.
- [x] add `spack license update-copyright-year` command
- [x] add test
This adds two lines to `.gitattributes`:
- [x] exclude vendored code from GitHub's language calculation
- [x] recognize `.lp` files as Prolog (closest language to ASP that
linguist supports)
It looks like there have been two attempts
(https://github.com/github/linguist/issues/3867,
https://github.com/github/linguist/issues/4860) to add ASP as a language
to Linguist, but it's not widespread enough to be standard yet (or at
least the people who submitted the PRs haven't been able to show enough
stats to prove it). We'll settle for calling ASP "Prolog" for now as
that'll get us some syntax highlighting for `concretize.lp`.
* hdf-eos5: new package (HDF for Earth Observing System using hdf v5)
* hdf-eos5: flake8 fixes
* hdf-eos5: trying to fix flake8 errors
* hdf-eos5: flake8 fix
* hdf-eos5: Fix to support Fortran codes
The -Df2cFortran compilation flag needed to support Fortran
* hdf-eos2: new package (HDF for Earth Observing System using hdf v2)
* hdf-eos2: flake8 fixes
* hdf-eos2: fix to support Fortran
Need the compilation flag -Df2cFortran to allow support for Fortran
codes
libuuid is currently contained in util-linux, libuuid and uuid. This
change introduces a new virtual provider `uuid` and renames the existing
`uuid` package to `ossp-uuid`.
util-linux's libuuid is provided in the form of a separate package
util-linux-uuid to make sure that packages depending on uuid and
util-linux can use a separate uuid implementation, which the concretizer
does not allow if libuuid is contained in util-linux.
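In package terms the split looks roughly like this (illustrative excerpts):
```python
class UtilLinuxUuid(AutotoolsPackage):  # the new split-out package
    provides('uuid')

class OsspUuid(AutotoolsPackage):       # renamed from 'uuid'
    provides('uuid')

# dependents then ask for the virtual and let the concretizer pick:
# depends_on('uuid')
```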
- added several patches
- added some missing dependencies
- remove unneeded dependencies
- add CUDA support
- disable queue support, which was limited, and broken anyway
- move package text that was specific to the package to a comment, so it
does not show up in the environment module
- set conflicts for cuda and compilers
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* OpenMPI: Add version 4.1.0
* OpenMPI: Prefer version 4.0.5.
* OpenMPI: Update links
The download links changed, there is currently a redirection but it might not work forever. The website also switched to https.
Previously compiler-rt didn't correctly pass through cmake
variables for python when building the various sanitizers.
This patch passes these variables through.
This patch may also apply correctly to any version of LLVM
that uses the newer monorepo-style organization, and to any
older LLVM newer than 7.0.0, as long as the paths were set
appropriately. However, this was not done because it was not
tested with older LLVM releases.
Fixes#19908
See also: https://bugs.llvm.org/show_bug.cgi?id=48180
This updates the UnifyFS packages to account for the latest v0.9.1
release.
Updates required and optional dependencies for the respective
releases.
Locks margo and mercury dependencies at specific versions while
integration with their latest versions is still in progress.
* PGI compiler has trouble with avx2 SIMD support
(https://github.com/FFTW/fftw3/issues/78)
* Hew to the project's preferred indentation standard.
* Expand '%nvhpc' logic to include '%pgi'.
* Exceeded the max line-length.
* Break up the long compound statement into nested if's.
* Inadvertently picked up an extraneous file.
* PGI compiler has trouble with avx2/avx-512 SIMD support, too.
* Add PGI runtime libs to LDFLAGS when '%pgi' in spec.
* Revert "Add PGI runtime libs to LDFLAGS when '%pgi' in spec."
This reverts commit 31c3ef8ea2.
* Add PGI runtime libs to LDFLAGS when '%pgi' in spec.
GCC looks for included files based on several env vars.
Remove C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, and OBJC_INCLUDE_PATH
from the build environment to ensure it's clean and prevent
accidental clobbering.
* Adding support for the CMake flags in LBANN that are missing.
* Added new flag to OpenCV dependency and removed negative variants
since OpenCV no longer turns on everything by default. Removed CMake
flags in LBANN that have been deprecated.
* Removed type='build' flags from dependencies so that they get linked
into an environment's view.
* Removed type='build' flags from dependencies so that they get linked
into an environment's view. Fixed the DiHydrogen variant to enable the
DistConv feature, renamed to +distconv from +legacy. Added a conflicts
line to indicate that DistConv and ROCm don't work with +half
support.
* Fixed Flake8 and cleaned up ordering of variants.
* Flake8
* Backed out changes to not mark and cmake and ninja as build
dependencies, which was introduced to make sure that they appear in
a spack environment.
* Backed out changes to not mark doc related packages as build
dependencies, which was introduced to make sure that they appear
in a spack environment.
* Fixed how recipe communicates the intent to build and run tests to the
package CMake.
This is to make sure that the build system doesn't pick up a library that
would happen to be available.
Co-authored-by: Baptiste Jonglez <git@bitsofnetworks.org>
Environment yaml files should not have default values written to them.
To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).
Includes regression test.
This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:
* An inheritable IntelOneApiPackage which knows how to invoke the
installation script based on which components are requested
* For components which include headers/libraries, an inheritable
IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
icc/ifortran but these are not currently detected in this PR
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.
In addition to these changes, this includes:
* rename `spack flake8` to `spack style`
Aliases flake8 to style, and just runs flake8 as before, but with
a warning. The style command runs both `flake8` and `mypy`,
in sequence. Added --no-<tool> options to turn off one or the
other, they are on by default. Fixed two issues caught by the tools.
* stub typing module for python2.x
We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x (a sketch of the idea appears after this list).
Doing it this way means we can only reliably use all type hints in
python3.7+, and `.mypy.ini` has been updated to reflect that.
* add non-default black check to spack style
This is a first step to requiring black. It doesn't enforce it by
default, but it will check it if requested. Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow. All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.
* use style check in github action
Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
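A minimal sketch of the python2 stub idea mentioned above; this is an illustration of the approach, not the shipped stub:
```python
# typing.py -- a do-nothing stand-in that is importable on Python 2.
class _Stub(object):
    def __getitem__(self, item):   # supports List[int], Optional[str], ...
        return self

    def __call__(self, *args, **kwargs):  # supports TypeVar('T'), cast(...)
        return self

Any = List = Dict = Set = Tuple = Optional = Union = Callable = _Stub()
TypeVar = cast = _Stub()
```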
We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:
```
node(Package) :- attr("node", Package).
attr("node", Package) :- node(Package).
version(Package, Version) :- attr("version", Package, Version).
attr("version", Package, Version) :- version(Package, Version).
```
We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
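As a toy illustration of the equivalence (not Spack's actual code), a fact emitter only needs one generic `attr` form per arity:
```python
# Every spec attribute becomes a generic attr(...) fact, so
# concretize.lp only needs one translation rule per arity.
def emit_attr_facts(spec_attrs):
    for head, *args in spec_attrs:
        quoted = ", ".join('"{0}"'.format(a) for a in args)
        print('attr("{0}", {1}).'.format(head, quoted))

emit_attr_facts([("node", "hdf5"), ("version", "hdf5", "1.10.7")])
# attr("node", "hdf5").
# attr("version", "hdf5", "1.10.7").
```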
This patch logic resolves a linking issue with ncurses in the mesa
package. This appears to be a recurring problem that was identified in
the mesa gitlab issues here:
https://gitlab.freedesktop.org/mesa/mesa/-/issues/2843
Using `_llvm_method = 'auto'` is broken. This patch replaces that with
`_llvm_method = 'config-tool'`, which is a hack, but makes it possible
to build.
I have commented on the closed issue (2843), referencing the original
author of the bug, and one of the mesa developers, so perhaps they will
fix the problem.
This PR does three related things to try to improve developer tooling quality of life:
1. Adds new options to `.flake8` so it applies the rules of both `.flake8` and `.flake_package` based on paths in the repository.
2. Adds a re-factoring of the `spack flake8` logic into a flake8 plugin so using flake8 directly, or through editor or language server integration, only reports errors that `spack flake8` would.
3. Allows star import of `spack.pkgkit` in packages, since this is now the thing that needs to be imported for completion to work correctly in package files, it's nice to be able to do that.
I'm sorely tempted to sed over the whole repository and put `from spack.pkgkit import *` in every package, but at least being allowed to do it on a per-package basis helps.
As an example of what the result of this is:
```
~/Workspace/Projects/spack/spack develop* ⇣
❯ flake8 --format=pylint ./var/spack/repos/builtin/packages/kripke/package.py
./var/spack/repos/builtin/packages/kripke/package.py:6: [F403] 'from spack.pkgkit import *' used; unable to detect undefined names
./var/spack/repos/builtin/packages/kripke/package.py:25: [E501] line too long (88 > 79 characters)
~/Workspace/Projects/spack/spack refactor-flake8*
1 ❯ flake8 --format=spack ./var/spack/repos/builtin/packages/kripke/package.py
~/Workspace/Projects/spack/spack refactor-flake8*
❯ flake8 ./var/spack/repos/builtin/packages/kripke/package.py
```
* qa/flake8: update .flake8, spack formatter plugin
Adds:
* Modern flake8 settings for per-path/glob error ignores, allows
packages to use the same `.flake8` as the rest of spack
* A spack formatter plugin to flake8 that implements the behavior of
`spack flake8` for direct invocations. Makes integration with
developer tooling nicer, linting with flake8 reports only errors that
`spack flake8` would report. Using pyls and pyls-flake8, or any other
non-format-dependent flake8 integration, now works with spack's rules.
* qa/flake8: allow star import of spack.pkgkit
To get working completion of directives and spack components it's
necessary to import the contents of spack.pkgkit. At the moment doing
this makes flake8 displeased. For now, allow spack.pkgkit and spack
both, next step is to ban spack * and require spack.pkgkit *.
* first cut at refactoring spack flake8
This version still copies all of the files to be checked as before, and
some other things that probably aren't necessary, but it relies on the
spack formatter plugin to implement the ignore logic.
* keep flake8 from rejecting itself
* remove separate packages flake8 config
* fix failures from too many files
I ran into this in the PR converting pkgkit to std. The solution in
that branch does not work in all cases as it turns out, and all the
workarounds I tried to use generated configs to get a single invocation
of flake8 with a filename option to work failed. It's an astonishingly
frustrating config option.
Regardless, this removes all temporary file creation from the command
and relies on the plugin instead. To work around the huge number of
files in spack and still allow the command to control what gets checked,
it scans files in batches of 100. This is a completely arbitrary number
but was chosen to be safely under common line-length limits. One
side-effect of this is that every 100 files the command will produce
output, rather than only at the end, which doesn't seem like a terrible
thing.
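A minimal sketch of the batching, assuming `files` is a list of paths and `run_checker` invokes flake8 on one batch:
```python
# Scanning in fixed-size batches keeps each command line safely short,
# at the cost of output appearing once per batch instead of at the end.
def check_in_batches(files, run_checker, batch_size=100):
    for i in range(0, len(files), batch_size):
        run_checker(files[i:i + batch_size])
```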
* Dependencies of Go will now correctly set the GOPATH for the
appropriate spec to avoid using the user's default path.
* Bumped version to latest releases (1.15.6 & 1.14.13).
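A hedged sketch of the GOPATH idea; the hook name comes from Spack's package API, but the exact path chosen here is an assumption:
```python
# Point GOPATH at a per-build location so dependents never fall back
# to the user's default ~/go workspace.
def setup_dependent_build_environment(self, env, dependent_spec):
    env.prepend_path("GOPATH", dependent_spec.package.stage.path)
```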
Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.
- [x] make `master` the default so we don't have to keep telling people
to install `clingo@master`. We'll update the preferred version when
there's a new release.
Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.
This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.
The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.
This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.
There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.
`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid " info: atom does not
occur in any rule head:" warnings.
Since zsh can load bash completion files natively, seems reasonable to just turn this on.
The only changes are to switch from `type -t` which zsh doesn't support to using `type`
with a regex and adding a new arm to the sourcing of the completions to allow it to work
for zsh as well as bash.
Could use more bash/dash/etc testing probably, but everything I've thought to try has
worked so far.
Notes:
* unit-test zsh support, fix issues
Specifically fixed word splitting in completion-test, use a different
method to apply sh emulation to zsh loaded bash completion, and fixed
an incompatibility in regex operator quoting requirements.
* compinit now ignores insecure directories
Completion isn't meant to be enabled in non-interactive environments, so
by default compinit will ask the user if they want to ignore insecure
directories or load them anyway. To pass the spack unit tests in GH
actions, this prompt must be disabled, so ignore explicitly until a
better solution can be found.
* debug functions test also requires bash emulation
COMP_WORDS is a bash-ism that zsh doesn't natively support, turn on
emulation for just that section of tests to allow the comparison to
work. Does not change the behavior of the functions themselves since
they are already pinned to sh emulation elsewhere.
* propagate change to .in file
* fix comment and update script based on .in
This PR addresses a number of issues related to compiler bootstrapping.
Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install as they think they have not all their
dependencies installed. This PR collects the dependents and sets them on
compiler tasks.
2. Allow bootstrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec. Allow them to build with less
aggressive target optimization settings.
3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly. Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.
This PR also:
- adds testing
- improves concretizer handling of target ranges
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Cray's version of MPICH uses a different versioning system than
MPICH, so it has been split into its own package. It is an
external-only package (always provided by the system, never
installed by Spack).
* Kluge to get the gfortran linker to work correctly on Big Sur.
* Fixed formatting error; stetting the other.
* Removed spaces.
* Added comment, mainly to re-trigger Spack CI.
Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`. The rules look like this:
```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1. :- version_satisfies("python", "2.6:").
```
So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.
We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.
We can replace rules like the ones above with facts like this:
```
version_satisfies("python", "2.6:", "2.4")
```
And ground them in `concretize.lp` with rules like this:
```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
:- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
:- version(Package, Version), version_satisfies(Package, Constraint, Version).
```
The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/3` from `version/2`.
This form is also safe for grounding -- If we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.
- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed
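A hedged sketch of the Python side of this division of labor: compare every known version against the constraint and emit one `version_satisfies/3` fact per match, leaving the grounding to the rules in `concretize.lp` shown above.
```python
def version_satisfies_facts(pkg, constraint, known_versions, satisfies):
    # `satisfies` is a predicate comparing a version to a constraint.
    for v in known_versions:
        if satisfies(v, constraint):
            print('version_satisfies("{0}", "{1}", "{2}").'.format(
                pkg, constraint, v))
```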
* ParaView: add new ParaView-5.9.0-RC2 release
Signed-off-by: Vicente Adolfo Bolea Sanchez <vicente.bolea@kitware.com>
* Update var/spack/repos/builtin/packages/paraview/package.py
Indeed, I misunderstood the previous review. This looks good to me too.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.
So far, the command line is faster than running through Python, but I'm
working on fixing that. I found that if I do this:
```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp") # code from spack solve --show asp hdf5
control.load("display.lp")
control.ground([("base", [])])
control.solve(...)
```
It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.
So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.
This fixes a logging error observed on macOS 11.0.1 (Big Sur).
When performing a Spack install in debugging mode (e.g.
`spack -d install py-scipy`) Spack is supposed to write a log of
compiler wrapper command line invocations to the current working
directory.
Due to a regression introduced by #18205, these files were
no longer generated, and Spack was printing errors such as
"No such file or directory: None/." This is because the log file
directory gets set from `spack.main.spack_working_dir`, but that
variable is not set in the spawned process.
This PR ensures that the working directory (at the time of the
"spack install" invocation) is persisted to the subprocess.
Fixed a hard tab in the flux-sched edit and an unbound hwloc in flux-core
after testing, to better support modern MPIs in spack environments.
Verified that flux-core@0.17 is the version where hwloc@2: became viable.
Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.
This modification ensures that open-ended variants (variants accepting any string
or any integer) are projected to the finite set of values that are relevant for this
concretization.
2020.10.0 is the latest stable release, and the preferred version
for general use (when the user does not specify otherwise).
2020.11.0 is a prototype for the memory kinds feature that is also
available when requested.
Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
lists at the very end.
The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.
- [x] move ad-hoc value handling code into spec_clauses so we do it in
one place for CLI and packages
- [x] move handling of `variant_possible_value`, etc. into
`concretize.lp`, where we can automatically infer variant existence
more concisely.
- [x] simplify/clarify some of the code for variants in `spec_clauses()`
* [cmd versions] add spack versions --new flag to only fetch new versions
format
[cmd versions] rename --latest to --newest and add --remote-only
[cmd versions] add tests for --remote-only and --new
format
[cmd versions] update shell tab completion
[cmd versions] remove test for --remote-only --new which gives empty output
[cmd versions] final rename
format
* add brillig mock package
* add test for spack versions --new
* [brillig] format
* [versions] increase test coverage
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Update geant4-data and individual datasets for Geant4 versions 10.6.3
and 10.7.0.
Update geant4 package with new versions 10.6.3 and 10.7.0. Update
dependencies on CLHEP and VecGeom with versions required for Geant4
10.7.
Add GEANT4_INSTALL_PACKAGE_CACHE=OFF to CMake args for 10.6 onwards.
Prevents install of the "package cache" file that contains hard-coded
paths for dependencies, improving relocatability. It relies on Spack
setting CMAKE_PREFIX_PATH correctly in build/use environments that
consume the geant4 package.
`cmake @3.17:` is necessary to handle `cuda @11:` correctly. Earlier versions of `cmake` do not know that `cuda @11:` does not support `compute_30` any more, and list that compute capability as supported. This is handled in `cmake`'s file `Modules/FindCUDA/select_compute_arch.cmake`.
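A hedged sketch of how a package might encode this requirement with Spack's directive DSL; the class is a fragment for illustration only:
```python
from spack import *  # brings in CMakePackage and the depends_on directive


class MyCudaPackage(CMakePackage):
    """Hypothetical package illustrating the constraint described above."""

    # cmake@3.17: knows that cuda@11: dropped compute_30
    # (see Modules/FindCUDA/select_compute_arch.cmake).
    depends_on('cmake@3.17:', type='build', when='^cuda@11:')
```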
The bowtie2 Makefile uses `prefix`, not `PREFIX`, for versions before v2.4.
Credit to @tkameyama
Co-authored-by: george.hartzell <george.hartzell@sana.com>
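A hedged illustration of handling the variable rename; the `@:2.3` boundary is inferred from "before v2.4" above, and `make`/`prefix` are provided by Spack's build environment:
```python
def install(self, spec, prefix):
    # Versions before v2.4 read `prefix`; later versions read `PREFIX`.
    var = "prefix" if spec.satisfies("@:2.3") else "PREFIX"
    make("install", "{0}={1}".format(var, prefix))
```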
* allow install of build-deps from cache via --include-build-deps switch
* make clear that --include-build-deps is useful for CI pipeline troubleshooting
fixes #20055
Compiler with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of spec doesn't match the "real" version of the
compiler.
This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.
* bump up version for rocm-3.10.0 release
* bump up version for rocm-3.10.0
* remove duplicate version addition for 3.9.0
* bump up version for rocm-3.10.0 release
* bump up version for rocm-3.10.0 release
* bump up version for rocm-debug-agent and rocm-dbgapi
* bump up version for rocm-bandwidth-test,rocm-gdb,rocprofiler,roctracer for rocm-3.10.0
* add smoke test
* remove whitespaces
* fix minimum version issue
* reorder decorators & replace make with cmake build
* merge cmake build into one line
* reorganize smoke test function
Co-authored-by: Jieyang Chen <chenj3@ornl.gov>
* added dockerfile for opensuse leap 15
* updated maintainer info
* Update share/spack/docker/leap-15.dockerfile
* move copies and symlinks after package install
also use ${SPACK_ROOT} for spack calls as
this works with buildah
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* New package: py-qsymm
* py-qsymm: Convert to using tarballs from PyPi instead of git checkouts
* py-qsymm: add missing dependencies
* Update var/spack/repos/builtin/packages/py-qsymm/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-qsymm: Fix url to use pypi hidden download interface
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* AOCC-2.3.0 is now added to spack
Change-Id: I18fd9606e6fd9a288cc7dc6c6ead11ea17839a7c
* Added flag and version tests for AOCC-2.3.0
* Addressed review comments
Co-authored-by: vkallesh <Vijay-teekinavar.Kallesh@amd.com>
fixes #20040
Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.
refers #20040
Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they were with a default value.
This prevents the solver from avoiding expected configurations
just because they contain directives like:
depends_on('pkg+foo')
and `+foo` is not the default variant value for pkg.
* OpenBLAS: More Precise GCC Conflicts
Add more precise GCC conflicts so e.g. GCC 6 and GCC 7.5 don't fail.
* Compact syntax
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
As part of pull request #19452, a patch method was added to the mfem
package to delete byte order marks from 3 mfem source files. These
files first appeared in a stable release of mfem as of version
4.1. Consequently, attempts to install mfem 3.4 or mfem 4.0 fail
because no files exist at the path arguments of the filter_file
commands used to execute this operation. Decorating the patch method
so it runs only on mfem versions 4.1 and later resolves the errors
that were thrown due to files not found.
This commit adds that decorator.
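A hedged sketch of the shape of that decorator, using Spack's `@when` multimethod; the class is a fragment and the file names are hypothetical:
```python
from spack import *


class Mfem(Package):
    """Fragment for illustration only; the real package has more."""

    # Run the BOM-stripping patch only on versions that ship the
    # affected files, so filter_file never targets a missing path.
    @when('@4.1:')
    def patch(self):
        for f in ('file1.cpp', 'file2.hpp'):  # hypothetical names
            filter_file('\ufeff', '', f)
```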
* Qt: add options to disable docs and gui
- Add `~gui` option for minimal build
- Add `+doc` option to install docs, and attempt to disable the implicit
llvm dependency if not
- Removes the 'freetype' option which hasn't worked reliably in qt5, as
many of the gui components implicitly rely on freetype.
- Add and test version 5.15 (and skip qtlocation if disabling opengl)
- Refactor some of the dependency logic
I've tested this on linux with 5.15.2 and 4.8.7 in a couple of different
configurations.
* Address reviewer feedback and correctly disable llvm
* Fix qt doc generation
* py-rosdep: add new package
* setuptools needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* py-rospkg: add new package
* setuptools needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* py-catkin-pkg: add new package
* setuptools is needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
fixes #19981
This commit adds support for target ranges in directives,
for instance:
conflicts('+foo', when='target=x86_64:,aarch64:')
If any target in a spec body is not a known target, the
following clause will be emitted while traversing the spec:
node_target_satisfies(Package, TargetConstraint)
A definition of the clause will then be printed at the
end, similarly to what is done for package and compiler
versions.
* spack recipe for gromacs with aocc compiler support
Change-Id: I364aab4a0aa2dcd44bc47eb50c81b2d94c99cfbd
* Removed arch and other associated compiler flags
Added cycle_subcounters variant
Co-authored-by: vkallesh <Vijay-teekinavar.Kallesh@amd.com>
fixes #20019
Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:
conflicts('%gcc@X.Y:', when='@:A.B')
where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.
* llvm-amdgpu: fix the build for version 3.9.0
Adapt the fix-system-zlib-ncurses.patch for version 3.9.0. Without
the patch, llvm-amdgpu builds, but then rocm-device-libs fails with
"cannot find -ltinfo."
Tighten the version requirements for cmake according to the
llvm/CMakeLists.txt file.
* Add a conflict for cmake 3.19.0.
refers #20079
Added docstrings to 'concretize' and 'concretized' to
document the format for tests.
Added tests for the activation of test dependencies.
refers #20040
This modification emits rules like:
```
provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").
```
for packages that provide virtual dependencies conditionally, instead
of a fact that doesn't account for the condition.
* intel-tbb: patch for arm64 on macOS
as submitted upstream and used in homebrew
* intel-tbb: check patchable versions
* intel-tbb: avoid patch breakage when 2021.1 is released
2021.1-beta05 would be considered newer than 2021.1
* Add the 'exciting' package.
Version 14 (latest available) is defined.
An as-yet unpublished patch (dfgather.patch) from the developers is also
included.
* fixed flake8 errors (I *thought* I had already gotten them! OOPS!)
* Update var/spack/repos/builtin/packages/exciting/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixed install method to just do the install, and no build method is needed.
* *Actually* added the lapack dependency!
* removed variant from blas dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix: leading . is not needed in extension kwarg
* mfem: add support for NVIDIA AmgX
fix: proper spacing
* mfem: use conflict to indicate that AmgX is expected to depend on CUDA
fixes #19966
Global Arrays supports GCC 10 since version 5.7.1,
so a conflict has been added to prevent older
releases from failing at build time.
Removed the 'blas' and 'lapack' variants, since
BLAS and LAPACK are always dependencies; if they
are not specified during configure, a version of
these APIs vendored with Global Arrays is
built.
Fixed a few options in configuration.
The point of this variant is to give the end user the option to use
system-installed fabrics such as MOFED instead of upstream fabrics such as
rdma-core. This was found to avoid run-time errors on some systems.
Co-authored-by: nithintsk <nithintsk@github.com>
This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:
```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```
This caused clang's version to be misdetected as:
```
clang@11.0.0
Target:
```
This resulted in errors when trying to actually use it as a compiler.
When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:
```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```
Changing the regex to only match until the end of the line fixes these
problems.
Fixes: #19473
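A hedged sketch of the fix described above: anchor the version match at end-of-line so the following "Target:" line is not swallowed into the version.
```python
import re

version_regex = re.compile(r'clang version ([0-9.]+)$', re.M)

output = "clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu\n"
print(version_regex.search(output).group(1))  # -> 11.0.0
```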
* Updated the cuDNN recipe to generate the proper version names for only
the architecture that you are on. This prevents the concretizer from
selecting a source code version that is incompatible with your current
architecture. Additionally, add constraints to ensure that the
corresponding CUDA version is properly set as well.
* Added maintainer
* Fixed renaming for darwin systems
* Fixed flake8
* Fixed flake8
* Fixed range typo
* Update var/spack/repos/builtin/packages/cudnn/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixed style issues
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* seems to have been introduced erroneously by users using gitk-based
workflows. This should be handled by the git package
* fixes build problems on OSX bigsur
* charmpp: various fixes
- change URLs to https
- address deprecated/renamed versions
- make it build with the cmake build system
* flake8
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This adds a new `mark` command that can be used to mark packages as either
explicitly or implicitly installed. Apart from fixing the package
database after installing a dependency manually, it can be used to
implement upgrade workflows as outlined in #13385.
The following commands demonstrate how the `mark` and `gc` commands can be
used to only keep the current version of a package installed:
```console
$ spack install pkgA
$ spack install pkgB
$ git pull # Imagine new versions for pkgA and/or pkgB are introduced
$ spack mark -i -a
$ spack install pkgA
$ spack install pkgB
$ spack gc
```
If there is no new version for a package, `install` will simply mark it as
explicitly installed and `gc` will not remove it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests are included for MPI
implementations, C, C++, and Fortran compilers, as well as specific smoke
tests for 18 packages.
Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.
This handles the following trickier aspects of testing with direct
support in Spack's package API:
- [x] Caching source or intermediate build files at build time for
use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only
packages).
See the packaging guide for more details on using Spack testing support.
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
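A hypothetical usage of `run_test` inside a package's `test()` method; the keyword names are assumptions based on the description above, not a definitive API reference:
```python
def test(self):
    # run_test finds the executable, runs it with the given options,
    # checks the return code, and compares output to expected strings.
    self.run_test('mpirun', options=['--version'],
                  expected=['mpirun'], status=0,
                  purpose='check that mpirun runs and reports a version')
```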
* Added -level_zero -rocm -opencl flags and sha256 for TAU v2.30.
* Removed the depends_on clause for OpenCL and added a variant for OneAPI level_zero.
* remove depends_on rocm
* remove depends_on rocprofiler
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
The deprecatedProperties custom validator now can accept a function
to compute a better error message.
Improve error/warning message for deprecated properties
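A minimal sketch of the idea, with hypothetical names: the validator now accepts either a fixed string or a callable that builds the message.
```python
def deprecation_message(entry, offending_value):
    # If the configured message is callable, call it with the offending
    # value to compute a more specific error message.
    msg = entry['message']
    return msg(offending_value) if callable(msg) else msg
```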
As of #18260, `spack load` and `spack env activate` now use
`prefix_inspections` from the modules configuration to decide
how to modify environment variables.
This updates the modules configuration documentation to describe
how to update environment variables with the `prefix_inspections`
section. This also updates the `spack load` and environments
documentation to refer to the new `prefix_inspections` documentation.
`spack load` and `spack env activate` now use the prefix inspections
defined in `modules.yaml`. This allows users to customize/override
environment variable modifications if desired.
If no `prefix_inspections` configuration is present, Spack uses the
values in the default configuration.
This PR reworks a few attributes in the container subsection of
spack.yaml to permit the injection of custom base images when
generating containers with Spack. In more detail, users can still
specify the base operating system and Spack version they want to use:
spack:
container:
images:
os: ubuntu:18.04
spack: develop
in which case the generated recipe will use one of the Spack images
built on Docker Hub for the build stage and the base OS image in the
final stage. Alternatively, they can specify explicitly the two
base images:
spack:
container:
images:
build: spack/ubuntu-bionic:latest
final: ubuntu:18.04
and it will be up to them to ensure their consistency.
Additional changes:
* This commit adds documentation on the two approaches.
* Users can now specify OS packages to install (e.g. with apt or yum)
prior to the build (previously this was only available for the
finalized image).
* Handles to avoid an update of the available system packages have been
added to the configuration to facilitate the generation of recipes
permitting deterministic builds.
This commit address the case of concretizing a root spec with a
transitive conditional dependency on a virtual package, provided
by an external. Before these modifications default variant values
for the dependency bringing in the virtual package were not
respected, and the external package providing the virtual was added
to the DAG.
The issue stems from two facts:
- Selecting a provider has higher precedence than selecting default variants
- To ensure that an external is preferred, we used a negative weight
To solve it we shift all the providers weight so that:
- External providers have a weight of 0
- Non-external providers have a weight of 10 or more
Using a weight of zero for external providers means that having
an external provider, if one is present, or not having a provider at all
has the same effect on the higher-priority minimization.
Also fixed a few minor bugs in concretize.lp, that were causing
spurious entries in the final answer set.
Cleaned concretize.lp from leftover rules.
If the default of a multi-valued variant is set to
multiple values either in package.py or in packages.yaml
we need to ensure that all the values are present in the
concretized spec.
Since each default value has a weight of 0 and the
variant value is set implicitly by the concretizer
we need to add a rule to maximize on the number of
default values that are used.
This commit introduces a new rule:
real_node(Package) :- not external(Package), node(Package).
that makes it possible to distinguish between an external
node and a real node whose dependencies shouldn't be
trimmed. It solves the case of concretizing ninja with an
external Python.
`node_compiler_hard()` means that something explicitly asked for a node's
compiler to be set -- i.e., it's not inherited, it's required. We're
generating this in spec_clauses even for specs in rule bodies, which
results in conditions like this for optional dependencies:
In py-torch/package.py:
```
depends_on('llvm-openmp', when='%apple-clang +openmp')
```
In the generated ASP:
```
declared_dependency("py-torch","llvm-openmp","build")
    :- node("py-torch"),
       variant_value("py-torch","openmp","True"),
       node_compiler("py-torch","apple-clang"),
       node_compiler_hard("py-torch","apple-clang"),
       node_compiler_version_satisfies("py-torch","apple-clang",":").
```
The `node_compiler_hard` there means we would have to *explicitly* set
py-torch's compiler to trigger the llvm-openmp dependency, rather than
just letting it be set by preferences. This is wrong; the dependency
should be there regardless of how the compiler was set.
- [x] remove fn.node_compiler_hard() call from spec_clauses when
generating rule body clauses.
If the version list passed to one_of_iff is empty, it still generates a
rule like this:
```
node_compiler_version_satisfies("fujitsu-mpi", "arm", ":") :- 1 { } 1.
1 { } 1 :- node_compiler_version_satisfies("fujitsu-mpi", "arm", ":").
```
The cardinality constraints on the right and left above can never be
satisfied, so these rules do nothing.
- [x] Skip generating any rules at all for empty version lists.
As reported, conflicts with compiler ranges were not treated
correctly. This commit adds tests to verify the expected behavior
for the new concretizer.
The new rules to enforce a correct behavior involve:
- Adding a rule to prefer the compiler selected for
the root package, if no other preference is set
- Give a strong negative weight to compiler preferences
expressed in packages.yaml
- Maximize on compiler AND compiler version match
Variants of this kind don't have a list of possible
values encoded in the ASP facts. Since all we have
is a validator, the list of possible values includes
just the default value and possibly the value passed
from packages.yaml or the CLI.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
Later we could move this to the solver as
a multivalued variant.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
A better approach is to substitute the spec
directly in concretization.
The "none" variant value cannot be combined with
other values.
The '*' wildcard matches anything, including "none".
It's thus relevant in queries, but disregarded in
concretization.
- The test on concretization of anonymous dependencies
has been fixed by raising the expected exception.
- The test on compiler bootstrap has been fixed by
updating the version of GCC used in the test.
Since gcc@2.0 does not support targets later than
x86_64, the new concretizer was looking for a
non-existing spec, i.e. it was correctly trying
to retrieve 'gcc target=x86_64' instead of
'gcc target=core2'.
- The test on gitlab CI needed an update of the target
This commit adds support for specifying rules in
packages.yaml that refer to virtual packages.
The approach is to normalize each configuration
in memory and turn it into an equivalent
configuration without rules on virtuals. This
is possible if the set of packages to be handled
is considered fixed.
The weight of the target used in concretization is, in order:
1. A specific per package weight, if set in packages.yaml
2. Inherited from the parent, if possible
3. The default target weight (always set)
Generate facts on externals by inspecting
packages.yaml. Added rules in concretize.lp
Added extra logic so that external specs
disregard any conflict encoded in the
package.
In ASP this would be a simple addition to
an integrity constraint:
:- c1, c2, c3, not external(pkg)
Using the Backend API from Python requires
some scaffolding to obtain a default negated
statement.
Conflict rules from packages are added as integrity
constraints in the ASP formulation. Most of the code
to generate them has been reused from PyclingoDriver.rules
The new concretizer and the old concretizer solve constraints
in a different way. Here we ensure that a SpackError is raised,
instead of a specific error that made sense in the old concretizer
but probably not in the new.
Instead of python callbacks, use cardinality constraints for package
versions. This is slightly faster and has the advantage that it can be
written to an ASP program to be executed *outside* of Spack. We can use
this in the future to unify the pyclingo driver and the clingo text
driver.
This makes use of add_weight_rule() to implement cardinality constraints.
add_weight_rule() only has a lower bound parameter, but you can implement
a strict "exactly one of" constraint using it. In particular, we want to
define:
```
1 {v1; v2; v3; ...} 1 :- version_satisfies(pkg, constraint).
version_satisfies(pkg, constraint) :- 1 {v1; v2; v3; ...} 1.
```
And we do that like this, for every version constraint:
```
atleast1(pkg, constr) :- 1 {version(pkg, v1); version(pkg, v2); ...}.
morethan1(pkg, constr) :- 2 {version(pkg, v1); version(pkg, v2); ...}.
version_satisfies(pkg, constr) :- atleast1(pkg, constr), not morethan1(pkg, constr).
:- version_satisfies(pkg, constr), morethan1(pkg, constr).
:- version_satisfies(pkg, constr), not atleast1(pkg, constr).
```
v1, v2, v3, etc. are computed on the Python side by comparing every
possible package version with the constraint.
Computing things like this has the added advantage that if v1, v2, v3,
etc. comprise *all* possible versions of a package, we can just omit the
rules for the constraint under consideration. This happens pretty
frequently in the Spack mainline.
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
ultimately hurt performance)
There are now three parts:
- `SpackSolverSetup`
- Spack-specific logic for generating constraints. Calls methods on
`AspTextGenerator` to set up the solver with a Spack problem. This
shouldn't change much from solver backend to solver backend.
- ClingoDriver
- The solver driver provides methods for SolverSetup to generate an ASP
program, send it to `clingo` (run as an external tool), and parse the
output into function tuples suitable for `SpecBuilder`.
- The interface is generic and should not have to change much for a
driver for, say, the Clingo Python interface.
- SpecBuilder
- Builds Spack specs from function tuples parsed by the solver driver.
The original implementation was difficult to read, as it only had
single-letter variable names. This converts all of them to descriptive
names, e.g., P -> Package, V -> Virtual/Version/Variant, etc.
To handle unknown compilers properly in tests (and elsewhere), we need to
add unknown compilers from the spec to the list of possible compilers.
Rework how the compiler list is generated and includes compilers from
specs if the existence check is disabled.
Specs like hdf5 ^mpi were unsatisfiable because we added a requirement
for `node("mpi").`. This can't be resolved because "mpi" is not a
package.
- [x] Introduce `virtual_node()`, which says *some* provider must be in
the DAG.
This adds compiler flags to the ASP solve so that we can have conditions
based on them in the solve. But, it keeps order out of the solve to
avoid unneeded complexity and combinatorial explosions.
The solver determines which flags are on a spec, but the order is
determined by DAG precedence (children's flags take precedence over
parents' and are added on the right) and order (the order in which
flags were specified on the command line is respected).
The solver is responsible for determining when to propagate flags, when
to inherit them from other nodes, when to take them from compiler
preferences, etc.
Weight microarchitectures and prefer more recent ones. Also disallow
nodes where the compiler does not support the selected target.
We should revisit this at some point as it seems like if I play around
with the compiler support for different architectures, the solver runs
very slowly. See notes in comments -- the bad case was gcc supporting
broadwell and skylake with clang maxing out at haswell.
We didn't have a cardinality constraint for multi-valued variants, so the
solver wasn't filling them in.
- [x] add a requirement for at least one value for multi-valued variants
Variants like `cpu_target` on `openblas` don't have defined values, but
they have a default. Ensure that the default is always a possible value
for the solver.
Spack was generating the same dependency constraints twice in the output ASP:
```
declared_dependency("abinit", "hdf5", "link")
:- node("abinit"),
variant_value("abinit", "mpi", "True"),
variant_value("abinit", "mpi", "True").
```
This was because `AspFunction` was modifying itself when called.
- [x] fix `AspFunction` so that every call returns a new object
- [x] Add support for packages.yaml and command-line compiler preferences.
- [x] Rework compiler version propagation to use optimization rather than
hard logic constraints
Technically the ASP output order does not matter, but it's hard to diff
two different solve formulations unless we order it.
- [x] make sure ASP output is emitted in a deterministic order (by
sorting all hash keys)
This needs more thought, as I am pretty sure the weights are not correct.
Or, at least, I'm not convinced that they do what we want in all cases.
See note in concretize.lp.
Solver now prefers newer versions like the old concretizer. Prefer
package preferences from packages.yaml, preferred=True, package
definition, and finally each version itself.
Competition output only prints out one model, so we do not have to
unnecessarily parse all the non-optimal models. We'll just look at the
best model and bring that in.
In practice, this saves a lot of JSON parsing and spec construction time.
Clingo actually has an option to output JSON -- use that instead of
parsing the raw output ourselves.
This also allows us to pick the best answer -- modify the parser to
*only* construct a spec for that one rather than building all of them
like we did before.
- Instead of using default logic, handle variant defaults by minimizing
the number of non-default variants in the solution.
- This actually seems to be pretty fast, and it fixes the long-standing
issue that writing this:
spack install hdf5 ^mpich
will fail if you don't specify hdf5+mpi. With optimization and
allowing enums to be enumerated, the solver seems to be able to quickly
discover that +mpi is the only way hdf5 can depend on mpich, and it
forces the switch to be thrown.
Use '1 { version(x); version(y); version(z) } 1.' instead of declaring
conflicts for non-matching versions. This keeps the sense of version
clauses positive, which will allow them to be used more easily in
conditionals later.
Also refactor `spec_clauses()` method to return clauses that can be used
in conditions, etc. instead of just printing out facts.
- This handles setting the compiler and falling back to a default
compiler, as well as providing default values for compilers/compiler
versions.
- Versions still aren't quite right -- you can't properly override
versions on compiler specs.
- Model architecture default settings and propagation off of variants
- Leverage ASP default logic to set architecture to default if it's not
set otherwise.
- Move logic out of Python and into concretize.lp as first-order rules.
We are relying on default logic in the variant handling in that we set a
default value if we never see `variant_set(P, V, X)`.
- Move the logic for this into `concretize.lp` instead of generating it
for every package.
- For programs that don't have explicit variant settings, clingo warns
that variant_set(P, V, X) doesn't appear in any rule head, because a
setting is never generated.
- Specifically suppress this warning.
- moving the dump logic into spack.solver.asp.solve() allows us to print
out useful debug info sooner
- prior approach required a successful solve to print out anything.
According to the documentation for spack and pkg-config,
$view/share/pkgconfig should also be a valid place to look
for package config files. This commit ensures that when
`spack env activate $dir` is called, the environment has this
directory in PKG_CONFIG_PATH.
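A minimal sketch, assuming `view_root` is the environment view's root: per the pkg-config search conventions cited above, both locations are valid.
```python
import os

def pkg_config_dirs(view_root):
    # Both lib/pkgconfig and share/pkgconfig can hold .pc files.
    return [os.path.join(view_root, 'lib', 'pkgconfig'),
            os.path.join(view_root, 'share', 'pkgconfig')]
```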
As of #13100, Spack installs the dependencies of a _single_ spec in parallel.
Environments, when installed, can only get parallelism from each individual
spec, as they're installed in order. This PR makes entire environments build
in parallel by extending Spack's package installer to accept multiple root
specs. The install command and Environment class have been updated to use
the new parallel install method.
The specs and kwargs for each *uninstalled* package (when not force-replacing
installations) of an environment are collected, passed to the `PackageInstaller`,
and processed using a single build queue.
This introduces a `BuildRequest` class to track install arguments, and it
significantly cleans up the code used to track package ids during installation.
Package ids in the build queue are now just DAG hashes, as you would expect.
Other tasks:
- [x] Finish updating the unit tests based on `PackageInstaller`'s use of
`BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.:
`[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment install's view setup/update and not confirming all
packages are installed (?)
- [x] Ensure externally installed package dependencies are properly accounted for in
remaining build tasks
- [x] Add tests for coverage (if insufficient and can identity the appropriate, uncovered non-comment lines)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
This change makes improvements to the `spack ci rebuild` command
which supports running gitlab pipelines on PRs from forks. Much
of this has to do with making sure we can run without the secrets
previously required for running gitlab pipelines (e.g signing key,
aws credentials, etc). Specific improvements in this PR:
Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.
Also, if spack does not have at least one public key, add the install
option "--no-check-signature"
If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on. So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.
When we attempt to generate a package or gpg key index on an S3
mirror, and there is nothing to index, just print a warning and
exit gracefully rather than throw an exception.
Support the use of PR-specific mirrors for temporary binary pkg
storage. This will allow quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they must only wait for packages to rebuild from source when
they push a new commit that causes it to be necessary.
Replace two-pass install with a single pass and the new option:
--require-full-hash-match. Doing this also removes the need to
save a copy of the spack.yaml to be copied over the one spack
rewrites in between the two spack install passes.
Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.
* Update pipeline trigger jobs for PRs from forks
Moving to PRs from forks relies on an external synchronization script
pushing special branch names. Also secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.
When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.
* Arg to MirrorCollection is used exclusively, so add main remote mirror to it
* Compute full hash less frequently
* Add tests covering index generation error handling code
* Add WRF 3.9.1.1 and improve recipe robustness
* Include version 3.9.1.1 as common benchmarking workload
* Fix compilation against recent glibc (detect spack installed libtirpc)
* Detect and handle failed compilation (upstream use make -i)
* WRF: PR changes round 1
fix build jobs
fix maintainers
fix pkgconfig dependency
use Executable to run compile stage
repair some overzealous autoformatting by black
* WRF: make recipe py26 compatible
* wrf: recipe review changes round 2
* more python 26 fixes
The unattended install using the pre-compiled binaries (tl-install)
needs a .profile file, or it goes into interactive mode and blocks the
install process forever.
* Added guard for setting CUB_DIR to only when cuda variant is true
* Added support for OpenMP on OSX platforms
* Updated the way that LBANN, Hydrogen, and DiHydrogen handle
apple-clang with OpenMP and Clang installed on OS X via brew.
* Fixed bug in spec resolution
* Fixed merge conflict
* Fixed typo
* Fixed flake8
* AMD - Bumped up version for hip-rocclr, rocm-opencl, rocm-smi-lib
* AMD ROCm - HIP update and bump up version to 3.9.0 for rccl,debug agent, hip-rocclr and atmi
* Update package.py
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/hip/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since #11598 sbang has been installed within the install_tree. This doesn’t play
nicely with install_tree padding, since sbang can’t do its job if it is installed in a
long path (this is the whole point of sbang).
This PR changes the padding specification. Instead of $padding inside paths,
we now have a separate `padding:` field in the `install_tree` configuration.
Previously, the `install_tree` looked like this:
```
/path/to/opt/spack_padding_padding_padding_padding_padding/
    bin/
        sbang
    .spack-db/
        ...
    linux-rhel7-x86_64/
        ...
```
This PR updates things to look like this:
```
/path/to/opt/
    bin/
        sbang
    spack_padding_padding_padding_padding_padding/
        .spack-db/
            ...
        linux-rhel7-x86_64/
            ...
```
So padding is added at the start of all install prefixes *within* the unpadded
root. The database and all installations still go under the padded root.
This ensures that `sbang` is in the shortest possible path while also allowing
us to make long paths for relocatable binaries.
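A hedged sketch of the shape this configuration might take; the `padding:` field name is taken from the description above, and the integer pad-length value is an assumption:
```yaml
config:
  install_tree:
    root: /path/to/opt
    padding: 128
```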
As of #18205, all packages must be pickle-able to be installed by
Spack.
This adds a test to check that each package can be pickled. If any
package fails to pickle, the test keeps going and collects the names
of all failed packages; it then takes the first one that failed and
attempts to re-pickle it, generating the full stack trace for the
failed pickle attempt.
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).
Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.
This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:
- ensure that the package repository and other global state are
transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
(package, stage, etc.)
- move a number of locally-defined functions into global scope so that
they can be pickled
- rework tests where needed to avoid using local functions
This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs)
See: #14102
In compiler bootstrapping pipelines, we add an artificial dependency
between jobs for packages to be built with a bootstrapped compiler
and the job building the compiler. To find the right bootstrapped
compiler for each spec, we compared not only the compiler spec to
that required by the package spec, but also the architectures of
the compiler and package spec.
But this prevented us from finding the bootstrapped compiler for a
spec in cases where the architecture of the compiler wasn't exactly
the same as the spec. For example, a gcc@4.8.5 might have
bootstrapped a compiler with haswell as the architecture, while the
spec had broadwell. By comparing the families instead of the architecture
itself, we know that we can build the zlib for broadwell with the gcc for
haswell.
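A hedged sketch of the relaxed match; the `family` attribute follows archspec-style target objects, and the function name is illustrative:
```python
def compiler_matches_spec(compiler_target, spec_target):
    # gcc bootstrapped for haswell can build a broadwell spec:
    # both targets belong to the x86_64 family.
    return compiler_target.family == spec_target.family
```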
* py-json-get: new package at 1.1.1
* py-json-get: new package at 1.1.1
* r-bigalgebra: new package at 0.8.4
* r-bigalgebra: new package at 0.8.4 with corrections
* Added an additional change to tarball and dependencies
* removing accidentally added file
* Added tarball that uses mirror and removed redundant dependencies
* Fixed version and added dep.
* Updated checksum
* Fixed urls
* Added list_url
Co-authored-by: las_djorton <las_djorton@build.las.iastate.edu>
* Add CUDA support to superlu-dist
* Use spec['cuda'].libs.directories[0] instead of spec['cuda'].prefix.lib
  so it works for both lib and lib64.
  The suggested
      args.append('-DTPL_CUDA_LIBRARIES=' + spec['cuda'].libs.ld_flags)
  did not work because it does not link with cuBLAS.
Currently, full JSON output is the only machine readable option for `spack find`
in an environment.
`spack find --format` is also designed to be machine readable, but we print extra
headers in environments.
-[x] don't print headers in `spack find` output when in an environment
* No version of yaml-cpp in spack can build shared AND
static libraries at the same time. So drop the "static"
variant and let "shared" handle that alone.
Or in other words: No version handles the
BUILD_STATIC_LIBS flag.
* The flag for building shared libraries changed from
BUILD_SHARED_LIBS to YAML_BUILD_SHARED_LIBS at some
point. So just pass both flags.
* Use the newer define_from_variant.
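A hedged sketch of what those changes might look like together, using `define_from_variant` from Spack's CMakePackage; the method body is illustrative, not the actual recipe:
```python
def cmake_args(self):
    # Pass both spellings of the shared-libs flag so old and new
    # yaml-cpp CMake logic both see it.
    return [
        self.define_from_variant('BUILD_SHARED_LIBS', 'shared'),
        self.define_from_variant('YAML_BUILD_SHARED_LIBS', 'shared'),
    ]
```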
* [py-cuml] created template
* [py-cuml] setup phases and added build_directory
* [py-cuml] added dependencies
* [py-cuml] depends on libcumlprims
* [py-cuml] requiring multigpu version
* [py-cuml] figuring out the best way to get concretization to happen cleanly
* [py-cuml] removed singlegpu variant from libcuml
* [py-cuml] depends on py-cudf
* [py-cuml] depends on cupy
* [py-cuml] fixed typo
* [py-cuml] depends on py-scipy
* [py-cuml] depends on py-treelite
* [py-cuml] py-treelite is now a variant of treelite
* [py-cuml] depends on joblib
* [py-cuml] depends on py-scikit-learn
* [py-cuml] flake8
* [py-cuml] added homepage and description. removed fixmes
* [py-cuml] updated checksum
* Enabling build of v1.9.x development branch.
* v1.8.1 is the preferred (stable) version.
* Fixing code style
Co-authored-by: Filippo Spiga <fspiga@nvidia.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [podio] put python dir in python path
* Update var/spack/repos/builtin/packages/podio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When invoking "buildcache list" multiple times, the command was
reporting no specs in the cache the second time around. The
presence of an up-to-date index was causing the internal
representation to be left un-initialized.
* tskit package
* Update var/spack/repos/builtin/packages/tskit/package.py
I can't see any hard requirement for 3.6:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixes following PR review
* Update var/spack/repos/builtin/packages/tskit/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Added a command to set up Spack for our tutorial at
https://spack-tutorial.readthedocs.io.
The command does some common operations we need first-time users to do.
Specifically:
- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be
left over from prior parts of the tutorial
- adds a mirror and trusts its public key
Version 5.32.0 has been out for quite a while and Linux distributions
are shipping it. I have also done a rebuild of some common packages with
the new version. Let's make it the preferred version.
* amrex: new option names for version > 20.11
* amrex: change option name DIM -> AMReX_SPACEDIM
* Update var/spack/repos/builtin/packages/amrex/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added code to help DiHydrogen find cuDNN and CUB
* Cleaning up dependencies on CUB and adding guards for when newer
versions of CUDA include CUB and it should be excluded.
* Changed Hydrogen to disable half support by default.
* Have LBANN force Hydrogen and DiHydrogen to build without half when the variant is disabled.
* Added explicit variants to ensure that if LBANN is built without Cuda,
Aluminum, or Half support, it enforces those constraints for Hydrogen
and DiHydrogen. Cleaned up the use of Python extend versus append in
LBANN and DiHydrogen recipes.
* Fixed Flake8
* [evtgen] add env var
* Update var/spack/repos/builtin/packages/evtgen/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
See #19784
virtualgl's CMake system looks for a specific libjpeg-turbo include
file that is not present in libjpeg (currently the only other jpeg provider)
* cget package
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Updates in LBANN and Aluminum code now allow working with HWLOC
versions 1.11.x and 2.x and up.
* Updating the minimum CMake version to address a pending PR in LBANN
that will require C++17 support and needs CMake to properly separate
the compiler flags from nvcc.
* Clarified the support for different versions of HWLOC in LBANN
Previously, we hardcoded a list of Spack versions which could be used by the containerize command.
This PR removes that list. It's a maintenance burden when cutting a release, and prevents older versions of Spack from creating containers to be used by newer versions.
* filtlong package
* Update var/spack/repos/builtin/packages/filtlong/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* bump up version for 3.9.0 release
* update version of rocminfo for rocm-3.9.0
* bump up rocm-cmake version for rocm-3.9.0
* bump up rocm-smi and rocmdevice-libs for 3.9.0
* bump up comgr version for rocm-3.9.0
* bump rocm-clang-ocl for rocm-3.9.0
* bump hipify-clang for rocm-3.9.0
* Trilinos: Add STRUMPACK dependency
* break long lines, flake8 cleanup
* Use spec['strumpack'].libs.directories[0]
instead of spec['strumpack'].prefix.lib
because libraries may be in lib or lib64.
Likewise use headers.directories[0] instead of prefix.include.
Suggested by adamjstewart
* allows UCX since v1.7 to build with more recent versions of gdrcopy (v2.X)
* Update var/spack/repos/builtin/packages/ucx/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
There was an error introduced in #19209 where `full_hash()` and
`build_hash()` are called on older specs that we've read in from the DB;
older specs may not be able to compute these hashes (e.g. if they have
removed patches used in computing the full_hash).
When serializing a Spec, we want to generate the full/build hash when
possible, but we need a mechanism to skip it for Specs that have
themselves been read from YAML (and may not support this).
To get around this ambiguity and to fix the issue, we:
- Add an attribute to the spec called `_hashes_final`, that is `True`
if we can't lazily compute `build_hash` and `full_hash`.
- Set `_hashes_final` to `False` for new specs (i.e., lazily
computing hashes is ok)
- Set `_hashes_final` to `True` for concrete specs read in via
`from_node_dict`, as it may be too late to recompute hashes.
- Compute and write out all hashes in `node_dict_with_hashes` *if
possible*.
Effectively what this means is that we can round-trip specs that are
missing `_build_hash` and `_full_hash` without recomputing them, but for
all new specs, we'll compute them and store them. So Spack should work
fine with old DBs now.
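A hedged sketch of the serialization logic described above; the attribute
names follow the PR description, but the body is illustrative rather than
the exact implementation.
```python
def node_dict_with_hashes(spec):
    node = spec.to_node_dict()
    if not spec._hashes_final:
        # New spec: lazily computing hashes is safe.
        node['full_hash'] = spec.full_hash()
        node['build_hash'] = spec.build_hash()
    else:
        # Spec read back from YAML/DB: reuse stored hashes if present
        # and never recompute them (the patches may be gone).
        if spec._full_hash:
            node['full_hash'] = spec._full_hash
        if spec._build_hash:
            node['build_hash'] = spec._build_hash
    return node
```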
* hip: rocminfo is a runtime requirement
* hip: +setup_run_environment, +setup_dependent_run_environment
* hip: run environment: get lib dir using libs.directories[0], not prefix.lib
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This fixes sbang relocation when using old binary packages, and updates
code in `relocate.py`.
There are really two places where we would want to handle an `sbang`
relocation:
1. Installing an old package that uses `sbang` with shebang lines like
`#!/bin/bash $spack_prefix/sbang`
2. Installing a *new* package that uses `sbang` with shebang lines like
`#!/bin/sh $install_tree/sbang`
The second case is actually handled automatically by our text relocation;
we don't need any special relocation logic for new shebangs, as our
relocation logic already changes references to the build-time
`install_tree` to point to the `install_tree` at install time.
Case 1 was not properly handled -- we would not take an old binary
package and point its shebangs at the new `sbang` location. This PR fixes
that and updates the code in `relocation.py` with some notes.
There is one more case we don't currently handle: if a binary package is
created from an installation in a short prefix that does *not* need
`sbang` and is installed to a long prefix that *does* need `sbang`, we
won't do anything. We should just patch the file as we would for a normal
install. In some upcoming PR we should probably change *all* `sbang`
relocation logic to be idempotent and to apply to any sort of shebang'd
file. Then we'd only have to worry about which files to `sbang`-ify at
install time and wouldn't need to care about these special cases.
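A hedged sketch of the case-1 fix: rewriting an old binary package's
shebang to point at the new `sbang`. The helper name and default path are
illustrative, not the code in `relocate.py`.
```python
import re

def retarget_sbang(text, new_sbang='/opt/spack/bin/sbang'):
    # Replace the old two-word shebang
    # ("#!/bin/bash /old/prefix/bin/sbang") with one pointing at the
    # sbang in the new install tree.
    return re.sub(r'\A#!(?:/bin/bash|/bin/sh) \S*/sbang',
                  '#!/bin/sh {0}'.format(new_sbang), text)
```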
* add python-docutils dependency
* add a symlink to the script for better compatibility of the py-docutils installation
* Improve post_install phase of py-docutils
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix review of rdma-core package
* improve formatting of py-docutils package
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [NEW] Added amdfftw, amdlibflame and amdscalapack recipes
Updated base fftw, libflame and netlib-scalapack recipes
to accommodate the above listed AMD Optimizing CPU Libraries
which are a set of numerical routines optimized for AMD platforms.
Updated amdblis spack recipe
amdblis:
1. updated with amdblis 2.2 release
amdfftw:
1. "--enable-single" now work as synonym for "--enable-float"
amdlibflame:
1. Added enable_or_disable_threads() to set value for "--enable-multithreading" flag
Libflame:
1. Added enable_or_disable_threads() to set value for "--enable-multithreading" flag
2. Corrected invocation of "enable_or_disable('threads')"
Change-Id: I9da0a2c2c4e2075b7fa2776e7cfe6548a2e0b32f
* Added amd-toolchain-support as maintainers
Added team github account amd-toolchain-support
as maintainers for all the recipes owned by
AMD Optimizing CPU Libraries (AOCL) team
Change-Id: I9a7969bd48fc42cfbb88dd7bd93e0802c6138582
* Incorporated review comments
Updated packages.yaml with aocl components
Handled Flake8 test failures
Change-Id: I0a03f02d8c9f326b2434ec907958c3de3a8e18eb
* Re-added the accidentally removed stream recipe
amdfftw:
1. Updated the aocc clang selection as per spack standards
fftw:
1. The apple-clang section is currently redundant;
it is already handled in the conflict checks.
Change-Id: Idef4a3f61717eb81f321e0cd16e7ba9619eac846
* Fix for style and docs/validate (pull_request) test
unnumbered format placeholders from {} to {0}
Change-Id: If67a3374177ec067573e5504462d257712fafc05
* changed compiler references to Spack's compiler wrappers: spack_cc, spack_cxx, spack_fc
Change-Id: I7ae29c978fff16e37773913f14c84df232499763
* Removed 'single' variant from amdfftw recipe
Instead of a conflict for apple-clang + openmp, handled this scenario
via the available feature below:
depends_on('llvm-openmp', when='%apple-clang +openmp')
Change-Id: I701b23d83e822a500ca3aaf2b60cc9ace09e13dc
* Added relevant info for users who prefer to use single precision
Change-Id: I3506e21da428ddef5fb7895b5aaed32c2a061ef6
* Minor changes on fftw, amdfftw and libflame
amdfftw:
1. Removed escape symbol to the single quotes
2. Rewording the conflict line from Recommended
to Required
fftw:
1. Reordered to follow the recommended sections:
versions, variants, dependencies, providers,
patches
libflame:
1. Added provides entry for 5.1.0 version
Change-Id: I21ebff99b6dfde031763154693ecb3f1fa47b476
* Removed single quote from amdfftw docstring to fix style failures
Change-Id: Ife939a5a2f5ccbc8879b730c7bebfe2fcfef9332
* camp: changes to support hip build
* hip: add fallback path for external hip to detect other rocm components
Co-authored-by: Greg Becker <becker33@llnl.gov>
fixes #15183
- Moved the container related content from
workflows.rst into containers.rst
- Deleted the docker_for_developers.rst file,
since it describes an outdated procedure
Co-authored-by: Axel Huebl <a.huebl@hzdr.de>
Co-authored-by: Omar Padron <omar.padron@kitware.com>
`config.get_config` now caches the results and returns the same
configuration if called multiple times with the same arguments
(i.e. the same section and scope).
As a consequence, it is expected that users will always call
update methods provided in the `config` module after changing
the configuration (even if manipulating it as a Python nested
dictionary). The following two examples should cover most
scenarios:
* Most configuration update logic in the core (e.g. relating to
adding a new compiler) should call `Configuration.update_config`
* Tests that need to change the global configuration should use the
newly-provided `config.replace_config` function.
(if neither of these methods apply, then the essential requirement
is to use a method marked as `_config_mutator`)
Failure to call such a function after modifying the configuration
will lead to unexpected results (e.g. calling `get_config` after
changing the configuration will not reflect the changes since the
first call to get_config).
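A hedged usage sketch of the safe pattern under the new caching behavior,
using the public `spack.config.get`/`set` entry points (which should
funnel through the update machinery); the mirror values are made up.
```python
import spack.config

# Read, mutate, then write back through the config API so the cached
# result of get() is refreshed for subsequent callers.
mirrors = spack.config.get('mirrors', scope='user') or {}
mirrors['foo'] = 'https://bar.com'   # plain-dict manipulation
spack.config.set('mirrors', mirrors, scope='user')
```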
* Patched hypre to better add flags based on compiler.
* Update package.py
This file seems to have lots of edits, so the patch may succeed with offsets. Has anyone checked with spack patch to be sure it'll work with versions 2.15 - 2.20?
* "spack install" now has a "--require-full-hash-match" option, which
forces Spack to skip an available binary package when the full hash
doesn't match. Normally only a DAG-hash match is required, which
ensures equivalent Specs, but does not account for changing logic
inside the associated package.
* Add a local binary cache index which tracks specs that have a binary
install available in a remote binary cache. It is updated with
"spack buildcache list" or for a given spec when a binary package
is retrieved for that Spec.
In #18394 it was noted that this package should be changed
from a generic "Package" to a "CMakePackage".
It makes a bunch of things easier
and uses all the common cmake code.
* Added hash values for LBANN v0.101 and Hydrogen v1.5.0. Updated the
LBANN package to be more successful in resolving a legal configuration
of MPI and HWLOC packages. This required the removal of the MPI
virtual package since it is unable to resolve dependencies with
minimum version requirements. As a result, enabling a reasonable
install line for LBANN requires explicit forwarding of MPI
variants to Hydrogen and Aluminum. Due to the lack of variant
forwarding, there are many explicitly replicated dependencies for both
LBANN and Hydrogen. Fixed the error in LBANN where gpu variant was
replaced by the cuda variant, but not all dependencies were fixed.
* Fixed the minimum cuDNN version for newer versions of LBANN.
* Added explicit versioning of the MPI libraries for DiHydrogen to avoid
all of the conflicts with minimum required versions of the OpenMPI library.
* Removed explicit MPI versions and went back to using the MPI virtual
dependency. Updated construction of variant forwarding to use
iterative construction of constraints and variants. This exacerbates
the challenges with backtracking in the current concretizer, but
should be fixed in the new concretizer.
* Added support for including the DiHydrogen library in LBANN as well as
support for the distributed convolution (DistConv) parallel
algorithms. Also include support for building with half precision.
* Moving dependencies around
* Added conflict statement to ensure that the variant dihydrogen is
required for distconv.
* Removed the preferred field
* Fixed Flake8 and cuDNN version bounds
* gemini dep py-cyordereddict +
* dep ipyparallel +
* py-ipython-cluster +
* py-cyordereddict URL+dep fix
* Update var/spack/repos/builtin/packages/py-cyordereddict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-ipython-cluster-helper dep fix
* py-ipyparallel dep fix
* ipython-cluster-helper debug
* ipython-cluster-helper debug
* ipyparallel dep fix
* ipython-cluster-helper dep fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added initial version of package and patch for precice-bindings
* updated package name
* cleanup in script; added version requirement to cython
* Remove unnecessary part of patch
* cleanup package
* update style of package
* reformatting to fulfill style requirements
* reformatting again
* fixing some of the issues mentioned in PR; working on fixing install stage
* readded py-wheel as dependency
Co-authored-by: Benjamin Rüth <benjamin.rueth@tum.de>
Spack has a fallback for hash checking with md5sums that may not be
supported in earlier versions of Python 3.x. The comments in the
Spack code acknowledge that this is best effort and may fail, but
recent vermin checks (running as part of our CI) reject this. This
disables vermin checks for that fallback.
* enable flatcc to be built with gcc/9.X.X
* add static option for building libyogrt
* cleanup
* Initial working version
* rework new oneapi wrappers
* tested and removed my initials from source
* cleanup
* Update __init__.py
* remove whitespace
* working now with mods for testing and detection. Detection for oneapi is working, but the entry needs to be modified to add the link path for libimf.so. Cleared cruft for old Intel versions
* fixed some formatting
* cleanup
* flake8 cleanup
* flake8
* fixed syntax of compiler version detection tests
* fixed syntax of compiler version detection tests
modified: detection.py
* fix typo
* fixes for compilers tests
* remove erroneous tests for outdated -std= flags, remove ifx version check (output won't parse)
Co-authored-by: Frank Willmore <willmore@anl.gov>
* Patch CMake version check in Umpire
* Update version constraint for cmake_version_check patch
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add maintainers to Umpire
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-torch-nvidia-apex] added version 201019 and added dependency on py-pybind11
* [py-torch-nvidia-apex] changed versioning format
* [py-torch-nvidia-apex] removed redundant version condition
* [py-torch-nvidia-apex] removed condition on dependency
* Adding AOCC support for M4
* combining 4 if-statements into a single if-statement with or conditions
* keeping parentheses around the or expressions
* fixing flake8 test failures
Co-authored-by: mohan babu <mohbabul@amd.com>
For some mysterious reason Qt4 stopped building the xmlpatterns
component, needed by some downstream packages. With this patch, the
component successfully builds with
```
qt@4.8.7~dbus~debug~examples~framework~gtk~opengl~phonon+shared~sql~ssl~tools~webkit freetype=none arch=linux-rhel7-haswell %gcc@10.2.0
```
`sbang` now lives at https://github.com/spack/sbang, and it has its own
test suite that's more extensive than what's in Spack. We'll leave sbang
tests to sbang from now on, and just vendor `bin/sbang` directly.
Remaining `sbang` tests have to do with patching files, not with
`sbang`'s functionality.
This update also fixes a bug with `sbang` and multiple command line
arguments that was introduced in #19529. See:
* https://github.com/spack/sbang/pull/1
* https://github.com/spack/sbang/pull/2
- [x] include latest `sbang` from https://github.com/spack/sbang
- [x] remove old `sbang` tests from Spack
- [x] update `COPYRIGHT` and `cmd/license.py`
* Update package.py
Remove breaking patch.
Patching the shebang is useless if the dependencies are properly loaded before execution. Furthermore, the long paths that can be generated when installing with Spack can exceed the maximum length of the shebang.
* Add newer versions of strelka.
* [libcudf] created template
* [libcudf] depends on cuda
* [libcudf] set cmake dir
* [libcudf] depends on boost
* [libcudf] depends on py-pyarrow
* [libcudf] depends on librmm
* [libcudf] depends on dlpack
* [libcudf] added more dependency information from https://github.com/rapidsai/libcudf/blob/v0.15.0/CONTRIBUTING.md#customizing-the-build
* [libcudf] removed python dependencies
* [libcudf] fixed url that got mangled in package renaming
* [libcudf] added default build options from build.sh
* [libcudf] added version 0.16.0a
* [libcudf] removed version 0.16.0a as it's an alpha version
* [libcudf] added homepage and description. removed fixmes
* [libcudf] flake8
* [libcudf] arrow requires +orc
* [libcudf] requires +parquet
* [libcudf] checksum changed
`sbang` was previously a bash script but did not need to be. This
converts it to a plain old POSIX shell script and adds some options. This
also allows us to simplify sbang shebangs to `#!/bin/sh /path/to/sbang`
instead of `#!/bin/bash /path/to/sbang`.
The new script passes shellcheck (with a few exceptions noted in the file)
- [x] `SBANG_DEBUG` env var enables printing what *would* be executed
- [x] `sbang` checks whether it has been passed an option and fails gracefully
- [x] `sbang` will now fail if it can't find a second shebang line, or if
the second line happens to be sbang (avoid infinite loops)
- [x] add more rigorous tests for `sbang` behavior using `SBANG_DEBUG`
On Cori (Cray-XC40), I need to pass the entire path for the compilers; this is what is saved in c_compiler, cpp_compiler, and f_compiler. Therefore, when only the binary name is provided for the MPI wrappers, I run into the same issue. There is no drawback to passing the entire path; it is set by the user through the compiler path anyway.
* added -lpthread flag in kv/tests/CMakeLists.txt
* Update var/spack/repos/builtin/packages/papyrus/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
PHP supports an initial shebang, but its comment syntax can't handle our 2-line
shebangs. So, we need to embed the 2nd-line shebang comment to look like a
PHP comment:
<?php #!/path/to/php ?>
This adds patching support to the sbang hook and support for
instrumenting php shebangs.
This also patches `phar`, which is a tool used to create php packages.
`phar` itself has to add sbangs to those packages (as phar archives
apparently contain UTF-8, as well as binary blobs), and `phar` sets a
checksum based on the contents of the package.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
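A minimal sketch of the trick described above (the helper is illustrative,
not the actual hook code): the second-line shebang is wrapped so PHP
parses it as a comment.
```python
def php_sbang_lines(sbang_path, php_path):
    # Line 1 invokes sbang; line 2 names the real interpreter, hidden
    # inside PHP comment syntax so php itself can still parse the file.
    return ['#!/bin/sh {0}'.format(sbang_path),
            '<?php #!{0} ?>'.format(php_path)]
```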
* New package: py-minrpc
* Delete package.py.save
* Update var/spack/repos/builtin/packages/py-minrpc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`sbang` is not always accessible to users of packages, e.g., if Spack
is installed in someone's home directory and they deploy software
for others. Avoid this by:
1. Always installing the `sbang` script in the `install_tree`
2. Relocating binaries to point to the copy in the `install_tree`
and not the one in the Spack installation.
This PR also:
- ensures that `sbang` is reinstalled if it is modified in Spack
- adds tests
- updates the way `gobject-introspection` patches Makefiles
to support `sbang`
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add BLT package
* Switch install function
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add type='run' to cmake dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add git attribute to BLT
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cp2k: locate correct include dir when using intel-parallel-studio+mkl for fftw-api
* libxc: drop arch-specific intel opt. flags
fixes #17794
* libint: drop arch-specific intel opt. flags, always build Fortran example with FC
fixes #17509
* package/pmdk add variants, version 1.9
* add dependency
* Update var/spack/repos/builtin/packages/pmdk/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The logic in `config.py` merges lists correctly so that list elements
from higher-precedence config files come first, but the way we merge
`dict` elements reverses the precedence.
Since `mirrors.yaml` relies on `OrderedDict` for precedence, this bug
causes mirrors in lower-precedence config scopes to be checked before
higher-precedence scopes.
We should probably convert `mirrors.yaml` to use a list at some point,
but in the meantime here's a fix for `OrderedDict`.
- [x] ensuring that keys are ordered correctly in `OrderedDict` by
re-inserting keys from the destination `dict` after adding the keys from
the source `dict`.
- [x] also simplify the logic in `merge_yaml` by always reinserting
common keys -- this preserves mark information without all the special
cases, and makes it simpler to preserve insertion order.
Assuming a default spack configuration, if we run this:
```console
$ spack mirror add foo https://bar.com
```
Results before this change:
```console
$ spack config blame mirrors
--- mirrors:
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2 spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
/Users/gamblin2/.spack/mirrors.yaml:2 foo: https://bar.com
```
Results after:
```console
$ spack config blame mirrors
--- mirrors:
/Users/gamblin2/.spack/mirrors.yaml:2 foo: https://bar.com
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2 spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
```
Shell integration no longer requires setting `SPACK_ROOT`, so we can
simplify the documentation on it. The docs on shell support and using
packages are getting a bit old, and information on `spack load` (which
seems to be everyone's most common way of using packages) is hard to
find.
This PR simplifies the shell documentation to remove SPACK_ROOT, and also
moves some sections around for clearer organization.
- [x] make docs on sourcing setup scripts clearer and simpler
- [x] introduce `spack load` early in the basic usage guide instead of
burying it in the module docs
- [x] clean up module docs so that spack module tcl loads comes later
- [x] be clear about the different ways to use packages so that the users
can find the docs better.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
GROMACS still requires a version of FFTW when compiling it to utilize
NVIDIA GPUs. In fact, the type of calculation that depends on FFTW --
Particle-Mesh Ewald (PME) -- is generally run on the host system's CPUs,
even when GPUs are available.
* New package: py-rise
* Fix URL and add description
* Update var/spack/repos/builtin/packages/py-rise/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* gaussian-src: initial commit to build from source
* do not install the source, to ensure we do not accidentally distribute
it to users
* set required runtime env vars based on the login.profile
* gaussian-view: update to 6.1.1
PR #19482 updated gcc to only apply the zstd patch until @10.2 but the
releases/gcc-10 branch actually does not contain the patch yet, that is,
gcc@10.3 will most likely have the same problem. Apply the patch for all
10.x releases instead.
* gemini dep -py-bcolz +
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-bcolz URL fix
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
fixes #19476
Module file content is written to file in a
temporary location and read back to be analyzed
by unit tests.
The approach to patch "open" and write to a
StringIO in memory has been abandoned, since
over time other operations insisting on the
filesystem have been added to the module file
generator.
* [py-pyarrow] telling setup.py that we want cuda support
* [py-pyarrow] added orc variant
* [py-pyarrow] passing the orc variant down the line
* [py-pyarrow] added variant description
* ocl-icd: fix build problems
* New package: opencl-c-headers
* New package: opencl-clhpp
* New bundled package: opencl-headers
- bundle C and C++ header files
* ocl-icd: Add +headers variant to use this as opencl provider
* ocl-icd: add new upstream release 2.2.13
* ocl-icd: add asciidoc-py3 and xmlto dependency needed for manpage generation
* ocl-icd and opencl-headers provides OpenCL 3.0
- also add more explicit version providing for older ocl-icd versions
* opencl-headers: add maximum of supported opencl versions for all versions
* opencl-headers: there aren't final releases with OpenCL 3.0
* [orc] created template
* [orc] depends on maven
* [orc] building with -fPIC
* [orc] fixed name of c flags option
* [orc] depends on openssl
* [orc] added dependencies and disabled installing of vendored libs
* [orc] disabling hdfs
* [orc] depending on specific versions of dependencies
* [orc] no building of third party libs
* [orc] helping cmake find the dependencies
* [orc] disabling features that would require static protobuf libraries
* [orc] dependency versions are ranges
* [orc] added homepage and description. removed fixmes
* [orc] flake8
* [orc] switching to compiler-independent code
* r-sf: fix build error
* Update var/spack/repos/builtin/packages/r-sf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add herwig3
* Prepare fixes based on MR (needs checking)
* Set all dependencies (except python) as build-type
* OK now
* Move import to the top of the file
* Fix dependency name
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Synchronization on GitHub macOS runners seems to be very slow, and
frequently the foreground/background tests fail due to the race this
causes. This increases the tolerance for slowness a bit more, to allow up
to 4 spurious output lines in the tests.
This should hopefully result in no more false negatives on these tests
for macOS on GitHub.
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add qgraf
* Update package.py
Changes from review
* Changes from MR
* Fix for URLs containing @ symbol
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Zsh and newer versions of bash have a builtin `which` function that will
show you if a command is actually an alias or a function. For functions,
the entire function is printed, and our `spack()` function is quite long.
Instead of printing out all that, make the `spack()` function a wrapper
around `_spack_shell_wrapper()`, and include some no-ops in the
definition so that users can see where it was created and where Spack is
installed.
Here's what the new output looks like in zsh:
```console
$ which spack
spack () {
: this is a shell function from: /Users/gamblin2/src/spack/share/spack/setup-env.sh
: the real spack script is here: /Users/gamblin2/src/spack/bin/spack
_spack "$@"
return $?
}
```
Note that `:` is a no-op in Bourne shell; it just discards anything after
it on the line. We use it here to embed paths in the function definition
(as comments are stripped).
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Update madgraph to 2.8.1
* Changes from MR
* Changes from MR
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Updated blaspp package
* Modified lapackpp for newest release
* Formatting
* Updates to lapackpp package for new version
* Added dependency on cblas
* Removed cblas dependency
* updated to lapackpp
* Added new version for blaspp and lapackpp
* Removed debugging output
* Converted version matching logic to a for loop
* mpich: yaksa configure fix
modified: var/spack/repos/builtin/packages/mpich/package.py
* typo
* python is not needed when building from preconfigured tarballs
* add maintainers
* Added FFLAGS for apple-clang:11
* Added issue #
* Update var/spack/repos/builtin/packages/mpich/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* update version to avoid compile error
* Update var/spack/repos/builtin/packages/r-rgdal/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add FORM
* Update package.py
Changes from review
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Fixes for thepeg
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* mfem: specify PETSC_DIR, link correct sundials libraries
* fix: only use PETSC_DIR directly for static builds
* fix: only use sundials nvecmpiplusx for MFEM 4.2+
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* LHAPDF should extend Python to get env variables correct
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Adding AOCC compiler to SPACK community
The AOCC compiler system offers a high level of advanced optimizations, multi-threading and processor support that includes global optimization, vectorization, inter-procedural analyses, loop transformations, and code generation. AMD also provides highly optimized libraries, which extract the optimal performance from each x86 processor core when utilized. The AOCC Compiler Suite simplifies and accelerates development and tuning for x86 applications.
* Added unit tests for detection and flags for AOCC
* Addressed reviewers' comments w.r.t. version checks and url/checksum-related line lengths
Co-authored-by: Test User <spack@example.com>
* add updated version of py-dnaio
* Add py-setuptools-scm build dependency
* Fine tune the py-xopen dependency constraint
The needed version of xopen does not become specific until v0.4 of
dnaio.
* Set constraint on py-setuptools-scm
The py-setuptools-scm dependency is needed beginning with v0.4.
* updated version of py-cutadapt
* Update dependency specs
* Add py-setuptools-scm build dependency
* More constraint fixes
* Fix version range for py-xopen
* Added tau version 2.29.1 hash
* Update var/spack/repos/builtin/packages/tau/package.py
Make version name match branch name (master)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add updated version of py-xopen
* Update dependency constraints
* Further refine the python constraints
Also, put them all together.
* Put python constraints at top of list
* New package - Gate
This PR adds the Gate package as well as the ITK dependency.
* Fix flake 8 errors
* Be more explicit with CMake options
Make sure CMake values related to variants are explicitly set to
either ON or OFF.
The ITK_USE_MKL flag will turn on the following:
- USE_FFTWD=ON
- USE_FFTWF=ON
- USE_SYSTEM_FFTW=ON
Since the package depends on fftw-api, those options will always be set.
* A collection of tensorflow fixes and updates
* tensorflow 2.3.1 requires the workaround for external protobuf as well
* Update tensorflow-estimator to 2.3.0
* Update tensorboard to 2.3.0
* Update tensorboard-plugin-wit to use actual releases
* Patch that potentially fixes #16073
* add myself to maintainer list
* Changed make command to support new slate build variable 'blas='
* Updated to use package's "make install" target
* Added variant 'blas' to support switching blas provider and removed legacy 'mkl' variant.
* Fixed problem caused by systems which use a non-bash /bin/sh
* Removed blas= variant in preference for setting blas provider via spec syntax (e.g., ^openblas).
* Fixed formatting
* Changed to MakefilePackage and cleaned up make argument generation
* Implemented "edit" method
* Removed blank line
* Switched to using mpi compiler wrapper variables
* Update var/spack/repos/builtin/packages/slate/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* ADD: testing to dev-build command
* RM: mutually exclusive group for testing in parser
* FIX: test option to subparser and not testing
* ADD: spack-completion.bash
* RM: local devbuildcosmo cmd
* FIX: bad merge --drop-in -b --before options forgotten
* FIX: --test place in spack-completion.bash
* FIX: typo
* FIX: blank line removing
* FIX: trailing white space
Co-authored-by: Elsa Germann <egermann@tsa-ln002.cm.cluster>
The package list at https://spack.readthedocs.io/en/latest/package_list.html claims "it is automatically generated based on the packages in the latest Spack release" but it is actually based on the develop branch. This leads to confusion when users find that e.g. herwigpp is included in the list, but it cannot be found when they install the latest release. That latest release has a package list at https://spack.readthedocs.io/en/stable/package_list.html which does indeed not include herwigpp.
Changing the language from "the latest Spack release" to "this Spack version" might make that clearer. Maybe.
* Update libensemble to v0.7.1
* Update var/spack/repos/builtin/packages/py-libensemble/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add nvhpc compiler definition: "spack compiler add" will now look
for instances of the NVIDIA HPC SDK compiler executables
(nvc, nvc++, nvfortran) in supplied paths
* Add the nvhpc package which installs the nvhpc compiler
* Add testing for nvhpc detection and C++-standard/pic flags
Co-authored-by: Scott McMillan <smcmillan@nvidia.com>
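A trimmed-down sketch of what such a compiler definition looks like; real
definitions also carry version-detection regexes and flag properties, so
treat this as illustrative.
```python
from spack.compiler import Compiler

class Nvhpc(Compiler):
    # Executable names "spack compiler add" searches for in supplied paths.
    cc_names = ['nvc']
    cxx_names = ['nvc++']
    f77_names = ['nvfortran']
    fc_names = ['nvfortran']
```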
Output was, e.g. `Executables in /bin and /,u,s,r,/,b,i,n are both associated with the same spec xz@5.2.2`; it will now be `Executables in /bin and /usr/bin are both associated with the same spec xz@5.2.2`.
Previously config.guess and config.sub were patched only
in the root of the source path.
This modification extends the previous behavior to patch every
config.guess or config.sub file, even in subfolders, if need be.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* allow environments to specify dev-build packages
* spack develop and spack undevelop commands
* never pull dev-build packages from bincache
* reinstall dev_specs when code has changed; reinstall dependents too
* preserve dev info paths and versions in concretization as special variant
* move install overwrite transaction into installer
* move dev-build argument handling to package.do_install
now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install
* allow 'any' as wildcard for variants
* spec: allow anonymous dependencies
raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build
* fix variant class hierarchy
* Make release_90 preferred version.
* Be more explicit about CUDA dependencies.
* Remove duplicate CUDA dependency in Flang package and introduce nvptx variant.
* Fix nvptx variant message.
* Fixed wrong link to version 0.0.0 and add hash for version 0.1.4
* Fix failing build for neovim@master and neovim@stable and add hash for version 0.4.0
* Fix flake8 issues
* Removed unnecessary newline
* Dependency conditions restricted to neovim >= 0.2.0, as previous versions fail to compile
* Removed build dependency on git
* Removed master from all conditions
* autotools: add attribute to delete libtool archives .la files
According to Autotools Mythbuster (https://autotools.io/libtool/lafiles.html)
libtool archive files are mostly vestigial, but they might create issues
when relocating binary packages as shown in #18694.
For GCC specifically, most distributions remove these files with
explicit commands:
https://git.stg.centos.org/rpms/gcc/blob/master/f/gcc.spec#_1303
Considering all of that, this commit adds an easy way for each
AutotoolsPackage to remove every .la file that has been installed.
The default, for the time being, is to maintain them - to be consistent
with what Spack was doing previously.
* autotools: delete libtool archive files by default
Following review this commit changes the default for
libtool archive files deletion and adds test to verify
the behavior.
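A hedged sketch of how a recipe opts back in after the default flipped to
deletion; the attribute name follows the PR description, so confirm it
against the AutotoolsPackage code before relying on it.
```python
from spack import *

class Mylib(AutotoolsPackage):
    # Assumed attribute from this change: keep the installed .la files
    # for this package even though the default is now to delete them.
    install_libtool_archives = True
```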
* Add new package: py-rbtools
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Original version added --no-gcc to CFLAGS when compiling with intel
compilers. This does not appear to be needed and indeed causes problems
(see #18894) with newer intel compilers; I have modified it so the flag
is not added for intel@19: (I confirmed this is needed and works for
intel@20; based on comments in #18854 it looks like this holds for
intel@19 as well).
(Also fix old formatting issue flake8 was complaining about)
* Update of py-redis for merlin-1.7.5
* Add hiredis variant and python versions for 3.5.x versions.
* Update var/spack/repos/builtin/packages/py-redis/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added slurm version 20-02-4-1 and support to build slurmrestd
* cleanup formatting
* cleanup libjwt to pass unittests
* missed one boilerplate
* hacking a pass in unittests
* defer to default install routine in libjwt
This commit refactors the computation of the search path
for aclocal into its own method, so that it's easier to reuse
for packages that need to have a custom autoreconf phase.
Co-authored-by: Toyohisa Kameyama <kameyama@riken.jp>
The r-devtools package was not installable due to a few issues.
- The rstudioapi spec was for 0.11.0 but the rstudioapi version is
actually 0.11. This caused an error during concretization.
- Set r-usethis to depend on rlang@0.4.3: rather than r-lang@0.4.3.
- Set r-usethis to depend on r-gh@1.1.0: rather than r-gh@1.1.0.
- Added version r-gh-1.1.0 as it is not currently present in spack.
* Added CUDAHOSTCXX variable needed to compile with cuda and mpi.
* Added guard for setting CUDAHOSTCXX with MPI.
* Acceptable working version of dealii+cuda+mpi.
By default Spack uses the latest (highest version) GCC
compiler available, which might change across updates
of the Github CI environment.
Since a C compiler is always installed and `mpich~fortran`
will result in faster build times, avoid building the FORTRAN
interface as part of the test.
* cpio: Fix issue compiling with newer intel compilers (#18854)
Do not add --no-gcc for recent intel compilers (e.g. 20.x)
* cpio: Remove --no-gcc flag for intel@19 as well as intel@20
Based on comments from @nrichart, removing --no-gcc option for intel@19
as well as intel@20
* Provide draco-7_8_0.
+ Also provide a patchfile for draco-7_6_0 to support CrayPE builds.
+ Version 7.8.0 has a new variant `+caliper`.
+ Sort dependencies alphabetically after grouping by required and optional.
* Remove patchfile that is no longer needed.
+ Newer versions of draco do not require this patch.
+ Older versions of draco are not supported for spectrum-mpi.
* Change new variant +caliper to default to False.
* pandoc: add variant for texlive
Modifies the pandoc package by adding a variant for texlive, which is only needed for PDF output. Enables this variant by default.
* Fix whitespace
Fix for #19095
When given +openmp, add the correct compiler openmp flag to the link
stage. This seems to be required for %intel compilers.
I do this for all compilers, not just %intel, because it does not seem
to harm anything and might be beneficial for others (and just seems
'correct').
* py-scikit-image: bump version
* address reviewer comments
* address reviewer comments
* address reviewer comments
* py-scikit-image : update dependencies : part 2
* cloudpickle is a docs-only dependency; enable it with a variant if necessary
* address reviewer comments
* cleanup build vs run deps
* address reviewer comments
* Initial cut at FLCL spackage. Works with GCC so far.
* Update spackage to list release which supports spack. Add @agaspar as a maintainer. Default unit tests to disabled when building with spack.
* Change url to 0.2.
* Nope, 0.3.
* add package py-lmodule version 0.1.0
Lmodule is tested with lmod >= 7.x. Lmod 6 has different json
structure in spider which is not supported by lmodule
* py-charm4py: new package
Charm++ for python
Installation notes:
1) charm4py ships with its own charm++ tarball. It really wants
to use the version it ships with. It also builds charm++ in a special way to
produce libcharm.so (but not charmc, etc), so it does not seem
worthwhile to try to hack it to build using a spack-installed charmpp.
2) Originally, the installation was failing due to unresolved cuda
symbols when setup.py was doing a ctypes.CDLL of libcharm.so (in order
to verify version?). This appears to be due to the fact that
libcharm.so had undefined cuda symbols, but did not show libcudart.so as
a dependency (in e.g. ldd output). To fix this, I had to add
libcudart.so explicitly when linking libcharm.so, but since setup.py
untars a tarball to build libcharm, the solution was a tad convoluted:
2a) Add a patch in spack to py-charm4py which creates a patchfile
"spack-charm4py-setup.py.patch" which will modify a Makefile file (after it
is untarred) to add the flags in env var SPACK_CHARM4PY_EXTRALIBS to
the link command for libcharm.so
2b) The spack patch file also patches setup.py to run patch using the
aforementioned patchfile to patch the Makefile after it is untarred, and
sets the SPACK_CHARM4PY_EXTRALIBS variable appropriately in the setup
environment.
* Update var/spack/repos/builtin/packages/py-charm4py/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-charm4py/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-charm4py: flake8 fixes
remove useless import
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update Package, Pymol 2.4
* Fixed flake8 stuff
* more style fixes
* missing ( at EOF
* added py-pymol 2.3 back
* extra line removal
* white space in empty line removal
* added libpng and py-pyqt5 to prefix_path
* Fix 'unexpected product version' error for macOS 11.0
* Adjustment: add the minimum version that this macOS patch is necessary.
* Adding a keyword to prevent the patch being applied to systems other than darwin (macOS)
* Deleting quotation marks
* AMD ROCm 3.8.0 - roctracer-dev
* Update var/spack/repos/builtin/packages/roctracer-dev/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added py-ply dependency
* remove py-ply
* Update var/spack/repos/builtin/packages/roctracer-dev/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* aomp 3.7.0 and rccl 3.8.0 update
* Bump up to ROCm 3.8.0 support on AOMP
* Create 0001-Add-amdgcn-to-devicelibs-bitcode-names-3.8.patch
* Create 0001-Add-amdgcn-to-devicelibs-bitcode-names.patch
* Update var/spack/repos/builtin/packages/aomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This reverts #18359 and follow-on PRs intended to address issues with
#18359 because that PR changes the hash of all specs. A future PR will
reintroduce the changes.
* Revert "Fix location in spec.yaml where we look for full_hash (#19132)"
* Revert "Fix fetch of spec.yaml files from buildcache (#19101)"
* Revert "Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager"
When we attempt to determine whether a remote spec (in a binary mirror)
is up-to-date or needs to be rebuilt, we compare the full_hash stored in
the remote spec.yaml file against the full_hash computed from the local
concrete spec. Since the full_hash moved into the spec (and is no longer
at the top level of the spec.yaml), we need to look there for it. This
oversight from #18359 was causing all specs to get rebuilt when the
full_hash wasn't found at the expected location.
It looks like intel compilers generate warnings for omp pragmas when
the openmp flag is not given, which, due to other flags set, get
promoted to errors.
This adds a flag to ignore the pragma omp warnings (icc diagnostic
number 3180 on %intel@14:).
This change makes sure that when we run the pipeline job that updates
the buildcache package index on the remote mirror, we also update the
key index. The public keys corresponding to the signing keys used to
sign the package were pushed to the mirror as a part of creating the
buildcache index, so this is just ensuring those keys are reflected
in the key index.
Also, this change makes sure the "spack buildcache update-index"
job runs even when there may have been pipeline failures, since we
would like the index always to reflect the true state of the mirror.
* Add rocblas 3.8.0 and add all Tensile deps
* Deploy rocm_smi to the bin/ folder so that it is in $PATH
* BUILD_WITH_TENSILE_HOST=ON on 3.7.0+ and fix flake8
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
[py-particle] format
[py-particle] switch to pypi downloads
[py-particle] specify dependencies in more details
[py-particle] format
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since those files currently exist in buildcaches (in S3 buckets) with
potentially different content types, we should be less restrictive in
what content types we accept when attempting to fetch them. This PR
removes the content type constraint so any file with the matching
name will be found.
* Remove duplication of reconstructed RPATHs caused by multiple
identical entries in prefixes dictionary
* Don't rewrite RPATHs if relative RPATHs are unchanged because the
directory layout is unchanged
* Need to check the binary is not a Mach-O binary in a Linux package or an ELF binary in a macOS package.
* use sys.platform
* Darwin -> darwin for sys.platform
* Created +python_deps variant
- the timemory python bindings can still be imported without these runtime packages, and forcing a dependence by default significantly increases the spack install time
* Added conflict
- added conflicts('+python_deps', when='~python')
* rocm-3.8.0 updates for hipblas,rocsolver,rocm-opencl
* rocm-3.8.0 updates to rocalution and rename and change rocmvalidationsuite
* rocm-3.8.0 update to miopen-hip
* Revert "rocm-3.8.0 updates for hipblas,rocsolver,rocm-opencl"
This reverts commit 2542e8b1be.
* rocm-3.8.0 changes for rocsolver and hipblas
* new package: py-gitpython
* Update var/spack/repos/builtin/packages/py-gitpython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@earth>
* Add more updates for kokkos 3.2 release, particularly nvcc-wrapper
* Use an ordinary Package
Co-authored-by: Jeremiah J Wilke <jjwilke@kokkos-dev-2.sandia.gov>
* Rework spack.util.web.list_url()
list_url() now accepts an optional recursive argument (default: False)
for controlling whether to only return files within the prefix url or to
return all files whose path starts with the prefix url. Allows for the
most efficient implementation for the given prefix url scheme. For
example, only recursive queries are supported for S3 prefixes, so the
returned list is trimmed down if recursive == False, but the native
search is returned as-is when recursive == True. Suitable
implementations for each case are also used for file system URLs.
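A usage sketch of the reworked call, following the description above
(the mirror URL is made up):
```python
import spack.util.web as web

# recursive=False: only files directly under the prefix.
# recursive=True: every file whose path starts with the prefix.
top_level = web.list_url('s3://example-mirror/build_cache', recursive=False)
everything = web.list_url('s3://example-mirror/build_cache', recursive=True)
```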
* Switch to using an explicit index for public keys
Switches to maintaining a build cache's keys under build_cache/_pgp.
Within this directory is an index.json file listing all the available
keys and a <fingerprint>.pub file for each such key.
- Adds spack.binary_distribution.generate_key_index()
- (re)generates a build cache's key index
- Modifies spack.binary_distribution.build_tarball()
- if tarball is signed, automatically pushes the key used for signing
along with the tarball
- if regenerate_index == True, automatically (re)generates the build
cache's key index along with the build cache's package index; as in
spack.binary_distribution.generate_key_index()
- Modifies spack.binary_distribution.get_keys()
- a build cache's key index is now used instead of programmatic
listing
- Adds spack.binary_distribution.push_keys()
- publishes keys from Spack's keyring to a given list of mirrors
- Adds new spack subcommand: spack gpg publish
- publishes keys from Spack's keyring to a given list of mirrors
- Modifies spack.util.gpg.Gpg.signing_keys()
- Accepts optional positional arguments for filtering the set of keys
returned
- Adds spack.util.gpg.Gpg.public_keys()
- As spack.util.gpg.Gpg.signing_keys(), except public keys are
returned
- Modifies spack.util.gpg.Gpg.export_keys()
- Fixes an issue where GnuPG would prompt for user input if trying to
overwrite an existing file
- Modifies spack.util.gpg.Gpg.untrust()
- Fixes an issue where GnuPG would fail for input that were not key
fingerprints
- Modifies spack.util.web.url_exists()
- Fixes an issue where url_exists() would throw instead of returning
False
* rework gpg module/fix error with very long GNUPGHOME dir
* add a shim for functools.cached_property
* handle permission denied error in gpg util
* fix tests/make gpgconf optional if no socket dir is available
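An illustrative sketch of the `build_cache/_pgp` layout described above;
the JSON schema shown is an assumption based on the description, not the
verified format.
```python
import json
import os

def write_key_index(pgp_dir, fingerprints):
    # Assumes one <fingerprint>.pub file already lives in pgp_dir;
    # index.json lists the keys so clients need not list the directory.
    index = {'keys': {fpr: {} for fpr in fingerprints}}
    with open(os.path.join(pgp_dir, 'index.json'), 'w') as f:
        json.dump(index, f)
```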
* Flang master branch is now the preferred version.
* Flang master branch can now use LLVM 9
* Remove master as this was never used by Flang.
* Add LLVM-Flang release_90.
Magma is not currently compatible with CUDA-11. While this is reflected
in the package, it is done with a comment in a `depends_on` directive,
which has the effect of trying to install a version of CUDA that may be
different from the one in the current environment, without any message
to the end user. A `conflicts` is a better way to handle this.
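A sketch of the suggested `conflicts` fix (the version bound is
illustrative):
```python
from spack import *

class Magma(CMakePackage, CudaPackage):
    # A conflict makes concretization fail with a clear message instead
    # of silently selecting a CUDA different from the environment's.
    conflicts('^cuda@11.0:', when='+cuda',
              msg='magma is not yet compatible with CUDA 11')
```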
* Disable bash completion by default.
* flake8
* Adding explicit dependence on libuuid
* Adding explicit dependence on cryptsetup
This way we don't pick up host crypto packages by mistake.
* Fixing the completion directory.
* Update var/spack/repos/builtin/packages/util-linux/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* flake8
* Removing libuuid linkage according to @michaelkuhn on #18696
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* trigger ascent e4s pipeline on merge to spack develop
* change pipeline name: ecpcitest/e4s is the pipeline that will be triggered for merges on develop; it's the E4S use-case.
This PR adds the current release version of mumax and tweaks the install
of the previous beta version.
- Set the url parameter to reflect the release version over the beta
version. Hopefully, this will be consistent going forward.
- Set an explicit url for the previous beta version.
- Accept values for `cuda_arch`. The previous version had its own list
but the release version does not.
- Replace the built in cuda compute capabilities list with the one
provided by Spack for the 3.10beta version.
This PR fixes a couple of things with the libbeagle package.
- libbeagle can only be built for one GPU type. Add a test for that.
- version 2 had the arch statement in
libhmsbeagle/GPU/kernels/Makefile.am but version 3 has it in
configure.ac. Put the variant specified value in configure.ac for
consistency.
Due to recent changes in the `netcdf-c` package, it is now necessary to explicitly request a non-mpi-enabled hdf5 build if building a non-mpi-enabled seacas.
* libvdwxc: unbreak concretization, request fftw-api
mixing both fftw and fftw-api in a dependency tree can trigger the
following:
```
$ spack spec cp2k@master +sirius
==> [2020-09-16-12:36:06.552981] sirius applying constraint gsl
==> [2020-09-16-12:36:06.554270] sirius applying constraint openblas@0.3.10%gcc@7.5.0~consistent_fpcsr~ilp64+pic+shared threads=none arch=linux-opensuse_leap15-sandybridge
Traceback (most recent call last):
File "./bin/spack", line 64, in <module>
sys.exit(spack.main.main())
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/main.py", line 762, in main
return _invoke_command(command, parser, args, unknown)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/main.py", line 490, in _invoke_command
return_val = command(parser, args)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/cmd/spec.py", line 103, in spec
spec.concretize()
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2228, in concretize
user_spec_deps=user_spec_deps),
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2716, in normalize
visited, all_spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2654, in _normalize_helper
dep, visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2613, in _merge_dependency
visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2654, in _normalize_helper
dep, visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2554, in _merge_dependency
provider = self._find_provider(dep, provider_index)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2489, in _find_provider
providers = provider_index.providers_for(vdep)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/provider_index.py", line 80, in providers_for
return sorted(s.copy() for s in result)
File "/data/tiziano/debug-spack/spack2/lib/spack/llnl/util/lang.py", line 249, in <lambda>
lambda s, o: o is not None and s._cmp_key() < o._cmp_key())
TypeError: '<' not supported between instances of 'str' and 'NoneType'
```
while at the same time disallowing MKL as an fftw provider.
Both issues are solved by depending on `fftw-api@3` instead and adding a
conflict on `^fftw~mpi` when using `+mpi` (thanks to alalazo); see the sketch
after this entry.
* cp2k: use conflicts instead of runtime checks for fftw/openblas variants
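A sketch of the resulting directives (the exact `when` conditions and the way MKL is excluded may differ in the real recipe):
```
from spack import *


class Libvdwxc(AutotoolsPackage):
    variant('mpi', default=True, description='Build with MPI support')

    # Depend on the virtual fftw-api so any FFTW3-compatible provider works,
    # then rule out the combinations that cannot work.
    depends_on('fftw-api@3')
    conflicts('^fftw~mpi', when='+mpi',
              msg='an MPI-enabled FFTW is required when building with +mpi')
    conflicts('^intel-mkl', msg='MKL is not supported as an fftw provider')
```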
* Initial Draft of Cosmoflow Spackage
Need to add in logic to streamline cpu/gpu builds
* Added ~cuda logic to cosmoflow spackage
Added logic to support a ~cuda build for cosmoflow
* Requested Changes to Cosmoflow Spackage
Made requested changes to cosmoflow spackage
MatIO development has switched to github from sourceforge. Updated the `git` and `url` variables and added the four new versions (1.5.14 -- 1.5.17) that have been released since the last update of this package.
* qbox: install to correct directory structure
* qbox: Have qb executable put in bin rather than src subdir
* qbox: Fix python script shebangs to use python from path
* qbox: Add dependencies on gnuplot, python2 for utilities
* qbox: fix flake8 issue
* qbox: Add $prefix/util to PATH
* Initial CRADL Spackage Work
Currently resolving the `--single-version-externally-managed` error
* Fixed GPUtil Issues
Thanks to Vinay Ramakrishnaiah for overwriting install
* Finished CRADL Install Function
Finished CRADL install function which is basically copying the scripts
to the install directory. Also resolved flake8 issues for PR purposes
Update pipelines documentation to describe how 'tags', 'variables',
'image', 'before_script', 'script', and 'after_script' can be
supplied at the top level, to be used by any of the runner mappings,
and also overridden by any of the runner mappings.
Also show an example of capturing the custom spack SHA at pipeline
generation time, so all jobs are sure to run with the same version
of spack, as a means to illustrate the $env:VARIABLE_NAME syntax.
* Add new package: webbench
* Update var/spack/repos/builtin/packages/webbench/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cpio: add --rtlib=compiler-rt for %fj
* cpio: simplify if
* Update var/spack/repos/builtin/packages/cpio/package.py
This seems better.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package: hping
* Update var/spack/repos/builtin/packages/hping/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/hping/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Fix flake8 errors
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Use the config path instead of the basename
* Removing unused variables
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Test
Making sure that if there are two include config files with the same basename, both are applied
* Edit test assert
Co-authored-by: Greg Becker <becker33@llnl.gov>
* ncurses: adding external support.
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixing includes.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Fixes #18441
When writing an environment, there are cases where the lock file for
the environment may be removed. In this case there was a period
between removing the lock file and writing the new manifest file
where an exception could leave the manifest in its old state (in
which case the lock and manifest would be out of sync).
This adds a context manager which is used to restore the prior lock
file state in cases where the manifest file cannot be written.
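A minimal sketch of such a context manager (a hypothetical helper, not Spack's actual implementation):
```
import os
import shutil
from contextlib import contextmanager


@contextmanager
def preserve_file(path):
    """Restore path to its prior state if the body raises (sketch)."""
    backup = path + '.bak'
    existed = os.path.exists(path)
    if existed:
        shutil.copy2(path, backup)
    try:
        yield
    except Exception:
        if existed:
            shutil.move(backup, path)   # roll back to the old contents
        elif os.path.exists(path):
            os.remove(path)             # the file did not exist before
        raise
    else:
        if existed:
            os.remove(backup)           # success: discard the backup
```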
* Checksummed New Flux Versions
Checksummed new flux versions to let spack detect them
* Added CXXFlags to build Flux-sched
Added missing cxxflags to build flux-sched
* Adding Cuda Variant to SW4Lite
Added cuda variant of sw4lite as per guidance in README
* Updated SW4Lite+cuda to Current Header Conventions
Updated sw4lite+cuda to use current conventions for spackage include
dirs
* Fixing Flake8 Issue with Sw4lite+cuda Fix
Fixed overly long line and further underlined sticky note reminding me
to run flake8 BEFORE pushing
* Switching to Spack Compiler Wrapper
Switching to spack compiler wrapper for consistency
* Orca: Add new versions.
* Orca: Support OpenMPI without the legacy wrappers.
By default, Spack builds OpenMPI without the legacy wrappers when using the Slurm scheduler. This breaks Orca since its binaries are hardcoded to call "mpirun". To work around this issue, add an "mpirun" wrapper which calls "srun" when required.
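A rough sketch of the shim idea (a hypothetical helper; the real package decides at install time whether the wrapper is needed):
```
import os
import stat


def install_mpirun_shim(bindir):
    """Write a tiny mpirun that forwards to srun (hypothetical helper)."""
    shim = os.path.join(bindir, 'mpirun')
    with open(shim, 'w') as f:
        f.write('#!/bin/sh\n'
                'exec srun "$@"\n')
    # Make the shim executable.
    mode = os.stat(shim).st_mode
    os.chmod(shim, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```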
* cp2k: do not support ~openmp for v8+
* sirius: version bump
* cp2k: fix overlapping deps for elpa
fixes #18029
* cp2k: update SIRIUS dependency for v8+
* spfft: requires CMake 3.11+
* cp2k: fix build with +sirius
* darshan-util: remove return(-1) from void function
* Update var/spack/repos/builtin/packages/darshan-util/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/darshan-util/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* gamess-ri-mp2-miniapp: initial import
* flake8
* Update var/spack/repos/builtin/packages/gamess-ri-mp2-miniapp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This is a special case of overriding, since each section is matched against the current spec.
The trailing ':' for sections with an override is now removed when parsing the configuration, so the special handling for the modules configuration stopped working, but this went unnoticed.
Starting with OpenSceneGraph 3.5.5, support for windows managed by Qt
has been moved to the separate project osgQt. Hence, a dependency on Qt
is not needed any longer for version 3.5.5 or newer.
In order to still satisfy the dependency on OpenGL, a depends_on('gl')
has been added.
Without setting the build environment, the installation fails with
```
1 error found in build log:
35946 fmtutil [INFO]: /usr/local/pkg/Installs/linux-ubuntu18.04-skylake_avx512/gcc7.4.0/texlive/20190410/rgs2nakycorkgzno/t
exmf-var/web2c/pdftex/pdfcslatex.fmt installed.
35947 fmtutil [INFO]: Disabled formats: 6
35948 fmtutil [INFO]: Successfully rebuilt formats: 45
35949 fmtutil [INFO]: Total formats: 51
35950 fmtutil [INFO]: exiting with status 0
35951 ==> [2020-09-07-21:23:21.482745] '/usr/local/pkg/Installs/linux-ubuntu18.04-skylake_avx512/gcc7.4.0/texlive/20190410/
rgs2nakycorkgzno/bin/x86_64-linux/mtxrun' '--generate'
>> 35952 /usr/bin/env: 'texlua': No such file or directory
```
Maybe there is a better way...
Cython requires a library that is part of Python 3.8, and that setuptools
provides for earlier Python versions. This makes setuptools a run
dependency to allow running with Python < 3.8.
`spack install --yes-to-all` doesn't actually make the build non-interactive,
but that is why people typically use it. This documents that you must also
specify `--no-checksum` for a fully non-interactive build.
* Modules: Deduplicate suffixes but don't sort them.
The suffixes' order is defined by the order in which they appear in the configuration file.
* Modules: Modify tests to use spack_yaml.load_config.
spack_yaml.load_config ensures that the configuration is stored in an ordered manner. Without this change, the behavior of the tests did not match Spack's.
* Modules: Tweak the suffixes test to better catch ordering issues.
* new package: py-textblob
add variant to py-nltk to allow for data download/installation
add dependencies to py-nltk so that bin/nltk works
* add resources and resource generation script
* spack config: default modification scope can be an environment
The previous model was that environments are the highest priority config
scope for config reading operations, but were not considered for config
writing operations. Now, the active environment is the highest priority
config scope for both reading and writing operations.
Now spack config add, spack external find and spack compiler set environment
configuration in the environment by default if an environment is active. This is a
change in default behavior for these routines, but better matches the mental
model for an environment taking precedence over the user's default config file.
* add scope argument to 'spack external find' to choose non-default scope
* Increase testing for config modifications on environments
Co-authored-by: Gregory Becker <becker33@llnl.gov>
At some point in the build phase a script
spack-src/scripts/convert-template
has a shebang looking for python in the path.
Currently this picks up the system python if it is in the invoker's path, but
it should use python from spack, so add a build dependency on python.
Many system-installed binaries (at least in Debian) are built against a
libtinfo.so that has versioned symbols. If spack builds a version without this
functionality, and it winds up in the user's LD_LIBRARY_PATH via spack load,
system binaries will begin to complain.
```
$ less log.txt
less: /opt/spack/.../libtinfo.so.6: no version information available (required by less)
```
Co-authored-by: Luke D'Alessandro <ldalessa@uw.edu>
The 'external_modules' attribute on a Spec, when read from a YAML
configuration file, may contain extra formatting that is lost when
that Spec is written-to/read-from JSON format. This was resulting in
a hashing instability (when the Spec was read back, it would report a
different hash). This commit adds a function which removes the extra
formatting from 'external_modules' as it is passed to the Spec in
__init__ to ensure a consistent hash.
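In sketch form, the normalization can be as simple as coercing every entry to a plain string before it is stored (a hypothetical helper; the real function name may differ):
```
def _format_module_list(modules):
    # Entries read from YAML can carry formatting state that a JSON
    # round-trip loses; hashing the plain-string form keeps hashes stable.
    if modules is not None:
        modules = [str(m) for m in modules]
    return modules
```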
* Add rocm 3.7.0 libs
* Make 3.7.0-only dependency on numactl explicit
* Add rocm-device-libs dep to rocm-clang-ocl
* Update the cmakelists dir in rocm-debug-agent
* Make rocm-debug-agent work on 3.7.0
* Disable tensile host; following rocm-arch recommendations
* ldak: new package at 5.1
* flake8
* Re-run tests
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-gitdb
* Update var/spack/repos/builtin/packages/py-gitdb/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Set GOPATH in build environment to avoid creating files in the user's
default GOPATH (e.g. ~/go).
* Support for external find.
* Added latest release 0.74.3.
Remove prior built-in Trilinos subrepository.
Added a Trilinos conflict discovered while documenting ForTrilinos:
```
***
*** ERROR: Setting Trilinos_ENABLE_SEACASExodus=OFF which was 'ON' because SEACASExodus has a required library dependence on disabled TPL Netcdf!
***
```
As detailed in https://bugs.python.org/issue33725, starting new
processes with 'fork' on Mac OS is not guaranteed to work in general.
As of Python 3.8 the default process spawning mechanism was changed
to avoid this issue.
Spack depends on the fork-based method to preserve file descriptors
transparently, to preserve global state, and to avoid pickling some
objects. An effort is underway to remove dependence on fork-based
process spawning (see #18205). In the meantime, this allows Spack to
run with Python 3.8 on Mac OS by explicitly choosing to use 'fork'.
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
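The gist of the workaround above, in miniature (plain multiprocessing, outside of Spack):
```
import multiprocessing

# Python 3.8 on Mac OS defaults to 'spawn'; request 'fork' explicitly so
# file descriptors and global state carry over to the child process.
ctx = multiprocessing.get_context('fork')


def child():
    print('running in a forked child')


if __name__ == '__main__':
    p = ctx.Process(target=child)
    p.start()
    p.join()
```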
* [py-torch-geometric] depends on py-torch-sparse
* [py-torch-geometric] setting TORCH_CUDA_ARCH_LIST
* [py-torch-geometric] added the rest of the dependencies
* [py-torch-geometric] added cuda variant and added more build env vars
* [py-torch-geometric] added variant info for dependencies
* [py-torch-geometric] flake8
* [py-torch-geometric] add variant description
* HPCC Benchmark: added HPC Challenge (HPCC) benchmark
* HPCC Benchmark: modified error message on lack of fftw2 interface in MKL
* hpcc: fixed styling add one more installation example
* hpcc: styling fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* hpcc: changed include and lib location setter
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* hpcc: fixed styling add one more installation example
* hpcc: removed readme.md
* hpcc: develop repo now is in github
* hpcc: march arguments are set explicitly in case of intel compilers, added -restrict flag, which is needed for older intel compilers (at least <=19.0.5.281)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* wrf: new package
* wrf: fix install dir
* wrf: ndown location
* Add more compiler and nesting options to wrf package
* Fix configure that didn't find pgf90, use tempfile and compile in parallel
* WRF v4.2 with parallel I/O support through pnetcdf
Signed-off-by: michael laufer <michael.laufer@toganetworks.com>
* extend Package, compiler wrapper now used, small fixes
Signed-off-by: michael laufer <michael.laufer@toganetworks.com>
* Update var/spack/repos/builtin/packages/wrf/package.py
fixed typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Levi Baber <baberlevi@gmail.com>
Co-authored-by: eXact lab <info@exact-lab.it>
Co-authored-by: michael laufer <michael.laufer@toganetworks.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-update-checker
* add test deps
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove lint stuff.
Co-authored-by: Sinan81 <Sinan81@earth>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`configure` script of Modules 4.5.2 is a bit too strict and breaks when
special options like `--disable-dependency-tracking` are set. This issue
will be fixed in the Modules project starting with version 4.5.3
(cea-hpc/modules#354).
This change adapts `configure` options set when installing version 4.5.2
to avoid options unrecognized on this version.
Fix #18420
* libvterm: renumber version and add 1.0.3
neovim: build on aarch64
* Remove unneeded comment.
* libvterm: newer bazaar snapshot version is set to version 0.0.
neovim: adjust for the libvterm version change; the libtermkey version bug is fixed.
* update libvterm versions.
* Add new package: byte-unixbench
* refine install flow
* Update var/spack/repos/builtin/packages/byte-unixbench/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package: leptonica
* Update var/spack/repos/builtin/packages/leptonica/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-spacy: new version 2.3.2
update en-core-web-sm to @2.3.1
add en-vectors-web-lg@2.3.0
* update deps
* wasabi
Co-authored-by: Andrew Elble <aweits@localhost.localdomain>
* Update version for package Draco
+ Add support for `draco-7.7.0`.
+ Introduces new `+cuda` variant. This variant is only allowed in version
`7.7.0:`.
+ Restrict `random123` to compatible versions.
+ Restrict `libquo` to compatible versions.
+ Moving forward, require `python@3:`
+ Moving forward, the `+superlu_dist` variant is no longer supported.
+ Improve printed output for `--test` mode by adding `ctest` option
`--output-on-failure`
+ Provide a patch to support IBM Spectrum-MPI in version `7.7.0:`
+ Provide a patch to allow variant `~cuda` to actually disable GPU portions of
the code when a GPU is discovered on the local system.
* Remove unnecessary function decoration.
* Adding externals for bison and flex
Added because bison actually pulls in a ton of stuff.
* Need to escape parentheses.
* Need to add re package.
* Adding re package.
* spectrum-mpi: adding external support.
* Package is tested, works on LLNL lassen
* Spectrum external now detects the correct compiler
* Changing code to not output all compilers
Done per becker33's request on #18055
If Thyra isn't explicitly enabled at the package level, trilinos fails
to build.
```
/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-stage/spack-stage-trilinos-12.18.1-vfmemkls4ncta6qoptm5s7bcmrxnjhnd/spack-src/packages/muelu/adapters/stratimikos/Thyra_XpetraLinearOp_def.hpp:167:15: error:
no member named 'ThyraUtils' in namespace 'Xpetra'
Xpetra::ThyraUtils<Scalar,LocalOrdinal,GlobalOrdinal,Node>::toXpetra(rcpFromRef(X_in), comm);
~~~~~~~~^
```
* py-basemap
* Updated versions + URL attribute
* Update var/spack/repos/builtin/packages/py-basemap/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-basemap/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removed unnecessary comment
* flake8
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding external support for mvapich2.
This picks up all the options that are currently settable by
the spack package. It also detects the compiler and sets it
appropriately.
* Removing debugging printing.
* Adding changes suggested by @nithintsk
+ Added version 2.1.9
+ Previously the SZ package incorrectly depended on CMake without a
version dependency, but actually version 3.13 or newer is required
+ Added myself as a maintainer for the SZ spack package
This is a bug release with some new features and bug fixes. Among them:
[Batch] Set number of MPI processes for SLURM. (Ben Tovar)
[General] Use the right signature when overriding gettimeofday. (Tim
Shaffer)
[Resource Monitor] Add context-switch count to final summary. (Ben
Tovar)
[Resource Monitor] Fix kbps to Mbps typo in final summary. (Ben Tovar)
[WorkQueue] Update example apps to python3. (Douglas Thain)
* samtools: Add version 0.1.8 for OSS soapdenovo-trans.
* Add depend on zlib and samtools to build on aarch64.
* soapdenovo-trans: Change the condition of depend on zlib and samtools.
* New package: cxxopts
* Use +unicode instead of unicode=True
- Make the unicode option more explicit
* Add two new variants to spack for upcoming 1.5, stable and develop
* Add as maintainer
* Add depends_on on clauses
* Remove unrelated change
I know that it's just an example, but I was trying to figure out what was going on and it wasn't making sense....
`tput sgr0` resets the terminal state (http://linuxcommand.org/lc3_adv_tput.php) and I can't see any reason to do it twice. Deleting the second occurrence doesn't seem to break the fancy prompt effect.
* qgis
* Update package.py
QGIS 3.12.1 can use PROJ >= 4.9.3. Therefore both version restrictions on PROJ were incorrect.
https://github.com/qgis/QGIS/blob/final-3_12_1/INSTALL
* Update package.py
Add explanation to (hopefully temporary) removal of hdf5 dependency.
* Remove overly restrictive GRASS version number.
* flake8
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Set the location for dependencies by specifying explicitly both
the include and lib paths. This makes it possible to handle cases where
the libraries are installed in lib64 instead of lib.
fixes #17556, fixes #10842, closes #18150
Compilers can have strange versions, as the version is provided by the user. We know the real version internally, (by querying the compiler) so expose it as a property and use it in places we don't trust the user. Eventually we'll refactor this with compilers as dependencies, but this is the best fix we've got for now.
- [x] Make `real_version` a property and cache the version returned by the compiler
- [x] Use `real_version` to make C++ language level flags work
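A simplified sketch of the cached property (names follow the description above, not necessarily Spack's code):
```
class Compiler(object):
    def __init__(self, version):
        self.version = version       # user-provided; not trusted
        self._real_version = None

    def _query_version(self):
        # Placeholder: the real code runs the compiler and parses its output.
        raise NotImplementedError

    @property
    def real_version(self):
        # Query the compiler once, then reuse the cached answer.
        if self._real_version is None:
            self._real_version = self._query_version()
        return self._real_version
```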
Restores the fetching progress bar sans failure outputs; restores non-debug reporting of using fetch cache for installed packages; and adds a unit test.
* Add status bar check to test and fetch output when already installed
Some of the feature flags are named differently and clwb is missing on
my i7-1065G7. cascadelake and cannonlake might have similar problems but
I do not have access to those architectures to test.
- add cuda variant, enabled by default, but conflicting with
strumpack@:3.9.999
- add zfp variant, enabled by default, but conflicting with
strumpack@:3.9.999
- update minimum CMake version to 3.11
- for version 4.0.0:, do not use mpi wrappers. v4.0.0 uses CMake
MPI targets
- for version 4.0.0, add dependency on butterflypack@1.2.0:
- remove versions 3.1.0 and older
- make parmetis variant True by default
- add TODO for slate variant (spack package not ready yet)
While I believe there must have been a reason to restrict libtool to <=
2.4.2, adios compiles just fine with libtool 2.4.6 for me.
In fact, without this change, I'm getting this error:
libtool: Version mismatch error. This is libtool 2.4.6, but the
libtool: definition of this LT_INIT comes from libtool 2.4.2.
libtool: You should recreate aclocal.m4 with macros from libtool 2.4.6
This doesn't make much sense, since spack did build libtool@2.4.2 as a
dependency, and was supposedly trying to use it. My guess is that on
this system (NERSC's cori) the system libtool in /usr/bin, which is
2.4.6 somehow got picked up partially.
Semi-recently the lua spackage was updated to explicitly add libtinfow
to the lua build line. Ncurses provides this but only when the +termlib
variant is enabled
* New interface reconstruction package
* forgot to put in CMake option for Jali
* cleanup whitespace
* fix lines with more than 79 chars
* more long line cleanup
* fix typo WONTON_ENABLE_Kokkos ---> TANGRAM_ENABLE_Kokkos
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* make_package_relative: relocate rpaths on cray
* relocate_package: relocate rpaths on cray
* platforms: add `binary_formats` property
We need to know which binary formats are supported on a platform so we
know which types of relocations to try. This adds a list of binary
formats to the platform and removes a bunch of special cases from
`binary_distribution.py`.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
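A sketch of what the `binary_formats` property amounts to (the class layout is illustrative, not Spack's actual platform hierarchy):
```
class Platform(object):
    # Default: most platforms only deal with ELF binaries.
    binary_formats = ['elf']


class Darwin(Platform):
    # Mac OS uses Mach-O executables and libraries instead.
    binary_formats = ['macho']
```
Relocation code can then iterate over `platform.binary_formats` rather than special-casing platform names.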
See #18033
libssh seemed to detect and link to system krb5 libraries if found
to provide gssapi support, causing issues/system dependencies/etc.
We add a boolean variant gssapi
If +gssapi, the spack krb5 package is added as a dependency.
If ~gssapi, the CMake flags are adjusted to not use gssapi so that it
does not link to any krb5 package.
xz-utils already builds a shared library. The +pic variant adds the
compiler pic flag to the static archive so that it can be linked into
another shared library.
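One way to express such a variant in a recipe (a sketch, not the exact xz change):
```
from spack import *


class Xz(AutotoolsPackage):
    variant('pic', default=False,
            description='Compile the static archive with PIC so it can be '
                        'linked into another shared library')

    def flag_handler(self, name, flags):
        if name == 'cflags' and '+pic' in self.spec:
            flags.append(self.compiler.cc_pic_flag)
        return (flags, None, None)
```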
* Add Collier and SysCalc recipes
* Remove extra syscalc version
* Build collier with -j1 for @:1.2.4
* Add recipe for gosam-contrib
* Update gosam-contrib recipe with 'provides'
* Madgraph recipe, first version
* Finalize madgraph recipe + flake8
* Make py2 version of madgraph default; fix hash for syscalc; fix patch
* Handle virtual packages (#3)
* Update package.py
* Update packages.yaml
* Remove virtual packages - pt. 1
* Remove virtual packages - pt. 2
* Changes from review - pt. 1
* Changes from code review - pt. 2
* Update var/spack/repos/builtin/packages/collier/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/madgraph5amc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add hash for version 2.7.2 (available in our private mirror)
* Fixes for 2.7.3 family
* Patches for 2.7.3{.py3,}{.atlas,}
* Fix hash of syscalc
* Hack to fix concretization (2.7.3 matches 2.7.3.py3)
* Add conflict statement (reported to devs)
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Delete madgraph5amc-2.7.2.atlas.patch
* Delete madgraph5amc-2.7.2.patch
* Update package.py
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding new packages Mvapich2x and Mvapich2-GDR which can be installed only via binary mirrors
* Added docstring descriptions to both packages
* Removed variant wrapper for cuda dependencies
* Fixed multiple flake8 errors
* Updated APIs to pass unit tests
* Updated APIs for MVAPICH2-X package and fixed flake8 warnings for MVAPICH2-GDR
* Changed url back to single line
* Removed extra parentheses around URL string
Co-authored-by: nithintsk <nithintsk@github.com>
* [root] add dataframe cmake option
@chissg @HadrienG2 @drbenmorgan
This has been a separate cmake option starting v6-19 I believe: 31292b9082
It should default to true -- not sure why, but this recipe sets it to off.
I could add a variant too, but since it has become an integral part of root and doesn't introduce extra dependencies, I'd propose to just set it to true like I do here.
* Update package.py
Before this PR, packages.yaml files that contained an
empty "paths" or "modules" attribute were not updated
correctly, since the update function was not reporting
them as changed after the update.
This PR fixes that issue and adds a unit test to
avoid regression.
This commit adds output to the "spack external find"
command to inform users of the result of the operation.
It also fixes a bug introduced in #17804 due to the fact
that a function was not updated to conform to the new
packages.yaml format (_get_predefined_externals).
* Update the change to add gomp compatibility to llvm-openmp.
* Update the change to add gomp compatibility to llvm-openmp using append instead of extend.
* Fix flake8 issue.
Co-authored-by: Jim Galarowicz <jgalarowicz@newmexicoconsortium.org>
* pFUnit: Added support for version 4
pFUnit v4 uses submodules, so we must fetch from the repo rather
than grabbing the tarball (see #11642).
* pFUnit: Added conflicts
pFUnit 4 causes an internal compiler error with gcc 7.2.0, and
several pFUnit versions are incompatible with shared libraries.
* pFUnit: Added conflicts for version 4
Version 4 uses Fortran 2008 features and cannot be built with gcc
compilers prior to 8.4.
* pFUnit: Fixed conflicts/dependencies as suggested
* pFUnit: Version 4 no longer fetches from git
Checksummable files are fetched instead.
* pFUnit: Simplify major version check
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pFUnit: Removed unnecessary patch for v4
The patch is still applied to v3.
* pFUnit: Modified MPI flag for v4
pFUnit v3 and v4 use different CMake flags to enable/disable MPI
support. Also added a conflict for v3 with MPI enabled using
gfortran 10, since newer gfortran is more finicky about datatypes.
* pFUnit: Rearranged mpi logic
* pFUnit: changed m4 to a build dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pFUnit: Added URL back
I did not realize it was needed by "spack versions" and
"spack checksum". Thanks @adamjstewart!
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When the user explicitly sets ~fortran, mpich builds without fortran
support. This will make building C/C++ libraries using clang easier,
since clang does not offer a fortran compiler by default (yet).
Since the user has to disable Fortran support explicitly, this change
is not breaking.
* Handle uninstalled rootspecs in buildcache
- Do not parse specs / find matching specs when in an environment and no
package string is provided
- Error only when a spec.yaml or spec string are not installed. In an
environment it is fine when the root spec does not exist.
- When iterating through the matched specs, simply skip uninstalled
packages
* Run Python2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty which does not have a patchelf package. The script will skip building on macOS
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build'), relying instead on a test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
Style and documentation tests take just a few minutes
to run. Since in Github actions one can't restart a single
job but needs to restart an entire workflow, here we group
tests with similar duration together.
- [x] Remove references to `master` branch
- [x] Document how release branches are structured
- [x] Document how to make a major release
- [x] Document how to make a point release
- [x] Document how to do work in our release projects
* Move flake8 tests on Github Actions
* Move shell test to Github Actions
* Moved documentation build to Github Action
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
This is needed because libcuda is used by the driver,
whereas libcudart is used by the runtime. CMake searches
for cudart instead of cuda.
On LLNL LC systems, libcuda is only found in compat and
stubs directories, meaning that the lookup of libraries
fails.
`spack -V` stopped working when we added the `releases/latest` tag to
track the most recent release. It started just reporting the version,
even on a `develop` checkout. We need to tell it to *only* search for
tags that start with `v`, so that it will ignore `releases/latest`.
`spack -V` also would print out unwanted git error output on a shallow
clone.
- [x] add `--match 'v*'` to `git describe` arguments
- [x] route error output to `os.devnull`
`spack buildcache list` was trying to construct an `Arch` object and
compare it to `arch_for_spec(<spec>)`. for each spec in the buildcache.
`Arch` objects are only intended to be constructed for the machine they
describe. The `ArchSpec` object (part of the `Spec`) is the descriptor
that lets us talk about architectures anywhere.
- [x] Modify `spack buildcache list` and `spack buildcache install` to
filter with `Spec` matching instead of using `Arch`.
- [x] Make it easier to get a `Spec` with a proper `ArchSpec` from an
`Arch` object via new `Arch.to_spec()` method.
- [x] Pull `spack.architecture.default_arch()` out of
`spack.architecture.sys_type()` so we can get an `Arch` instead of
a string.
* Loosen Axom's variants, add shared variant for axom, fix clang/xlf rpath'ing problem on blueos
* Fix flake8
* Add main branch to list of known git branches
fixes #18028
Since now external packages support multiple modules
the correct thing to do is to check if the name of the
*first* module to be loaded contains the string "cray"
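In sketch form (attribute access simplified):
```
def uses_cray_module(spec):
    # Only the first module decides whether this external is Cray-provided.
    modules = spec.external_modules or []
    return bool(modules) and 'cray' in modules[0]
```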
`cmake @3.16.3` is the version provided by Ubuntu 20.04. Adding this version here avoids the warning
```
==> Warning: Missing a source id for cmake@3.16.3
```
when using the system `cmake`.
* Spack recipes for ROCm Stage 1 Build components
* fix flake8 errors
* fixes for flake8 errors
* Add a patch for cmake 3.x support
* Fix rpath issue where hsa-rocr-dev does not allow it to be filled in by spack
* Remove inherited cmake args from comgr
* Make hsakmt-roct compile: no -Werror because of a const cast in numa, and actually add the numa dependency
* Remove redundant cmake args which is inherited
* Fix some dependencies
* Fix some python 2.x compatibilities
* Add amd gpu targets to rocfft
* Make comgr a link dep of rocm-dbgapi and remove redundant cmake args
* Remove redundant cmake args
* Remove more redundant cmake args
* Final redundant args
* Use cmake 3.x instead of a fixed version
* Remove random variable
* Use installed rocclr instead of nonexisting directory
* Don't build outside the staging folder
* Deploy some missing cmake target file
* Formatting
* Fix target list
* Properly handle the rocclr dependency
* Formatting
* Fix vermin test
* Make all 3.5.0 packages depend exactly on each other
* Add a few missing link dependencies
* Fix flake8
* Remove some other redundant flags
* Add gcc install prefix for gcc builds of llvm-amdgpu
* review changes for the spack recipes
* Do not hard-code versions
* Fix atmi install
- no more relative rpaths outside of install directory (required patch)
- fix build -> link dependencies
- remove unused build dependency
* Fix flake8 errors
* Remove unused variable and make things python 2.x compatible
* Fix flake8
* Move compiler config from rocfft -> hipcc
* Remove redundant dependency on fftw-api
* Remove redundant import
* Avoid hitting the ROCM_PATH variable altogether with a patch; also just fill in all variables
* Add missing deps z3, zlib and ncurses+termlib to llvm-amdgpu
* Fix perl shebang and add dep
* Fix typo and patch HIP_CLANG_ROOT detection in hip's cmake files
* fixing build failure due to z3 and adding zlib for rocgdb
* new changes to add z3,curses dependency for llvm-amdgpu
* fix flake8 error
Co-authored-by: root <root@localhost.localdomain>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* Libtool: add spack external find support
* Less specific regex
* match -> search
* Clarify that min returns first alphabetically, not shortest
* Simplify version determination
The modifications in 193e8333fa
introduced a bug in the loading of compiler modules, since a
function that was expecting a list of strings was just getting
a string.
This commit fixes the bug and adds an assertion to verify the
prerequisite of the function.
* add py-ufl package from fenics
* add py-fiat package from fenics
* add py-ffcx package from fenics
* add py-dijitso package from fenics
* add dolfinx library from fenics
* amend ffcx to use ufl and fiat master branches
* setup variants complex and int64 of dolfinx
* add dolfinx python library as package
* add test dependencies to py-dolfinx
* remove broken doc variant
* remove test dependencies from py-dolfinx
* flake8 fixes to dolfinx and py-dolfinx
* make sure dolfinx cmake picks up the correct python version
* list build phases in py-dolfinx package
* remove unnecessary package url
* make pkgconf a build dependency
* make all python dependencies build+run
* py-ffcx needs py-setuptools to be a build/run dependency to support ffcx executable
* remove unnecessary variants from dolfinx
* add missing dependencies to py-dijitso
* remove stray line from py-dolfinx
* simplify definition of build_directory in py-dolfinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-ffcx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-fiat
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-ufl
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* rename py-fiat to py-fenics-fiat
* rename py-ufl to py-fenics-ufl
* fix error in depends_on(petsc) definition
* add missing dep on numpy to py-fenics-fiat
* specify python@3.8: as requirement for all fenics components
* use tuples rather than list for depends_on type=
* specify eigen@3.3.7: as dependency for dolfinx
* add js947 and chrisrichardson as maintainers for the fenics packages
* remove scipy dependency from py-dolfinx
* rename package py-ffcx -> py-fenics-ffcx
* rename package dolfinx -> fenics-dolfinx
* rename package py-dolfinx -> py-fenics-dolfinx
* remove pointless URL from py-fenics-dolfinx package
* rename package py-dijitso -> py-fenics-dijitso
* formatting
* remove unnecessary cmake args from fenics-dolfinx
* revert py-fenics-fiat python version to 3:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* revert py-fenics-ufl python version to 3.5:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add conflict to fenics-dolfinx for C++17 support
* revert py-fenics-ffcx python version to 3.5:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pbbam: fix build error
* Update var/spack/repos/builtin/packages/pbbam/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Packages can implement `determine_version` to support detection
of external instances of a package. This is generally easier
than implementing `determine_spec_details`. The API for
`determine_version` is similar: for example you can return
`None` to indicate that an executable is not an instance
of a package.
Users may implement a `determine_variants` method for a package.
When doing external detection, executables are grouped by version
and each group results in a single invocation of `determine_variants`
for the associated spec. The method returns a string specifying
the variants for the package. The method may additionally return
a dictionary representing extra attributes for the package.
These will be stored in the spec yaml and can be retrieved
from `self.spec.extra_attributes`.
The Spack GCC package has been updated with an implementation
of `determine_variants` which adds the following extra
attributes to the package: c, cxx, fortran
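In rough outline, a package opting into detection can look like the sketch below; the `foo` package, its output format, and the chosen variants are hypothetical (the real GCC implementation probes the c, cxx, and fortran frontends):
```
import re
import subprocess


class Foo(object):
    # Patterns used to locate candidate binaries on the PATH.
    executables = [r'^foo$']

    @classmethod
    def determine_version(cls, exe):
        out = subprocess.run([exe, '--version'],
                             capture_output=True, text=True).stdout
        match = re.search(r'foo version (\S+)', out)
        # Returning None means "this executable is not an instance of foo".
        return match.group(1) if match else None

    @classmethod
    def determine_variants(cls, exes, version_str):
        variants = '+shared'                  # variant string for the spec
        extra = {'binaries': sorted(exes)}    # stored as extra_attributes
        return variants, extra
```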
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).
With this update, Spack cannot modify existing configs/environments
without updating them (e.g. `spack config add` will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
Older versions do not compile correctly. New users should use 2.004,
not any of the older versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
FlatCC has been removed from UnifyFS as a dependency on the develop
branch and for future releases.
spath is now an optional dependency for UnifyFS to normalize relative
paths provided by the user.
The tcl module for r-dorng will fail to load due to the [] characters in
the description. This happens for Tcl formatted modules loaded by Lmod
at least.
```
module load r-dorng-1.7.1-gcc-9.2.0-wtq7bne
Lmod has detected the following error: .../spack/share/spack/modules/linux-centos7-broadwell/r-dorng-1.7.1-gcc-9.2.0-wtq7bne:(r-dorng-1.7.1-gcc-9.2.0-wtq7bne):
invalid command name "L'Ecuyer"
```
Split text for short and long descriptions.
* Add variants to petsc
This PR adds the following variants to the petsc package
- gmp
- jpeg
- libpng
- giflib
- mpfr
- netcdf
- pnetcdf (parallel-netcdf)
- moab
- eigen
- random123
- exodusii
- mstk
- cgns
- memkind
- muparser
- p4est
- saws
- libyaml
- zstd
* Fix flake8 errors
* Additional changes to Petsc recipe
This commit addresses the issues with dependencies that were brought up
in the comments. There are also a few other enhancements.
- the language of the new variant descriptions was changed to be more
consistent with what was already in the recipe
- an explicit '+mpi' was added to the depends_on('hypre...') directives
- an explicit '+mpi' was added to the depends_on('trilinos...')
directives
- the run time error checking for '~mpi' was replaced with 'conflicts()'
directives that will cause the install to fail sooner
- additional variants that were 'parallel only' were added to the '~mpi'
check
* Set the '~mpi`' conflicts msg to a variable
* Changing raja, chai, and umpire packages so all will compile with each other.
* Need a CUDA version of CHAI when compiling with raja+cuda+chai
* Updating checks for commit.
* Adding comments explaining why chai+umpire tests were disabled
* Reactivating tests for CHAI and Umpire
* reordering versions
* Unified handling of Cuda Arch
* Adding latest versions
* Unused/Untested: removed
* Aesthetic and test mode in Chai
* Unified handling of Cuda Arch
* Using 'ON' consistently, instead of 'On'
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix, suggestion and patch:
Chai depends on RAJA, not the other way.
Apply suggested master-main version mapping.
Add Umpire version 3.0.0 and patch.
Co-authored-by: Robert Blake <blake14@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package - REDItools
This PR adds the REDItools package, along with a new package dependency,
py-fisher. This contains a patch generated from the python 2to3 script
as well as some other fixes. I am not sure if the project is ready to
support python-3 yet but I submitted the other patches upstream.
* Update var/spack/repos/builtin/packages/reditools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new version 2020.3; new variants nosuffix and fft; version selections for plumed
* fixed too long lines
* fixed whitespaces
* revised fft interface according to @haampie 's suggestions
Co-authored-by: lu64bag3 <gerald.mathias@lrz.de>
* new package Wonton
* remove the flecsi variant because flecsi-sp does not have a spackage
* fix url, clean up whitespaces
* formatting
* put in explicit else clauses for variants in CMake section because CMake's behavior is system-dependent
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* update version: intel packages daal, ipp, mkl-dnn, mkl, mpi, parallel-studio, pin, tbb and makes url parameter consistent and always use single quote.
* Fixes a typo with one of the sha256 checksums.
* Adds version entries for new versions of Intel packages.
* Adds hashes for new versions of Intel packages.
* Adds missing hash of Intel compiler.
* Adds the newest version of Intel MPI 2019.8.
* Fixes hash for intel-parallel-studio and intel-tbb.
* Fixes version number of Intel MPI.
* Adds GPI-2 package.
* Fixes flake8 noticed issues.
* Second try to fix flake8 comment
* Fixes some issues adamjstewart noticed.
* Fixes package according to flake8 complaints.
* Fixes flake8 issue.
* Renames next version to master and removes master.
* Adds maintainer into gpi-2 and returns master branch for the git
repository.
Co-authored-by: Robert Mijakovic <robert.mijakovic@lrz.de>
* Dyninst: 10.2 release
* Use 'elf' instead of 'elfutils'
* Use v10.2.0 tag
* Change minimum elfutils to 0.173
* Move STERILE_BUILD option to correct cmake_args
* make a sacrifice to the flake8 gods
* Add maintainer
* Revert to using elf@1 for elfutils
* Allow all ParaView versions to depend on Python 2
* Keep conflict for 5.9 and up with python 2
* Fix line too long
* Don't use backslash
* Try fixing indent
* Clean logic for python cmake flags
* Try fixing indent
Previously the python package for vim used static linking, and depending
on what system libraries were available and linked against could cause
symbol conflicts for python leading to segfaults in loading c modules in
the standard library (i.e. heapq). This patch addresses the issue by
dynamically linking them.
If you use git to clone a repository over ssh, git transfers control to the
ssh binary available on your path. If that ssh binary was built against a
contradictory version of openssl/kerberos, then your git commands will
fail.
* sirius, update versions, fixes, add missing options
- sirius/spfft: depend on fftw-api
- cleanup +shared option
- sirius add option for memory pool
- sirius add version 6.5.3 and 6.5.4
- sirius: add spfft dependency for @master, @develop
* add nlcglib package
Robust wave function optimization for SIRIUS.
* add q-e-sirius package
based on q-e package
* Update var/spack/repos/builtin/packages/q-e-sirius/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* nlcglib: pass nvcc_wrapper to cmake
* Add 6.5.6
* Make flake8 happy
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* canu: fix depends issue & using java instead of jdk
* Update var/spack/repos/builtin/packages/canu/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* typo error correction
* Adding recipe for `colorspacious` (a python package)
* Copyright year changed
* revert last commit on basic_usage.rst
* better with a good description
* fix according to failed test
* Update var/spack/repos/builtin/packages/py-colorspacious/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-colorspacious/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Nightly builds with MacOS started failing again
due to an upgrade of the default virtual environment
that now uses Python 3.8
This makes us hit #14102 and every build fails. This
commit should be reverted along with the fix to #14102.
* Additional versions of py-jsonschema.
* Tweak to force Maestro to use jsonschema@3.2.0:
* Correction of whitespace (flake8 error).
* Merges importlib's Python version conditions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new versions of spfft
* Extend CudaPackage and use virtual fftw package
Co-authored-by: Simon Pintarelli <simon.pintarelli@cscs.ch>
* Add CUDA 11 compatibility note
* Depend on older cuda <= 10 for spfft <= 0.9.11
Co-authored-by: Simon Pintarelli <simon.pintarelli@cscs.ch>
* introduce logic for boost+context dependency and generic_context variant
* fix OTF2 instrumentation minor problem
* default coroutine impl depends on platform
* fix flake8
* add reference to ~generic_coroutines conflict info
* Update var/spack/repos/builtin/packages/hpx/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Octopus: Add support for version 10.0.
Fix compilation when using the MKL as a provider for BLAS/LAPACK. Octopus will now detect that the MKL also provides the FFTW API and will refuse to compile when both the FFTW library and the MKL are given to the configure script.
* Octopus: Add supported version range for libxc.
* berkeley-db: add version 18.1.40, update build options in package
* combine adamjstewart's changes
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] New package
* [kassiopeia] Remove master branch, update dependencies
* Update var/spack/repos/builtin/packages/kassiopeia/package.py
Unable to test since I do not have a license to intel-parallel-studio, but I see no reason why it would not work.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] depends_on mpi
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] cmake_args with self.spec.satisfies and elses
* [kassiopeia] args.extend -> args.append
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* h5py: explicitly specify version
hdf5@1.10.5 on Cray is wrongly detected as 1.8.4.
* Update var/spack/repos/builtin/packages/py-h5py/package.py
Thanks. Also had this first, then CI was complaining about line length ...
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Before:
```console
$ licensee diff --license mit LICENSE-MIT
Comparing to MIT License:
Input Length: 1092
License length: 1020
Similarity: 92.46%
diff --git a/LICENSE b/LICENSE
index 0ce42af..be0ff1c 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,3 +1,4 @@
{+spack project developers. see the top-level copyright file for details.+}
permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "software"), to deal in
the software without restriction, including without limitation the rights to
```
After:
```console
$ licensee diff --license mit LICENSE-MIT
Comparing to MIT License:
Input Length: 1020
License length: 1020
Similarity: 100.00%
Exact match!
```
This gets us a 100% license match from GitHub's `licensee` tool.
CI is currently failing on `brew update` with the error:
```
Error: Cannot install bazelisk because conflicting formulae are installed.
bazel: because Bazelisk replaces the bazel binary
Please `brew unlink bazel` before continuing.
Unlinking removes a formula's symlinks from /usr/local. You can
link the formula again after the install finishes. You can --force this
install, but the build may fail or cause obscure side effects in the
resulting software.
```
Avoiding:
```
$ brew update
$ brew upgrade
```
solves the issue by preventing the risk of conflicting formulae
* Update LBANN, Hydrogen, Aluminum to inherit CudaPackage
* Update CMake constraints: LBANN, Hydrogen, and Aluminum now require
cmake@3.16.0: (better support for pthreads with nvcc)
* Aluminum: add variants for host-enabled MPI and RMA features in a
MPI-GPU RDMA-enabled library
* NCCL: add versions 2.7.5-1, 2.7.6-1, and 2.7.8-1
* Hydrogen: add version 1.4.0
* LBANN: add versions 0.99 and 0.100
* Aluminum: add versions 0.4.0 and 0.5.0
* new package(s): py-gql
and related dependencies:
py-aiohttp
py-async-timeout
py-graphql-core
py-idna-ssl
py-multidict
py-websockets
py-yarl
new versions:
py-requests
* fixes
Co-authored-by: Andrew W Elble <aweits@skl-a-00.rc.rit.edu>
* NWChem 7.0.0
* add python2 for 6.8.1. removed 6.8 https://github.com/spack/spack/pull/17779#discussion_r462700413
* nwchem 6.8.1 breaks with gcc 10 and later
* restored extra python bits for version 6.8.1. add env. definition of basis libraries
* changes for flake8
* url fixed
* prevent 6.8.1 being compiled with gcc 10
* Ferret: Add missing dependency with curl.
* Ferret: Don't force using the static version of libgfortran.
* Ferret: Ensure Spack's compiler wrappers are used.
This allows properly setting the rpaths.
* Ferret: Add support for versions 7.3 to 7.6.
* Ferret: Add a variant to install Ferret standard datasets.
* Ferret: Define some useful runtime environnement variables.
* Ferret: Fix flake8.
Also add myself as a maintainer as suggested by @alalazo.
As discussed in issue #17638, kahip fails to build when
scons depends on python@3.
This converts the print statements in various SConstruct files
into python3 friendly print functions.
I found most of the affected SConstruct files in both @2.00 and
the later versions I found on the web, but some files were only in @2.00.
I split the patches into two files for that reason, but have not
tried the later versions.
* LAMMPS: Use LATTE 1.2.2 starting with version 20200602.
Version 20200602 and later require Latte 1.2.2. This caused the internal Latte distribution to be used instead of the Latte install provided by Spack.
* LAMMPS: Add new versions 20200630 and 20200721.
* dcmtk: fixed type error
* Update var/spack/repos/builtin/packages/dcmtk/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: IDL
* Update var/spack/repos/builtin/packages/idl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/idl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added license header and changed url_for_version to just url
* removed unused imports, addressed comments
* removed trailing whitespace on line 14
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
During configure lhapdf5 searches for python. On one system
I tested on (ubuntu 19.10) it finds a system-installed python3
and fails to create the python extension.
The variant is named to make it explicit that this is only a python2 extension.
* openfoam: use MPI 'headers' property (fixes #17730)
* openfoam: +spdp variant, usable for OpenFOAM 1906 and later
in contrast to +float32, which uses single-precision throughout, +spdp
uses the following:
- single-precision for most internals
- double-precision for linear solver
* openfoam: add m4 as build dependency
* scotch: update to 6.0.9 released Oct 2019
Co-authored-by: Mark Olesen <Mark.Olesen@esi-group.com>
Libunwind already builds a shared library. The +pic variant adds the
compiler pic flag to the static archive so that it can be linked into
another shared library.
Eospac's build breaks on gcc@10: due to its dependence on -fcommon behavior
and GNU changing the default to -fno-common. Added a conditional argument to
support bleeding-edge compilers.
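One common way to phrase such a conditional argument in a recipe (a sketch; the actual eospac change may pass the flag through its make arguments instead):
```
from spack import *


class Eospac(Package):
    def flag_handler(self, name, flags):
        # gcc 10 switched the default to -fno-common; restore the old
        # tentative-definition behavior only where it is needed.
        if name == 'cflags' and self.spec.satisfies('%gcc@10:'):
            flags.append('-fcommon')
        return (flags, None, None)
```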
Relative paths in views have been broken since #17608 or earlier.
- [x] Fix by passing base path of the environment into the `ViewDescriptor`.
Relative paths are calculated from this path.
Relative paths in views have been broken since #17608 or earlier.
- [x] Fix by passing base path of the environment into the `ViewDescriptor`.
Relative paths are calculated from this path.
This PR adds the r-dss package and the r-bsseq package, also new, as a
dependency. This includes the latest versions, which required updates to
the following dependencies:
- r-biocgenerics
- r-iranges
- r-s4vectors
- r-summarizedexperiment
Older versions of r-dss and r-bsseq are included as well to ensure
compatibility with older versions of the above dependencies.
* add tutorial setup script to share/spack
* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
- now works on t2.micro, t2.small, and m instances
- apt-get needs retries around it to work
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* add tutorial setup script to share/spack
* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
- now works on t2.micro, t2.small, and m instances
- apt-get needs retries around it to work
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
A bug was introduced in #13100 where ChildErrors would be redundantly
printed when raised during a build. We should eventually revisit error
handling in builds and figure out what the right separation of
responsibilities is for distributed builds, but for now just skip
printing.
- [x] SpackErrors were designed to be printed by the forked process, not
by the parent, so check if they've already been printed.
- [x] update tests
A bug was introduced in #13100 where ChildErrors would be redundantly
printed when raised during a build. We should eventually revisit error
handling in builds and figure out what the right separation of
responsibilities is for distributed builds, but for now just skip
printing.
- [x] SpackErrors were designed to be printed by the forked process, not
by the parent, so check if they've already been printed.
- [x] update tests
* WHIZARD: add versions 2.8.4 and 2.8.3
* New package: LCIO
* WHIZARD: add optional dependency on LCIO
* WHIZARD: add optional dependency on Openloops
* WHIZARD: allow building with either hepmc or hepmc3 dependencies
* Openloops: set process_lib_dir in configure
* Openloops: fix reference to variant
astropy 3.2.1 fails to build with python 3.8.3 with
errors similar to this:
astropy/stats/_stats.c:318:11: error: too many arguments to function 'PyCode_New'
PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
These are files that are generated by cython, but are included in the
tarball. Since there's apparently been an API change to PyCode_New, they will
need to be re-cythonized to compile correctly.
Fixes#17299
Cray Shasta systems appear to use an unmodified Sles or other Linux operating system on the backend (like Cray "Cluster" systems and unlike Cray "XC40" systems that use CNL).
This updates the CNL version detection to properly note that this is the underlying OS instead of CNL and delegate to LinuxDistro.
* environment-views: fix bug where missing recipe/repo breaks env commands
When a recipe or a repo has been removed from Spack and an environment
is active, it causes the view activation to crash Spack before any
commands can be executed. Further, the error message it not at all clear
in explaining the issue.
This forces view regeneration to always start from scratch to avoid the
missing package recipes, and defaults add_view=False in main for views activated
by the `spack -e` option.
* add messages to env status and deactivate
Warn users that a view may be corrupt when deactivating an environment
or checking its status while active. Updated message for activate.
* tests for view checking
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* switch from bool to int debug levels
* Added debug options and changed lock logging to use more detailed values
* Limit installer and timestamp PIDs to standard debug output
* Reduced verbosity of fetch/stage/install output, changing most to debug level 1
* Combine lock log methods; change build process install to debug
* Changed binary cache install messages to extraction messages
* bugfix: make compiler preferences slightly saner
This fixes two issues with the way we currently select compilers.
If multiple compilers have the same "id" (os/arch/compiler/version), we
currently prefer them by picking this one with the most supported
languages. This can have some surprising effects:
* If you have no `gfortran` but you have `gfortran-8`, you can detect
`clang` that has no configured C compiler -- just `f77` and `f90`. This
happens frequently on macOS with homebrew. The bug is due to some
kludginess about the way we detect mixed `clang`/`gfortran`.
* We can prefer suffixed versions of compilers to non-suffixed versions,
which means we may select `clang-gpu` over `clang` at LLNL. But,
`clang-gpu` is not actually clang, and it can break builds. We should
prefer `clang` if it's available.
- [x] prefer compilers that have C compilers and prefer no name variation
to variation.
* tests: add test for which()
Apple's gcc is really clang. We previously ignored it by default but
there was a regression in #17110.
Originally we checked for all clang versions with this, but I know of
none other than `gcc` on macos that actually do this, so limiting to
`apple-clang` should be ok.
- [x] Fix check for `apple-clang` in `gcc.py` to use version detection
from `spack.compilers.apple_clang`
The `spack-build-env.txt` file may contains many secrets, but the obvious one is the private signing key in `SPACK_SIGNING_KEY`. This file is nonetheless uploaded as a build artifact to gitlab. For anyone running CI on a public version of Gitlab this is a major security problem. Even for private Gitlab instances it can be very problematic.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* llvm-flang Only build offload code if cuda enabled
The current version executes `cmake(*args)` always as part of the post install. If device offload is not part of the build, this results in referencing `args` without it being set and the error:
```
==> Error: UnboundLocalError: local variable 'args' referenced before assignment
```
Looking at prevoous version of `llvm-package.py` this whole routine appears to be only required for offload, some indent `cmake/make/install` to be under the `if`.
* Update package.py
Add comment
Fixes#17299
Cray Shasta systems appear to use an unmodified Sles or other Linux operating system on the backend (like Cray "Cluster" systems and unlike Cray "XC40" systems that use CNL).
This updates the CNL version detection to properly note that this is the underlying OS instead of CNL and delegate to LinuxDistro.
* environment-views: fix bug where missing recipe/repo breaks env commands
When a recipe or a repo has been removed from Spack and an environment
is active, it causes the view activation to crash Spack before any
commands can be executed. Further, the error message it not at all clear
in explaining the issue.
This forces view regeneration to always start from scratch to avoid the
missing package recipes, and defaults add_view=False in main for views activated
by the `spack -e` option.
* add messages to env status and deactivate
Warn users that a view may be corrupt when deactivating an environment
or checking its status while active. Updated message for activate.
* tests for view checking
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* switch from bool to int debug levels
* Added debug options and changed lock logging to use more detailed values
* Limit installer and timestamp PIDs to standard debug output
* Reduced verbosity of fetch/stage/install output, changing most to debug level 1
* Combine lock log methods; change build process install to debug
* Changed binary cache install messages to extraction messages
* [M4] Add missing compiler flag on Cray Compiler
The new version of the Cray Compiler are based on Clang, which means we
need to add the same LDFLAG as other clang environments.
* bugfix: make compiler preferences slightly saner
This fixes two issues with the way we currently select compilers.
If multiple compilers have the same "id" (os/arch/compiler/version), we
currently prefer them by picking this one with the most supported
languages. This can have some surprising effects:
* If you have no `gfortran` but you have `gfortran-8`, you can detect
`clang` that has no configured C compiler -- just `f77` and `f90`. This
happens frequently on macOS with homebrew. The bug is due to some
kludginess about the way we detect mixed `clang`/`gfortran`.
* We can prefer suffixed versions of compilers to non-suffixed versions,
which means we may select `clang-gpu` over `clang` at LLNL. But,
`clang-gpu` is not actually clang, and it can break builds. We should
prefer `clang` if it's available.
- [x] prefer compilers that have C compilers and prefer no name variation
to variation.
* tests: add test for which()
Apple's gcc is really clang. We previously ignored it by default but
there was a regression in #17110.
Originally we checked for all clang versions with this, but I know of
none other than `gcc` on macos that actually do this, so limiting to
`apple-clang` should be ok.
- [x] Fix check for `apple-clang` in `gcc.py` to use version detection
from `spack.compilers.apple_clang`
* MacOS build tests
- Run on PR that modify the YAML file of the workflow
- Don't clone Spack, since we are in the Spack repo now
* Try to add opengl to configuration to build jupyter
* fixup
Spack did not support usage of the `--config-scope` option in
combination with an environment: In `lib/spack/spack/main.py`,
`spack.config.command_line_scopes` is set equal to any config scopes
passed by the `--config-scope` option. However, this is done after
activating an environment. In the process of activating an environment,
the `spack.config.config` singleton is instantiated, so later setting of
`spack.config.command_line_scopes` is ignored.
This commit sets command line scopes before activating an environment to
ensure that they are included in the configuration.
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
* Add new versions of texlive and poppler.
* Add new versions of harfbuzz which also relocated source location to github.
* Update var/spack/repos/builtin/packages/harfbuzz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Restore deleted url line in harfbuzz.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Addition of Chainmap to satisfy Maestro dependency.
* Additional versions and dependencies for Maestro.
* Updated URL to point to pypi.
* Updates to chainmap hashes.
* Updates to pull version from PyPi.
* Corrections to flake8 errors.
* Stricter restrictions on Python versioning.
Maestro actually supports Python 3.5 and later.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Only install chainmap for Python2 versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removal of setuptools python cond.
* Removal of version constaints on setuptools.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
GCC 4.8.5 on rhel6:
```
utext.cpp:572:5: error: 'max_align_t' in namespace 'std' does not name a
type
std::max_align_t extension;
^
utext.cpp: In function 'UText* utext_setup_67(UText*, int32_t,
UErrorCode*)':
utext.cpp:587:73: error: 'max_align_t' is not a member of 'std'
spaceRequired = sizeof(ExtendedUText) + extraSpace -
sizeof(std::max_align_t);
^
utext.cpp:587:73: note: suggested alternative:
In file included from
/projects/spack/opt/spack/gcc-4.4.7/gcc/6ln2t7b/include/c++/4.8.5/cstddef:42:0,
from utext.cpp:19:
/projects/spack/opt/spack/gcc-4.4.7/gcc/6ln2t7b/lib/gcc/x86_64-unknown-linux-gnu/4.8.5/include/stddef.h:
425:3: note: 'max_align_t'
} max_align_t;
^
utext.cpp:598:57: error: 'struct ExtendedUText' has no member named
'extension'
ut->pExtra = &((ExtendedUText *)ut)->extension;
^
g++ ... loadednormalizer2impl.cpp
g++ ... chariter.cpp
```
The `spack-build-env.txt` file may contains many secrets, but the obvious one is the private signing key in `SPACK_SIGNING_KEY`. This file is nonetheless uploaded as a build artifact to gitlab. For anyone running CI on a public version of Gitlab this is a major security problem. Even for private Gitlab instances it can be very problematic.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* Initial version of PySCF.
* Add master branch to xcfun library
* PySCF only compatible with specific commit of xcfun library
* Update var/spack/repos/builtin/packages/py-pyscf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyscf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Revert "PySCF only compatible with specific commit of xcfun library"
This reverts commit 8296005400.
* Revert "Add master branch to xcfun library"
This reverts commit f2b6998931.
* Issues conflict for xcfun library version rather than relying on a random commit.
* Add version xcfun 2.0.0a2 which is needed by PySCF.
* Remove xcfun conflict and express dependency more explictly. Add comment as to why this is necessary.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* wannier90: add versions 3.0.0 and 3.1.0 and 'shared variant'
Added versions 3.0.0 and 3.1.0
Added shared variant
Added url_for_version function as versions less than 3 are from the
wannier.org site and versions 3 and up are from github.com
Added the MPI libraries to the list of libs substituted into the make.sys file
in place of @LIBS
Made it possible to build a shared object version of the library for versions
< 3 by filtering the src/Makefile.2 file (based off of the patch from a src rpm
from RHEL for version 2.0.1)
Create a modules directory in the install prefix root directory and copy the
Fortran .mod files there.
Set the MPIFC variable to the Spack Fortran MPI compiler wrapper.
* abinit: added 'wannier90' variant which enables building abinit with wannier90
Added wannier90 variant
Made abinit depend on the shared object ('shared') variant of
wannier90 if the wannier90 variant is selected
Add configure args for wannier90 libs, includes, and binaries and to
set MPIFC
set the dft-flavor to wannier90 when wannier90 is enabled and only
set the dft flavor to 'atompaw+libxc' if wannier90 is not selected
* Update var/spack/repos/builtin/packages/abinit/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* incorporated bbecker's suggestion for making the strings less ugly!
* incorporated bbecker's suggestion to fix the logic for picking which
"DFT flavor" configure argument.
If the wannier variant is enabled, it passes --with-dft-flavor=wannier90
to configure, otherwise it passes --with-dft-flavor=atompaw+libxc to configure
* Changed to using plain strings
* Fixed version tests
* incorporated @adamjstewart's fix for testing if the major version is > 2
* incorporated @adamjstewart's fix to check if mpi is enabled and
only set the MPIFC variable if it is.
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Only set MPIFC if '+mpi' is set
* incorporated fixes from @adamjstewart including:
- using the string=True argument to filter_file (and removed the unneeded
escapes)
- changing the url to the github location
- fixing the version checks
- building a libwannier.dylib on darwin
* incorporated fixes suggested by @adamjstewart including:
- using the string=True argument to filter_file and cleaned up the escapes
- only pass the MPIFC argument to configure when '+mpi' is set
- chaned the url to the github site for Wannier090
- fixed the version checks
- build a 'libwannier.dylib' file when building the shared variant on darwin
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* moved a configure argument from it's own '+mpi' check to under the lower one
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Cleaned up syntax as suggested by @adamjstewart
It looks *so much better* now! Thanks!
* removed unneeded import of 'find' from 'llnl.util.filesystem' package
as suggested by @adamjstewart
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* incorporated changes from @adamjstewart
changed check to "if '@:2 +shared' in spec:" instead of a nested check of '@:2' and
'+shared'
removed unneeded joins used in filter_file and spliced the list of objs directly into
the filter_file call
used the dso_suffix instead of testing for darwin to determine the name of the
shared library
* removed whitespace from blank line
* fixed bug with '../../wannier90.x: .*' not being treated as a regexp. Thanks Adam!
* fixed missing whitespace when modifying Makefile.2
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: OpenLoops
* install() for openloops
* Working OpenLoops recipe
* Flake-8
* Only copy collection file if required; add clarification to num_jobs
* Add __future__ import just in case
* Fix missing space
* Remove __future__ import
* Changes from review, pt. 1
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Replace print() with write()
* Flake-8
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* vasp: New package.
* Remove unneeded `#noqa`
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removed a completely needless tty.debug()
* Add compiler conflicts() and minute fixes
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add apcomp package
* add maintainers
* fake8
* Update var/spack/repos/builtin/packages/apcomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* review suggestions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* icu4c: Add new versions for older releases.
The old URLs for versions 60.1, 58.2 and 57.1 do not work anymore so add the versions available on Github.
The old versions are kept for reference (cf. #15896).
* icu4c: Add versions 66.1 and 67.1.
* icu4c: Fix compilation of versions 58 and 59 with recent glibc.
This matches the current latest version of protobuf in Spack.
Generally the version of py-protobuf and protobuf should match,
but this constraint is not currently recorded in py-protobuf.
For normal users, `-o` or `--no-same-owner` (GNU extension) is
the default behavior, but for the root user, `tar` attempts to preserve
the ownership from the tarball.
This makes `tar` use `-o` all the time. This should improve untarring
files owned by users not available in rootless Docker builds.
* gdk-pixbuf: Add new stable versions.
* gdk-pixbuf: Add a missing dependency with libx11.
Also add a variant disabled by default to make it optional since it is considered deprecated
(cf. 3362e94c25).
* Added new versions to magics and began to set not-so-optional netcdf dependency
* Added enforced netcdf dependency
* Fix also works for version 4.1.0
* llvm-flang Only build offload code if cuda enabled
The current version executes `cmake(*args)` always as part of the post install. If device offload is not part of the build, this results in referencing `args` without it being set and the error:
```
==> Error: UnboundLocalError: local variable 'args' referenced before assignment
```
Looking at prevoous version of `llvm-package.py` this whole routine appears to be only required for offload, some indent `cmake/make/install` to be under the `if`.
* Update package.py
Add comment
The error message was not updated when the behavior of Spack environments
was changed to not automatically activate the local environment in #17258.
The previous error message no longer makes sense.
When Spack installs a package, it stores repository package.py files
for it and all of its dependencies - any package with a Spack metadata
directory in its installation prefix.
It turns out this was too broad: this ends up including external
packages installed by Spack (e.g. installed by another Spack instance).
Currently Spack doesn't store the namespace properly for such packages,
so even though the package file could be fetched from the external,
Spack is unable to locate it.
This commit avoids the issue by skipping any attempt to locate and copy
from the package repository of externals, regardless of whether they
have a Spack repo directory.
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.
Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.
This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.
- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
* fix binutils deptype for gcc
binutils needs to be a run dependency of gcc
* Fix gcc+binutils build on RHEL7+
static-libstdc++ is not available with system gcc.
Anyway, as it is for bootstraping, we do not really care depending on
a shared libstdc++.
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
Spack was attempting to calculate abspath on the located config.guess
path even when it was not found (None); this commit skips the abspath
calculation when config.guess is not found.
The error message was not updated when the behavior of Spack environments
was changed to not automatically activate the local environment in #17258.
The previous error message no longer makes sense.
* bbcp: Update the URLs to use HTTPS.
The HTTP URLs do not work anymore.
* bbcp: Add missing libnsl dependency.
* bbcp: Rename the git-based version to match the branch name.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pysam: add LDFLAGS to curl
* Update var/spack/repos/builtin/packages/py-pysam/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: vbfnlo
* Add new package: vbfnlo
* Add recipe for looptools
* Add patch for looptools
* LoopTools: patch not needed (fixed by developers without changing version)
* Remove patch file as well
* Update package.py
* Update package.py
* Fix vbfnlo recipe for old version
Co-authored-by: iarspider <iarpsider@gmail.com>
When Spack installs a package, it stores repository package.py files
for it and all of its dependencies - any package with a Spack metadata
directory in its installation prefix.
It turns out this was too broad: this ends up including external
packages installed by Spack (e.g. installed by another Spack instance).
Currently Spack doesn't store the namespace properly for such packages,
so even though the package file could be fetched from the external,
Spack is unable to locate it.
This commit avoids the issue by skipping any attempt to locate and copy
from the package repository of externals, regardless of whether they
have a Spack repo directory.
* new package: ligra
* setup run environment
* tidy up
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`gcc` 9 and above have more warnings that break the `flatcc` build by default, because `-Werror` is enabled. This loosens the build up so that we can build with more compilers in Spack.
- [x] Add `-DFLATCC_ALLOW_WERROR=OFF` to `flatcc` CMake arguments
Co-authored-by: Frank Willmore <willmore@anl.gov>
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty which does not have a patchelf pat package. The script will skip building on macOS
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build') relying instead on
* Test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
fixes#17396
This prevents the class attribute to be inherited and
saves current maintainers from becoming the default
maintainers of every Cuda package.
We got rid of `master` after #17377, but users still want a way to get
the latest stable release without knowing its number.
We've added a `releases/latest` tag to replace what was once `master`.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Fixes#16478
This allows an uninstall to proceed even when encountering pre-uninstall
hook failures if the user chooses the --force option for the uninstall.
This also prevents post-uninstall hook failures from raising an exception,
which would terminate a sequence of uninstalls. This isn't likely essential
for #16478, but I think overall it will improve the user experience: if
the post-uninstall hook fails, there isn't much point in terminating a
sequence of spec uninstalls because at the point where the post-uninstall
hook is run, the spec has already been removed from the database (so it
will never have another chance to run).
Notes:
* When doing spack uninstall -a, certain pre/post-uninstall hooks aren't
important to run, but this isn't easy to track with the current model.
For example: if you are uninstalling a package and its extension, you
do not have to do the activation check for the extension.
* This doesn't handle the uninstallation of specs that are not in the DB,
so it may leave "dangling" specs in the installation prefix
This PR creates a new spack package for
mumax: GPU accelerated micromagnetic simulator.
This uses the current beta version because
- it is somewhat dated, ~2018
- it is the only one that supports recent GPU kernels
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.
Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.
This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.
- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
- [x] Remove references to `master` branch
- [x] Document how release branches are structured
- [x] Document how to make a major release
- [x] Document how to make a point release
- [x] Document how to do work in our release projects
* fix binutils deptype for gcc
binutils needs to be a run dependency of gcc
* Fix gcc+binutils build on RHEL7+
static-libstdc++ is not available with system gcc.
Anyway, as it is for bootstraping, we do not really care depending on
a shared libstdc++.
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
Spack was attempting to calculate abspath on the located config.guess
path even when it was not found (None); this commit skips the abspath
calculation when config.guess is not found.
* Add Rivet and YODA
* Add patches
* Flake-8
* Set level for Rivet patches
* Syntax fix
* Fix dependencies of Rivet
* Update var/spack/repos/builtin/packages/rivet/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The latest 0.3.10 version openblas changed how Fortran libraries
are detected, and this broke Fujitsu compiler support.
This (new) openblas patch addresses that issue.
* dtfbplus: New package.
* dftbplus: Addresses @adamjstewart's comments on PR #15191
* dftbplus: Fixes format() calls that slipped in previous commit.
* dftbplus: Appease flake8.
* dftbplus: Change 'url' and misc. fixes.
* Add a resource to do the job of './utils/get_opt_externals'
Also:
* Add url_for_version function
* Add Java to PATH for run environment
* Update `install` method to handle old and new version
Co-authored-by: lu64bag3 <gerald.mathias@lrz.de>
* examl +
* examl style fix
* examl flake8 fix
* Update var/spack/repos/builtin/packages/examl/package.py
using `working_dir`
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Move flake8 tests on Github Actions
* Move shell test to Github Actions
* Moved documentation build to Github Action
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
* hbase: refine url , java and version
* Update var/spack/repos/builtin/packages/hbase/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Activate environment in container file
This PR will ensure that the container recipes will build the spack
environment by first activating the environment.
* Deactivate environment before environment collection
For Singularity, the environment must be deactivated before running the
command to collect the environment variables. This is because the
environment collection uses `spack env activate`.
* share/spack/setup-env.fish file to setup environment in fish shell
* setup-env.fish testing script
* Update share/spack/setup-env.fish
Co-Authored-By: Elsa Gonsiorowski, PhD <gonsie@me.com>
* Update share/spack/qa/setup-env-test.fish
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* updates completions using `spack commands --update-completion`
* added stderr-nocaret warning
* added fish shell tests to CI system
Co-authored-by: becker33 <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elsa Gonsiorowski, PhD <gonsie@me.com>
* [py-mdanalysis] new version and added dependencies
Original commit message:
Author: Andrew Elble <aweits@rit.edu>
Date: Thu Nov 14 08:35:14 2019 -0500
mdanalysis
* [py-mdanalysis] python is type build/run
* [py-mdanalysis] updated numpy version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] updated biopython version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] updated py-griddataformats version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] gsd only required after version 1.17.0 and requires gsd@1.4.0
* [py-mdanalysis] only requires mmtf-python after version 0.16.0 and requires version 1.0.0
* [py-mdanalysis] has required py-joblib since version 0.16
* [py-mdanalysis] updated py-scipy version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] updated py-matplotlib version requirement for all listed versions of py-mdanalysis
* [py-mdanalysis] has required py-mock since version 0.18.0
* [py-mdanalysis] py-scikit-learn only required after version 0.16.0 and only for +analysis variant
* [py-mdanalysis] Reordered and reformatted for readability
* [py-mdanalysis] flake8 fixes
* [py-mdanalysis] proactively adding version 1.0.0 while I'm here since major release
* [py-mdanalysis] fixing some forgotten colons
* [ruby] fixing path to gcc such that users can use gem to install native gems to their home directory
* [ruby] working on making flake8 happier
* [ruby] Line can't really be split cleanly. Enhancing flake8's calm.
ya learn something new every day...
* [ruby] line break where requested
* [ruby] make raw string
* [ruby] only running for x86_64-linux everything else is untested
* [ruby] finding rbconfig.rb in a cross platform manner
* new package: GraphBlast
* polish
* add cuda_arch setup
* flake8
* the package requires cuda variant and dependency
* add comments
* define cuda_arch
* implement multiple and custom cuda arches
* tidy up, improve
* flake8
* improve style
* add variant description
* use patch method, add new version for latest commit building since master now fails
* remove gcc conflict, tidy up
* also indicate build range for boost
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Sinan81 <Sinan81@github>
py-mpi4py installs its header files at a difficult-to-predict location:
$prefix/lib/python-x.y/site-packages/mpi4py/include
With the new `headers` properties, dependent packages have now an easy
way to obtain this location:
spec['py-mpi4py'].headers.directories[0]
* cp2k: variant tuning `lmax` was broken
- `spack install cp2k lmax=6` now works
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cp2k/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: coin3d
* Update package.py
* Flake-8
* Update var/spack/repos/builtin/packages/coin3d/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add link-time dependencies
* Add configure flags for boost; remove version 4.0.0 (doesn't compile)
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cray: detect frontend compilers automatically
This commit permits to detect frontend compilers
automatically, with the exception of cce.
Co-authored-by: Gregory Becker <becker33.llnl.gov>
* New package: fjcontrib + new variants for fastjet
* Flake-8
* Flake-8 once more
* Update package.py
* Allow choosing which plugins to build
Build all plugins by default.
* Flake-8
* Always build all plugins
* Update package.py
Co-authored-by: iarspider <iarpsider@gmail.com>
[george.hartzell@172-16-193-97 spack-explore-docker]$ spack containerize
Running `spack containerize` with the example `spack.yaml` file fails
with an error that ends like so:
```
[...]
File "/local_scratch/hartzell/tmp/spack-explore-docker/lib/spack/external/ruamel/yaml/scanner.py", line 165, in need_more_tokens
self.stale_possible_simple_keys()
File "/local_scratch/hartzell/tmp/spack-explore-docker/lib/spack/external/ruamel/yaml/scanner.py", line 309, in stale_possible_simple_keys
"could not find expected ':'", self.get_mark())
ruamel.yaml.scanner.ScannerError: while scanning a simple key
in "/local_scratch/hartzell/tmp/spack-explore-docker/spack.yaml", line 26, column 1
could not find expected ':'
in "/local_scratch/hartzell/tmp/spack-explore-docker/spack.yaml", line 28, column 5
```
Indenting the block string fixes the problem for me.
CentOS 7,
```
$ spack --version
0.14.2-1529-ec58f28c2
```
The Geant4 cmake check requires Qt5OpenGL_FOUND, so we must require
the Qt5 +opengl variant. If not, the cmake phase fall through to Qt4
and fails due to a missing Qt4::QtGui target.
In Geant4InterfaceOptions.cmake:
```
if(Qt5Core_FOUND
AND Qt5Gui_FOUND
AND Qt5Widgets_FOUND
AND Qt5OpenGL_FOUND
AND Qt5PrintSupport_FOUND)
```
Ref: https://github.com/Geant4/geant4/blob/master/cmake/Modules/Geant4InterfaceOptions.cmake#L90
(5baee230e93612916bcea11ebf822756cfa7282c, Import Geant4 10.6.0 source tree)
* redis: add config file from source code
* Update var/spack/repos/builtin/packages/redis/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* WarpX: Development Branch
Update the name of our development branch.
* WarpX version: develop keyword
development is not a "newest"-like keyword, but `master`/`develop`/`dev` are.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Renamed: develop version
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This is a general CMake CUDA language hint to use the CXX
compiler has host compiler for NVCC. Seems like a good
default since we do not express the CUDA compiler in Spack
otherwise yet (e.g. no `self.compiler.cuda` or
`self.compiler.cudahostcxx`).
* env: no automatic activation
* Ensure ci rebuild jobs activate the environment (no longer automagic)
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* Start moving toward a json buildcache index
* Add spec and database index schemas
* Add a schema for buildcache spec.yaml files
* Provide a mode for database class to generate buildcache index
* Update db and ci tests to validate object w/ new schema
* Remove unused temporary upload-s3 command
* Use database class to generate buildcache index
* Do not generate index with each buildcache creation
* Make buildcache index mode into a couple of constructor args to Database class
* Use keyword args for _createtarball
* Parse new json index when we get specs from buildcache
Now that only one index file per mirror needs to be fetched in
order to have all the concrete specs for binaries available on the
mirror, we can just fetch and refresh the cached specs every time
instead of needing to use the '-f' flag to force re-reading.
* First fix for SPACK_DEPENDENCIES problem when doing setup
* Get rid of transitive include path in setup.
* Export SPACK_INCLUDE_DIRS into spconfig.py
* add buildcache create test
* add functionality and test to create buildcache from environment
* use env.concretized_user_specs rather than env.roots to get concretized specs, as suggested in review from becker33
* Allow `spack remove -f` and `spack uninstall` to work on matrices
Allow Environment.remove(force=True) to remove the concrete spec from the environment
even when the user spec cannot be removed because it is in a matrix.
* Ascent: ~python default
Packages that build optional python bindings do not build them by default in Spack:
https://spack.readthedocs.io/en/latest/packaging_guide.html#variant-names
This reduces long dependency trees and build times, e.g. for apps just using C/C++/Fortran bindings of a library.
* Conduit: ~python default
Packages that build optional python bindings do not build them by
default in Spack:
https://spack.readthedocs.io/en/latest/packaging_guide.html#variant-names
This reduces long dependency trees and build times, e.g. for apps
just using C/C++/Fortran bindings of a library.
* Separate Apple Clang from LLVM Clang
Apple Clang is a compiler of its own. All places
referring to "-apple" suffix have been updated.
* Hack to use a dash in 'apple-clang'
To be able to use autodoc from Sphinx we need
a valid Python name for the module that contains
Apple's Clang code.
* Updated packages to account for the existence of apple-clang
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added unit test for XCode related functions
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* short-circuit is_activated check when the extendee is installed upstream
* add test for checking activation status of packages with an extendee installed upstream
spack config add <value>: add nested value value to the configuration scope specified
spack config remove/rm: remove specified configuration from the relevant scope
The rose library uses the `strtoflt128` and `quadmath_snprintf`
functions. In order to successfully link the rose library, chill must
also link the GCC libquadmath library to resolve the two functions. This
patch changes the chill build to include this library.
Chill will also not compile unless headers from the gmp and isl
libraries are found in the includes path. Two patches - one each for gmp
and isl - modify the chill build process to add options to specify those
paths. These options follow the similar pattern as seen with BOOSTHOME
and ROSEHOME options which already exist in the chill build process.
Because of the addition of GMPHOME and ISLHOME options, build
requirements for gmp and isl are also added.
* Update package.py
* edit confliction when add package 'meam'
The USER-MEAMC fully replaces the MEAM package, which has been removed from LAMMPS after the 12 December 2018 version.
* gromacs: fix fftw dependency
Only depend on fftw+mpi when gromacs is built with mpi,
and depend on fftw~mpi otherwise.
* gromacs: fix cmake dependency
master branch depends on cmake 3.11 (as specified in CMakeLists.txt
cmake dependency is also bumped to 3.11 when fj compilers are used
in order to fix OpenMP detection.
* Some minor fixes to set_permissions() in file_permissions.py
The set_permissions() routine claims to prevent users from creating
world writable suid binaries. However, it seems to only be checking
for/preventing group writable suid binaries.
This patch modifies the routine to check for both world and group
writable suid binaries, and complain appropriately.
* permissions.py: Add test to check blocks world writable SUID files
The original test_chmod_rejects_group_writable_suid tested
that the set_permissions() function in
lib/spack/spack/util/file_permissions.py
would raise an exception if changed permission on a file with
both SUID and SGID plus sticky bits is chmod-ed to g+rwx and o+rwx.
I have modified so that more narrowly tests a file with SUID
(and no SGID or sticky bit) set is chmod-ed to g+w.
I have added a second test test_chmod_rejects_world_writable_suid
that checks that exception is raised if an SUID file is chmod-ed
to o+w
* file_permissions.py: Raise exception when try to make sgid file world writable
Updated set_permissions() in file_permissions.py to also raise
an exception if try to make an SGID file world writable. And
added corresponding unit test as well.
* Remove debugging prints from permissions.py
* Module index should not be unconditionally overwritten
Uncovered after we switched our CI to generate modules for packages
one-by-one rather than in bulk. This overwrote a complete module index
with an index with a single entry, and broke our downstream Spack
instances that needed the upstream module index.
I get the following error message, if I do not use editline from the system.
```
>> 3090 Undefined symbols for architecture x86_64:
3091 "_tgetent", referenced from:
3092 _terminal_set in libedit.a(terminal.c.o)
3093 "_tgetflag", referenced from:
3094 _terminal_set in libedit.a(terminal.c.o)
3095 "_tgetnum", referenced from:
3096 _terminal_set in libedit.a(terminal.c.o)
...
3110 _terminal_insertwrite in libedit.a(terminal.c.o)
3111 _terminal_clear_EOL in libedit.a(terminal.c.o)
3112 _terminal_clear_screen in libedit.a(terminal.c.o)
3113 _terminal_beep in libedit.a(terminal.c.o)
3114 ...
3115 ld: symbol(s) not found for architecture x86_64
```
* Added unit tests to Github Actions
* Set user e-mail and name for git tests to succeed
* Simplify setup.sh logic
* Replicate Travis script on Github Actions
* Update flags since '.' is not allowed
* Added badge, simplified workflow
* Remove pinning of coverage
* Remove unit tests run on Github Actions from Travis
Bug fix release:
4.0.4 -- June, 2020
-----------------------
- Fix a memory patcher issue intercepting shmat and shmdt. This was
observed on RHEL 8.x ppc64le (see README for more info).
- Fix an illegal access issue caught using gcc's address sanitizer.
Thanks to Georg Geiser for reporting.
- Add checks to avoid conflicts with a libevent library shipped with LSF.
- Switch to linking against libevent_core rather than libevent, if present.
- Add improved support for UCX 1.9 and later.
- Fix an ABI compatibility issue with the Fortran 2008 bindings.
Thanks to Alastair McKinstry for reporting.
- Fix an issue with rpath of /usr/lib64 when building OMPI on
systems with Lustre. Thanks to David Shrader for reporting.
- Fix a memory leak occurring with certain MPI RMA operations.
- Fix an issue with ORTE's mapping of MPI processes to resources.
Thanks to Alex Margolin for reporting and providing a fix.
- Correct a problem with incorrect error codes being returned
by OMPI MPI_T functions.
- Fix an issue with debugger tools not being able to attach
to mpirun more than once. Thanks to Gregory Lee for reporting.
- Fix an issue with the Fortran compiler wrappers when using
NAG compilers. Thanks to Peter Brady for reporting.
- Fix an issue with the ORTE ssh based process launcher at scale.
Thanks to Benjamín Hernández for reporting.
- Address an issue when using shared MPI I/O operations. OMPIO will
now successfully return from the file open statement but will
raise an error if the file system does not supported shared I/O
operations. Thanks to Romain Hild for reporting.
- Fix an issue with MPI_WIN_DETACH. Thanks to Thomas Naughton for reporting.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* Fix how the Conduit detects that the MPI compiler is the same as the
CC compiler and is more careful when it sets the MPI compilers to be
the Cray PE system compilers.
* Remove unnecessary push of the MPI compilers to the C compilers for Hydrogen.
* Added version 8.2.0, added dependency for documentation build, added variants for documentations
* Renamed variant '+man' to '+html'
Co-authored-by: Alexander Knieps <a.knieps@fz-juelich.de>
- Parallel HDF5 isn't required -- the comment seems to be about a
transitive dependency with pnetcdf.
- Boost usage should respect the variant, not automatically be reenabled
when choosing DTK.
Versions of py-tensorflow between versions 1.1 and 1.14 need a patch to
avoid an import error on the cloud package even if built without support
for the cloud package.
* bazel: Update for use with Fujitsu compiler
* bazel: Fix for use with Fujitsu compiler
* bazel: Fix flake8 error
* bazel: add conflicts setting for use with Fujitsu compiler
* fix flake8 error
* fix flake8 error
In Python 3.8, the reserved "tp_print" slot was changed from a function
pointer to a number, which broke the Python wrapping code in vtk@8
(causing "cannot convert 'std::nullptr_t' to 'Py_ssize_t'" errors in
various places). This is fixed in vtk@9.0.0.
This patch:
1) adds vtk@9.0.0
2) updates depends_on constraints to only use python@3.8: for vtk@9:
vtk@:8 depends on python@2, and vtk@8.0.1:8.9.9 depends on python@:3.7
3) Adds CMake flag VTK_PYTHON_VERSION=3 when using python@3 with vtk@9
* python: adding a distutils fix to improve build compatibility for C++ extension modules (e.g. py-matplotlib)
* python: added C/C++ distutils patches for python@3.6:3.8
Allow Spack to build with ROOT as an external dependency by setting
LD_LIBRARY_PATH: given that the external package was not built by
Spack, dependents would not be able to locate libraries using RPATHs
when running ROOT binaries.
Specified Python to be v2.7 only, as Python3 support is not currently
implemented in chill.
Update chill dependency versions for the following libraries to the
specific versions:
* rose: v0.9.13.0
* bison: v3.4.2
Both rose and iegenlib are build time dependencies, but are also run
time dependencies. Added 'run' to the build type for both dependencies.
cuda: 10.1 and onward, installers will crash if /tmp/cuda-installer.log
exists
Try to help if user owns the file, otherwise try to provide useful
info. Clean up the file post-install to try to avoid the whole issue.
The release number in the README had not been updated since we did the
relicense to Apache-2.0 OR MIT in v0.12.0. LLNL-CODE-811652 is Spack's
new LLNL release number.
* save edits
* tidy up
* Update var/spack/repos/builtin/packages/py-lmfit/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add python version constraints
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@github>
* Add pygelf Python package
* Update ReFrame package version
* Address styling remarks
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/reframe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Address PR remarks
* Remove setuptools runtime dependency
* Address PR remarks
* Address PR remarks
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
bazel uses gcc's -MF option to write dependencies to a
file. Post-compilation, bazel reads this file and makes some
determinations.
"Since gcc is given only relative paths on the command line,
non-system include paths here should never be absolute. If they
are, it's probably due to a non-hermetic #include, & we should stop
the build with an error."
Spack directly injects absolute paths, which appear in this file and
cause bazel to fail the build despite the fact that compilation
succeeded.
This patch disables this failure mode by default, and allows for it
to be turned back on by using the '~nodepfail' variant.
Whenever attempting to use any ncurses functionality within cscope, a
page fault would result within the ncurses library.
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7fad3cf in termattrs_sp () from .../lib/libncursesw.so.6
(gdb) bt
#0 0x00007ffff7fad3cf in termattrs_sp () from .../lib/libncursesw.so.6
#1 0x00007ffff7faa794 in _nc_setupscreen_sp () from .../lib/libncursesw.so.6
#2 0x00007ffff7fa614c in newterm_sp () from .../lib/libncursesw.so.6
#3 0x00007ffff7fa65b9 in newterm () from .../lib/libncursesw.so.6
#4 0x00007ffff7fa2970 in initscr () from .../lib/libncursesw.so.6
#5 0x0000000000403dc2 in main (argc=<optimized out>, argv=0x7fffffffcea8) at main.c:574
This is due to a conflict between libtinfo.so and libtinfow.so. Both are
linked into cscope:
$ ldd $(which cscope)
/bin/bash: .../lib/libtinfo.so.6: no version information available (required by /bin/bash)
linux-vdso.so.1 (0x00007fff5dbcb000)
libncursesw.so.6 => .../lib/libncursesw.so.6 (0x00007f435cc69000)
libtinfo.so.6 => .../lib/libtinfo.so.6 (0x00007f435cc2c000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f435ca29000)
libtinfow.so.6 => .../lib/libtinfow.so.6 (0x00007f435c9e8000)
/lib64/ld-linux-x86-64.so.2 (0x00007f435cca7000)
Specifically linking libtinfow.so instead of libtinfo.so resolves the
issue.
All instances of '...' above represent the path to the installed ncurses
for Spack.
* Changed the 'include' config section to use 'substitute_path_variables' to allow for Spack config variables to be used (e.g. $spack).
* Fixed a bug with 'include' section path expansion and added a test case for 'include' paths with embedded config variables.
* Adding a package for wcs.
* Turning on sbml for wcs.
* The cpp flag needs to be available for wcs.
* Wcs needs SBML to properly define the namespace.
* Flake8 fixes.
* Fixing the help string with the description.
* Changing cpp to use the new variant syntax.
* Fixing flake8 errors.
* Forgot to delete one last fixme comment.
* Spack "develop" needs to link to repo "devel"
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Robert Blake <rob.c.blake.3@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Cray: fix Blue Waters support
* pkg-config env vars needed on Blue Waters
* cray platform: fix support for user-build MPI on cray machines
* reintroduce cray environment cleaning behind cnl version guard
* cray platform: fix support for user-build MPI on cray machines
Co-authored-by: Gregory <becker33@llnl.gov>
* Add new package: py-devlib
* Update var/spack/repos/builtin/packages/py-devlib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add depends
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [hsf-cmaketools] add package
* fix formatting
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [hsf-cmaketools] remove cmake_prefix_path which is set already by spack
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
gcc 9.3.0 and glibc 2.31 (found in the base install of Ubuntu 20.04)
cause the gcc package to error during build with the error:
"size of array 'assertion_failed__####' is negative"
Previous to this fix, the error was resolved for v8.1.0 <= gcc <= v9.2.0
via two patches.
This fix backports those patches for v5.3.0 <= gcc <= v7.4.0
Potentially these patches need to be backported to versions of gcc
before v5.3.0, but other compile issues need to be resolved for earlier
versions of gcc first.
Fixes#16968
* intel-tbb: Fix for #16938 add custom libs method
Override the libs method to look for libraries of form libtbb*
(instead of inherited which looks for libintel-tbb*)
* Fixing pre-existing flake8 issues
* py-flake8: add version 3.8.2
* This version depends on different versions of py-pycodestyle
and py-pyflakes
* When built for python@:3.7, this depends on the
py-importlib-metadata backport library
* py-pycodestyle: add version 2.6.0
* py-pyflakes: add version 2.2.0
Builds can be stopped before the final install phase due to user requests. Those builds
should not be registered as installed in the database.
We had code intended to handle this but:
1. It caught the wrong type of exception
2. We were catching these exceptions to suppress them at a lower level in the stack
This PR allows the StopIteration to propagate through a ChildError, and catches it
properly. Also added to an existing test to prevent regression.
This fixes a fork bomb in `spack versions`. Recursive generation of pools
to scrape URLs in `_spider` was creating large numbers of processes.
Instead of recursively creating process pools, we now use a single
`ThreadPool` with a concurrency limit.
More on the issue: having ~10 users running at the same time spack
versions on front-end nodes caused kernel lockup due to the high number
of sockets opened (sys-admin reports ~210k distributed over 3 nodes).
Users were internal, so they had ulimit -n set to ~70k.
The forking behavior could be observed by just running:
$ spack versions boost
and checking the number of processes spawned. Number of processes
per se was not the issue, but each one of them opens a socket
which can stress `iptables`.
In the original issue the kernel watchdog was reporting:
Message from syslogd@login03 at May 19 12:01:30 ...
kernel:Watchdog CPU:110 Hard LOCKUP
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
Stratimikos is an optional dependency for our project. It depends on
Thyra, and thyra has subpackages that should be enabled based on
tpetra/epetra/epetraext.
* gnuplot: Fix for #16928
Dependency for --with-wx flag mistyped (should be wxwidgets)
* Revert "gnuplot: Fix for #16928"
This reverts commit 2b85814e5c.
* gnuplot: Fix for #16928
Dependency spec for --with-wx flag mistyped (should be wxwidgets, not
wx)
* add an --exclude-file option to 'spack mirror create' which allows a user to specify a file of specs to exclude when creating a mirror. this is anticipated to be useful especially when using the '--all' option
* allow specifying number of versions when mirroring all packages
* when mirroring all specs within an environment, include dependencies of root specs
* add '--exclude-specs' option to allow user to specify that specs should be excluded on the command line
* add test for excluding specs
fixes #12527
Mention that specs can be uninstalled by hash also in
the help message. Reference `spack gc` in case people
are looking for ways to clean the store from build time
dependencies.
Use "spec" instead of "package" to avoid ambiguity in
the error message.
* Adding a module for sbml.
* Adding support for all the languages.
* Update var/spack/repos/builtin/packages/sbml/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/sbml/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Robert Blake <rob.c.blake.3@gmail.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Unify tests for compiler command in the same file
Tests for the "spack compiler" command were previously
scattered among different files.
* Tests should use mutable_config, since they modify the compiler list
Because of the way abstract variants are implemented, the following
spec matrix does not work as intended:
```
matrix:
- [foo]
- [bar=a, bar=b]
exclude:
- bar=a
```
because abstract variants always satisfy any variant of the same
name, regardless of values.
This PR converts abstract variants to whatever their appropriate
type is before running satisfaction checks for the excludes clause
in a matrix.
fixes #16841
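A toy model of the satisfaction semantics involved (illustrative classes, not Spack's actual implementation):
```python
class AbstractVariant(object):
    # An abstract variant satisfies any variant of the same name,
    # regardless of values.
    def __init__(self, name, values):
        self.name, self.values = name, frozenset(values)

    def satisfies(self, other):
        return self.name == other.name

class SingleValuedVariant(AbstractVariant):
    # After conversion to the appropriate type, values must match too.
    def satisfies(self, other):
        return self.name == other.name and self.values == other.values

exclude = SingleValuedVariant('bar', ['a'])
assert AbstractVariant('bar', ['b']).satisfies(exclude)          # old: wrongly excluded
assert not SingleValuedVariant('bar', ['b']).satisfies(exclude)  # fixed: kept
```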
Now that the version number of GCC reached double digits, an update
to the regex is needed to recognize gcc-10 as an executable to be
inspected when searching for compilers.
The current package does not work, and several ocaml versions, such as
4.09.0, do not compile. Updated to the new ocaml version 4.10.0, which
is now a prerequisite.
* Add new version of py-astroid
* Add new version of py-lazy-object-proxy
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Update both main and data packages. Data versions have not changed
in new version.
Provide new python variant to build Python bindings for 10.6.2 and
newer. Add dependencies on python and Boost+python required by variant.
* make_link_relative: added docstring
* make_elf_binaries_relative: added docstring, unit tests
* raise_if_not_relocatable: added docstring, added unit test for exceptional case
* relocate_links: removed unused arguments, added docstring and comments
Also fixed a possible bug that was issuing a spurious
warning when a file was relocated successfully
* relocate_text: added docstring and comments, renamed arguments
* relocate_text_bin: added docstring and comments, renamed arguments, unit tests
- add future-proofing for wmake rules locations:
Accept wmake/rules/{ARCH}{COMP} or wmake/rules/{ARCH}/{COMP}
- compiler option is now '-spack' instead of 'RpathOpt', which seemed
a bit harsh on the eyes. Compilations are now named e.g.
'linux64GccDPInt32-spack', which is moderately easier to read.
- add OpenFOAM 1912, patch 200506
STYLE: adjust for new flake8 indentation rules
* openmpi: get rid of implicit system dependencies
* Python 2 compatibility.
* Rename pbspro to openpbs and revert packages.yaml.
* Remove virtual package 'sendmail'.
Problem: when calling `static_to_shared_library` on the `cray` arch, it
produces a non-sensical compiler command with no input files. For
example, when installing lua@5.2.4, it produced:
'gcc -lm -ldl -o /big-long-spack-path/liblua.so.5.2.4'
Solution: do the same thing on `cray` that is done for `linux`
fixes #16725
The dap, jna, pnetcdf, netcdf4, and ncgen4 variants added in #16047
are _not_ supported by the configure script for the netcdf-cxx4 package
(these appear to be configure args for the netcdf-c package).
* account for schema validation errors where the associated instance doesn't have a line number
* fix unrelated flake8 error (it must be fixed because this PR touches this file and the flake8 rules have been updated since the last edit to this file)
Fujitsu MPI wrapper commands aren't recognized by the 'FindMPI'
module of 'cmake'. If we are using the Fujitsu compiler and Fujitsu MPI,
specify the MPI path information explicitly.
Flux requires `build` for python and many of the python packages because
it builds python bindings. Beyond the bindings, the Flux front-end
commands now use python too, hence the `run` type. Finally, Flux's
`pymod` module is linked against the python interpreter, so the package
requires a `link` dependency on python too.
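For reference, a sketch of the directive this implies (version constraints omitted; the actual flux package may differ):
```python
depends_on('python', type=('build', 'link', 'run'))
```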
* IWYU: fix 0.14 build
The CMake patch used for 0.13 hadn't been applied to the master when
0.14 was released, and this version of IWYU requires C++14 or higher.
* Flake8
```
'/var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/s3j/spack-stage/spack-stage-valgrind-3.15.0-mtir7ubjz7mqmjbb7bogze2qm35hl4ze/spack-src/configure' '--prefix=/ornldev/code/spack/opt/spack/clang-11.0.0-apple/valgrind/mtir7ub' '--enable-only64bit' '--build=amd64-darwin'
1 error found in build log:
43 checking host system type... x86_64-pc-darwin
44 checking for a supported CPU... ok (x86_64)
45 checking for a 64-bit only build... yes
46 checking for a 32-bit only build... no
47 checking for a supported OS... ok (darwin)
48 checking for the kernel version... unsupported (18.7.0)
>> 49 configure: error: Valgrind works on Darwin 10.x, 11.x, 12.x, 13.x, 14.x, 15.x, 16.x and 17.x (Mac OS X 10.6/7/8/9/10/11 and macOS 10.12/13)
```
* ffr: add a flag to use fixed format, in which a source line may be up to 255 characters long, when building with the Fujitsu compiler.
* ffr: changed to elif.
The dev branch of UnifyFS now depends on the latest release of
GOTCHA, and will depend on future releases.
This updates our spackage to depend on the correct version of GOTCHA
depending on the version of UnifyFS being installed.
* Create VMD recipe
This is a new recipe to install VMD on Spack-managed hosts.
* Fix lint errors.
* Use plain Package
As per peer-review:
- Use Package to build
- Use configure to create a Makefile
- Use install to copy files to prefix directory
* Move VMD package to correct path, duh...
* Restructure description so first short paragraph can be used by module files.
* Add an empty line as suggested by peer-review. That's how you separate paragraphs.
* Remove extra spaces.
* Use setup_build_environment since that's where you're supposed to export OS variables. Thanks to peer-review for spotting this.
* Create VMD recipe
This is a new recipe to install VMD on Spack-managed hosts.
* Fix lint errors.
* Use plain Package
As per peer-review:
- Use Package to build
- Use configure to create a Makefile
- Use install to copy files to prefix directory
* Move VMD package to correct path, duh...
* Add Cubist (#16069)
* Add Cubist
* enhance recipe
* Not using OS module anymore
* remove white space
* Fix build shell
* make Flake8 happy
* use bash shell for build
* Convert it To MakefilePackage as per peer-review
* dbcsr: expose all options, check openblas feats (#16034)
* dbcsr: expose all options, check openblas feats
* dbcsr: use Ninja to build, ensure serialized tests
* dbcsr: add myself as maintainer
* MPark.Variant: GCC 7.3.1 Conflict (#16081)
* MPark.Variant: GCC 7.3.1 Conflict
Due to an ICE in this specific patch-release of GCC, compile
errors in downstream packages should be avoided with a clean
conflict.
* Fix superfluous spaces
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Fix typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Move VMD package to correct path, duh...
* Add an empty line as suggested by peer-review. That's how you separate paragraphs.
* New matlab versions (#16086)
* Add new version 1.1.1 (#16087)
* New package bonniepp added (#16091)
* openbabel: fix compilation errors (#16090)
- Disable maeparser as it is broken with CMake
- Added missing dependencies
* singularity: updated maintainer list (#16093)
* New version xrootd-4.11.3 (#16092)
* I added Gaussian 16. I also execute bsd/install to fix scripts instead of filtering them.
* revert VMD so only Gaussian is in my PR.
* revert VMD so only Gaussian is in my PR.
* revert VMD so only Gaussian is in my PR.
* I added myself as a package maintainer.
Co-authored-by: asmaahassan90 <31959389+asmaahassan90@users.noreply.github.com>
Co-authored-by: Tiziano Müller <tiziano.mueller@chem.uzh.ch>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Amjad Kotobi <amjadkotbi@gmail.com>
Co-authored-by: athanasio <athanasio@users.noreply.github.com>
Co-authored-by: Carlos Arango Gutierrez <arangogutierrez@gmail.com>
Allows `all` to be configured non-buildable in packages.yaml.
The following config would only allow zlib to be built by Spack, all other packages would have to be found as externals.
```
packages:
all:
buildable: False
zlib:
buildable: True
```
The openssl build process can use the wrong perl for
various reasons, including:
* Wrong value in PERL env var
* The build process first looks for `perl5`, which the
spack system does not provide, but some other
distributions provide it. That way, the build process can
end up using the wrong perl.
Stop all of these problems by explicitly setting PERL to
the dependency to be used.
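A minimal sketch of the fix, assuming Spack's usual setup_build_environment hook (the real openssl package code may differ):
```python
def setup_build_environment(self, env):
    # Point the build at Spack's perl dependency explicitly, so
    # configure cannot pick up a stray PERL value or a system perl5.
    env.set('PERL', self.spec['perl'].command.path)
```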
* Updated versions of COSMA.
* Added an empty line for formatting.
* Switched to sha256.
* Renamed gpu variant to cuda. Extending the CudaPackage base class.
This change also adds a code path through the spack ci pipelines
infrastructure which supports PR testing on the Spack repository.
Gitlab pipelines that run as a result of a PR (either creation or pushing
to a PR branch) will only verify that the packages in the environment
build without error. When the PR branch is merged to develop,
another pipeline will run which results in the generated binaries
getting pushed to the binary mirror.
Providing only $padding or ${padding} results in an attempt to
substitute a padding of maximum system path length, while leaving
room for the parts of the install path spack generates. Providing
$padding-<len> or ${padding-<len>} simply substitutes padding of
the specified length.
* Fixing the build directory for cardioid.
* These imports are no longer needed due to deletions.
Co-authored-by: Robert Blake <rob.c.blake.3@gmail.com>
* [maloc] created template
* [maloc] added some build dependencies
* [maloc] added homepage and description
* [maloc] added doc variant in an attempt to disable documentation. It still builds the docs, though, at least when a system doxygen is found. Having doxygen as a dependency here can bring python in and cause issues for dependents, even when listed as only a build dependency. Maybe this is a bug?
* [maloc] flake8 and fixme removal
* [maloc] shortened description
The pkg-config file of newer versions of at-spi2-core includes
dependencies for xtst, recordproto, inputproto and fixesproto, so they
have to be available at runtime as well.
Packages built with lmod core_compiler are placed in `Core`.
Other packages may also belong in `Core`. For example, python may be built with a proprietary compiler for performance, but still belong in the `Core` directory.
With this PR, lmod config can include a `core_specs` list. Any package that satisfies a spec in that list is placed in `Core`, regardless of its compiler or dependencies.
* Adding variant for python interface in HELICS
* Append python interface to pythonpath
* cleaning up blank lines
* Update var/spack/repos/builtin/packages/helics/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Clarify comments about QMCPACK-to-QE converter.
* Allow hdf5=serial with QE 6.4.1 + qmcpack, but apply filter_file.
* Amend comments about the use of the filter_file.
This improves the documentation for `spack external find` in several ways:
* Provide a code example of implementing `determine_spec_details` for a package
* Explain how to define executables to look for (and also e.g. that they are treated as regular expressions and so can pull in unexpected files).
* Add the "why" for a couple of constraints (i.e. explain that this logic only works for build/run deps because it examines `PATH` for executables)
* Spread the docs between build customization and packaging sections
* Add cross-references
* Add a label so that `spack external find` is linked from the command reference.
Modifications:
- [x] Travis now uses `bionic` as a default (`xenial` used for Python 3.5, `trusty` for Python 2.6)
- [x] Shell unit tests have been factored into their own run
- [x] `kcov` is built only for tests that upload coverage results
Overall with this we shave 3-4 mins. on each run and add an additional run of about 3 min. For some reason `kcov` 38 fails forwarding output when used with Python unit tests, so I used v34 for that and v38 (latest) for shell testing. Previously we were using v25.
- [x] `z3` needs a dependency on `py-setuptools`
- [x] `z3` has a run dependency on `python`, so we might as well make
building with the python bindings the default
`clingo` has some undocumented dependencies:
- `bison`, for which the default macos version is too old
- `re2c`, which isn't always available
Also, the python installation was not set up properly. Clingo by default
does an install in `~/.local`, so we need to disable that and add a few
other options to get things right.
- [x] add `bison` dependency
- [x] add `re2c` dependency
- [x] make `doxygen` dependency optional (and patch if needed)
- [x] add options to fix `python` build
- [x] make python build optional but on by default
* Add pmi support (required by ucx, ofi, and gni backends)
* Add support for ucx backend
* Add dependency on MPI for pmi=simplepmi, slurmpmi, or slurmpmi2
* Remove charmpp as an MPI provider since the changes in this PR can
add MPI as a dependency (mentioned previously)
* Install into transport_protocol-OS-arch subdirectory to match
default charmpp installation behavior (which helps dependents find it)
SKX includes AVX-512 extensions that consumer Skylake processors do
not have. Therefore, map Skylake to the prior arch to work on
these systems. Skylake-X processors will still map as the
skylake_avx512 spack arch and get the correct optimizations.
* Add maintainers.
Add variants for building with default earlier api versions.
* Update var/spack/repos/builtin/packages/hdf5/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add naromero77 as a maintainer for QMCPACK Spack package.
* Add QMCPACK 3.9.2
* Remove QE-to-QMCPACK wave function converter from QMCPACK Spack package. Already been moved to QE Spack package.
* Fix dependency of geant4 (amends #16497)
* Update geant4-data dependencies
* Reviewer comments (part 1/2)
* Reviewer comments (part 2/2)
* Update var/spack/repos/builtin/packages/geant4-data/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- add docstrings and make parameter names consistent in `relocate.py`
- Make `replace_prefix_*` and other functions private (as they are implementation details)
- remove unused function _replace_prefix_nullterm()
- Add unit tests for `relocate.py` functions
- add patchelf to Travis and use it during tests
- add hello_world fixture with a compiled binary, so we can test relocation
After migrating to `travis-ci.com`, we saw I/O issues in our tests --
tests that relied on `capfd` and `capsys` were failing. We've also seen
this in GitHub actions, and it's kept us from switching to them so far.
Turns out that the issue is that using streams like `sys.stdout` as
default arguments doesn't play well with `pytest` and output redirection,
as `pytest` changes the values of `sys.stdout` and `sys.stderr`. If these
values are evaluated before output redirection (as they are when used as
default arg values), output won't be captured properly later.
- [x] replace all stream default arg values with `None`, and only assign stream
values inside functions.
- [x] fix tests we didn't notice were relying on this erroneous behavior
This adds the `url` alternative `urls` to `package.all_urls`. With
this addition, one can again find new versions with
`spack versions <package>` for packages whose versions are populated
from mixin mirror `urls`.
Example: `util-macros` from the x.org mixin.
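A sketch of such a package (the mirror URL below is a hypothetical placeholder):
```python
class UtilMacros(AutotoolsPackage):
    # `urls` (a list) instead of `url`; all entries are scraped.
    urls = [
        'https://www.x.org/archive/individual/util/util-macros-1.19.1.tar.bz2',
        'https://mirror.example.org/xorg/util/util-macros-1.19.1.tar.bz2',
    ]
```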
* Non-interactive mode for spack checksum; allow passing 'package@version' to spack checksum
* Flake8 fixes
* Update checksum.py
Fix typo
* Update spack-completion script
* Automatically set non-interactive mode if more than one version passed
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add documentation and update spack-completion
* Flake8
* Rename option
* Update spack-completion
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update checksum.py
* Update stage.py
* Update create.py
Use batch mode when adding a new package
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This fixes some errors with setting up test configuration. These
errors do not cause current Spack tests to fail but do create
red herring issues elsewhere (see #15666). Fixing these errors
leads to more errors in tests that depended on the original
misconfigured state, so those are also addressed here.
This is an update to #16003 which accounts for some unit tests with
conflicting config/mutable_config fixtures. These conflicts were
not exposed until the mutable_config fixture was fixed. Details are
included below. The change which builds on #16003 is prefixed with
"(new)".
* For tests that use the real Spack package repository, the config
needs to avoid using MPI providers that are not intended to be
installed by Spack. Without this, it is possible that Spack tests
which concretize the MPI virtual will end up trying to use an
implementation that it shouldn't (e.g. one that is always
provided externally). See #15666 for an example.
* The mutable_config test fixture was not initializing the scope
roots to the right directories (so the resulting config was empty).
* The current_host fixture in the concretize.py tests was using the
config fixture rather than mutable_config, and was polluting the
config cache for other tests.
* One test in concretize.py was clearing a nonexistent cache
(PackagePrefs._packages_config_cache). This reference has been
removed.
* The test 'test_preferred_compilers' was depending on cross
test config pollution to succeed. The initial spec before
concretization has been updated to be explicit about
the desired result.
* (new) For tests that use install_mockery and mutable_config,
replace install_mockery with a separate install_mockery_mutable_config
fixture that is exactly the same as install_mockery but uses the
mutable_config fixture to avoid conflicts.
* Adapt to the latest Acts developments
A long time ago, the Acts project (whose name was then capitalized ACTS) used
to maintain multiple software repositories:
- The heart of the tracking toolkit was located in the `acts-core` repository
- Fast simulation extensions were located in the `acts-fatras` repository
- Advanced usage examples were located in the `acts-framework` repository
This multi-repository organization, however, has been a source of constant
pain, which is why the various projects were gradually merged into a single
mono-repo, called `acts`. Today, with the integration of `acts-framework`,
this merging process is reaching completion.
The present pull request adapts the Acts package to this evolution by...
- Renaming the package to `acts`, reflecting the new repository name
- Renaming the `test` variant to `unit_tests`, reflecting current CMake naming
- Adding the new build variants that were inherited from `acts-framework`
- Acknowledging the change of semantics of the `examples` variant, and only
supporting the new ones (as the former variant was almost unused)
- Liberally using alphabetical order to make the package code more readable
- Recording a large number of conflicts, some of which are introduced by the
merging of `acts-framework` and some of which already existed before
- Using the new capitalization of "Acts"
* Add acts v0.23
* Update dd4hep version requirement
* Add acts v0.22.1 bugfix
Fixed #15884.
Spack asks every package linked into an environment to tell us how
environment variables should be modified when a spack environment is
activated. As part of this, specs in an environment are symlinked into
the environment's view (see #13249), and the package calculates
environment modifications with *the default view as the prefix*.
All of this works nicely for pointing the user's environment at the view
*if* every package is successfully linked. Unfortunately, right now we
only track what specs "should" be in a view, not which specs actually
are. So we end up calculating environment modifications on things that
aren't linked into the view, and the exception isn't caught, so lots of
spack commands end up failing.
This fixes the issue by ignoring and warning about specs where
calculating environment modifications fails. So we can still keep using
Spack even if the current environment is incomplete.
We should probably also just avoid computing env modifications *entirely*
for unlinked packages, but right now that is a slow operation (requires a
lot of YAML parsing). We should revisit that when we have some better
state management for views, but the fix adopted here will still be
necessary, as we want spack commands to be resilient to other types of
bugs in `setup_run_environment()` and friends. That code is in packages
and we have to assume it could be buggy when we call it outside of builds
(as it might fail more than just the build).
* openfoam: correspond to build with Fujitsu compiler.
* openfoam: add rules for Fujitsu compiler (on linuxARM64)
- the Fujitsu compiler is a clang derivative, so use a modified
version of the clang rules if upstream does not supply anything
If using mpirun, the R sessions can be started with a wrapper script
that helps set up the R session cluster. Put this wrapper in the PATH so
it is easily accessible.
Add a `spack external find` command that tries to populate
`packages.yaml` with external packages from the user's `$PATH`. This
focuses on finding build dependencies. Currently, support has only been
added for `cmake`.
For a package to be discoverable with `spack external find`, it must define:
* an `executables` class attribute containing a list of
regular expressions that match executable names.
* a `determine_spec_details(prefix, specs_in_prefix)` method
Spack will call `determine_spec_details()` once for each prefix where
executables are found, passing in the path to the prefix and the path to
all found executables. The package is responsible for invoking the
executables and figuring out what type of installation(s) are in the
prefix, and returning one or more specs (each with version, variants or
whatever else the user decides to include in the spec).
The found specs and prefixes will be added to the user's `packages.yaml`
file. Providing the `--not-buildable` option will mark all generated
entries in `packages.yaml` as `buildable: False`
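A minimal sketch of a discoverable package under this interface (`Executable` and `Spec` come from Spack's package API; the version parsing is illustrative only):
```python
import re

class Cmake(Package):
    # Regular expressions matched against executable names on PATH.
    executables = [r'^cmake$']

    @classmethod
    def determine_spec_details(cls, prefix, exes_in_prefix):
        # Run one of the found executables and parse a version out.
        exe = Executable(list(exes_in_prefix)[0])
        output = exe('--version', output=str, error=str)
        match = re.search(r'cmake version (\S+)', output)
        if match:
            return Spec('cmake@{0}'.format(match.group(1)))
```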
Cray has two machine types. "XC" machines are the larger
machines more common in HPC, but "Cluster" machines are
also cropping up at some HPC sites. Cluster machines run
a slightly different form of the CrayPE programming environment,
and often come without default modules loaded. Cluster
machines also run different versions of some software, and run
a linux distro on the backend nodes instead of running Compute
Node Linux (CNL).
Below are the changes made to support "Cluster" machines in
Spack. Some of these changes are semi-related general upkeep
of the cray platform.
* cray platform: detect properly after module purge
* cray platform: support machines running OSs other than CNL
Make the Cray backend OS delegate to LinuxDistro when there is no cle_release file;
favor the backend over the frontend OS when names clash
* cray platform: target detection uses multiple strategies
This commit improves the robustness of target
detection on Cray by trying multiple strategies.
The first one that produces results wins. If
nothing is found only the generic family of the
frontend host is used as a target.
* cray-libsci: add package from NERSC
* build_env: unload cray-libsci module when not explicitly needed
cray-libsci is a package in Spack. The cray PrgEnv
modules load it implicitly when we set up the compiler.
We now unload it after setting up the compiler and
only reload it when requested via external package.
* util/module_cmd: more robust module parsing
Cray modules have documentation inside the module
that is visible to the `module show` command.
Spack module parsing is now robust to documentation
inside modules.
* cce compiler: uses clang flags for versions >= 9.0
* build_env: push CRAY_LD_LIBRARY_PATH into everything
Some Cray modules add paths to CRAY_LD_LIBRARY_PATH
instead of LD_LIBRARY_PATH. This has performance benefits
at load time, but leads to Spack builds not finding their
dependencies from external modules.
Spack now prepends CRAY_LD_LIBRARY_PATH to
LD_LIBRARY_PATH before beginning the build.
* mvapich2: setup cray compilers when on cray
previously, mpich was the only mpi implementation to support
cray systems (because it is the MPI on Cray XC systems).
Cray cluster systems use mvapich2, which now supports cray
compiler wrappers.
* build_env: clean pkgconf from environment
Cray modules silently add pkgconf to the user environment
This can break builds that do not use pkgconf.
Now we remove it from the environment and add it again if it
is in the spec.
* cray platform: cheat modules for rome/zen2 module on naples/zen node
Cray modules for naples/zen architecture currently specify
rome/zen2. For now, we detect this and return zen for modules
named `craype-x86-rome`.
* compiler: compiler default versions
When detecting compiler default versions for target/compiler
compatibility checks, Spack previously ran the compiler without
setting up its environment. Now we setup a temporary environment
to run the compiler with its modules to detect its version.
* compilers/cce: improve logic to determine C/C++ std flags
* tests: fix existing tests to play nicely with new cray support
* tests: test new functionality
Some new functionality can only be tested on a cray system.
Add tests for what can be tested on a linux system.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Since #9481 Python's None is not permitted as a value for
multi-valued (MV) variants. The string 'none' is used instead.
MV variants. The string 'none' is used instead.
Add the same fix for the amgx and lammps packages
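A sketch of the resulting pattern (the variant name and values here are illustrative):
```python
# The string 'none' is the explicit "no values" sentinel for a
# multi-valued variant; Python's None is not a permitted value.
variant('cuda_arch', values=('none', '60', '70'),
        default='none', multi=True)
```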
* CMake: fix https://github.com/spack/spack/issues/16453 with a patch addressing both libhugetlbfs and icpc warnings on Cray XC40 systems
* Including CMake v3.17.2 in the patched versions
* libspatialite
* flake8
* Added proper version constrain
* Update var/spack/repos/builtin/packages/libspatialite/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/libspatialite/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Otherwise, if I run `xenv` after `spack load py-xenv` it fails with:
```
Traceback (most recent call last):
File "/home/vavolkl/spack/opt/spack/linux-centos7-broadwell/gcc-8.3.0/py-xenv-develop_2018-12-20-lqbxakapsepqo5w3sjhhokj5o7c5jei2/bin/xenv", line 6, in <module>
from pkg_resources import load_entry_point
ModuleNotFoundError: No module named 'pkg_resources'
```
* [gaudi] fixes and patches
* Update var/spack/repos/builtin/packages/gaudi/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/gaudi/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [gaudi] add older versions and fold +tests into +optional
* [gaudi] set run environment
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
If spack is checked out in a git worktree (see [1]), all git-related
commands fail because the `spack_is_git_repo()`-check is not thorough
enough.
When developing in a feature-branch in a separate worktree, this is
annoying as all unittests regarding git-related spack commands fail,
cluttering the test results with false-positives.
[1]: https://git-scm.com/docs/git-worktree
Change-Id: I94b573a2c0e058e9ccc169e7ee6561626fbb06fd
We can control the shared/static build of CMake and the default in
Spack is to build shared libraries. The old, uncontrolled default
of this package is a static build.
* Revise description of patch variant.
* Add qmcpack variant. Apply QE-to-QMCPACK wave function converter patch.
* Clean-up, document, and re-organize.
* ELPA patches did not need the when=`+patch` variant.
* Need to be more precise here with QE version numbers.
* satisfies seems to be necessary here in order to get correct behaviour.
* Buglet with zlib link line.
* Update var/spack/repos/builtin/packages/quantum-espresso/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/quantum-espresso/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix for QE-to-QMCPACK wave function converter w.r.t. QE 6.3. Also adjust comments to reflect changes in code.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* For tests that use the real Spack package repository, the config
needs to avoid using MPI providers that are not intended to be
installed by Spack. Without this, it is possible that Spack tests
which concretize the MPI virtual will end up trying to use an
implementation that it shouldn't (e.g. one that is always
provided externally). See #15666 for an example.
* The mutable_config test fixture was not initializing the scope
roots to the right directories (so the resulting config was empty).
* The current_host fixture in the concretize.py tests was using the
config fixture rather than mutable_config, and was polluting the
config cache for other tests.
* One test in concretize.py was clearing a nonexistent cache
(PackagePrefs._packages_config_cache). This reference has been
removed.
* The test 'test_preferred_compilers' was depending on cross
test config pollution to succeed. The initial spec before
concretization has been updated to be explicit about
the desired result.
* dev-build: --drop-in <shell>
Add a `--drop-in <shell>` option to `spack dev-build`.
This option will automatically run a
`spack build-env <spec> -- <shell>` at the end of a `dev-build`, e.g.
to quickly drop-and-devel into a build phase of a package.
Example usage:
```
spack dev-build --before cmake --drop-in bash openpmd-api@develop
```
* build_env: drop in unit test
Co-authored-by: Greg Becker <becker33@llnl.gov>
mfem variant hypre is now rolled into variant mpi - so update spec accordingly
mfem@4.0.1-xsdk+superlu-dist is broken and unsupported - so disable it
With the addition of py-petsc4py@3.13.0, the concretizer gets confused and does not pick py-petsc4py@3.12.0 as a compatible dependency with petsc@3.12, so specify it manually.
Also, depends_on('py-libensemble@0.5.2+petsc4py ^py-petsc4py@3.12.0') causes the concretizer to hang forever.
Generally speaking, errors that are encountered when attempting to load
command extensions now terminate the running Spack instance.
* Added new exceptions `spack.cmd.PythonNameError` and
`spack.cmd.CommandNameError`.
* New functions `spack.cmd.require_python_name(pname)` and
`spack.cmd.require_cmd_name(cname)` check that `pname` and `cname`
respectively meet requirements, throwing the appropriate error if not.
* `spack.cmd.get_module()` uses `require_cmd_name()` and passes through
exceptions from module load attempts.
* `spack.cmd.get_command()` uses `require_cmd_name()` and invokes
`get_module()` with the correct command-name form rather than the
previous (incorrect) Python name.
* Added New exceptions `spack.extensions.CommandNotFoundError` and
`spack.extensions.ExtensionNamingError`.
* `_extension_regexp` has a new leading underscore to indicate expected
privacy.
* `spack.extensions.extension_name()` raises an `ExtensionNamingError`
rather than using `tty.warn()`.
* `spack.extensions.load_command_extension()` checks command source
existence early and bails out if missing. Also, exceptions raised by
`load_module_from_file()` are passed through.
* `spack.extensions.get_module()` raises `CommandNotFoundError` as
appropriate.
* Spack `main()` allows `parser.add_command()` exceptions to cause
program end.
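A minimal sketch of the two name checks described above (the validation pattern is an assumption, not the exact rule in spack.cmd):
```python
import re

class PythonNameError(Exception):
    """Raised on an invalid Python module name."""

class CommandNameError(Exception):
    """Raised on an invalid Spack command name."""

def require_python_name(pname):
    # Python module names: letters, digits, underscores.
    if not re.match(r'^[A-Za-z_][A-Za-z0-9_]*$', pname):
        raise PythonNameError(pname)

def require_cmd_name(cname):
    # Command names use hyphens (e.g. 'hello-world'); the equivalent
    # Python name replaces them with underscores.
    try:
        require_python_name(cname.replace('-', '_'))
    except PythonNameError:
        raise CommandNameError(cname)
```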
Tests:
* More common boilerplate has been pulled out into fixtures including
`sys.modules` dictionary cleanup and resource-managed creation of a
simple command extension with specified contents in the source file
for a single named command.
* "Hello, World!" test now uses a command named `hello-world` instead of
`hello` in order to verify correct handling of commands with hyphens.
* New tests for:
* Missing (or misnamed) command.
* Badly-named extension.
* Verification that errors encountered during import of a command are
propagated upward.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Add more checksummed versions
* Remove all versions not supported by autotools build method
* Add old build system for older versions
* Add suggested changes
* add amgx package
* add amgx variants for mkl and magma support
* fix typo in cmake option
* flake8 fix formatting
* Apply suggestions from code review - use mkl virtual provider
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review - fix copypasta
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add additional variants to netcdf-fortran
* Fix duplicate variant
* Clean up variants based on review feedback
* Addtional variant changes
* Convert jna variant to single line
* Fix proper version constraints for jna variant
* cp2k: prettify arch-file, call pkg-config directly
this allows re-using the arch-file without having to load the complete
Spack environment, for example after a dev-build
* cp2k: use consistency check instead of blas lib enum
this makes using other BLAS/LAPACK implementations possible without
explicitly adding support for them
* cp2k: add basic support for Cray and XL Compilers, correct Intel fp mode
* cp2k: add myself as maintainer
* cp2k: use "master" to denote the git version
* cp2k: use spack_cc/fc/cxx when possible, set CXX explicitly
* cp2k: set __MKL when using the MKL, not just the Intel compiler
* cp2k: drop self. when referencing spec where possible
* cp2k: add forgotten elpa+openmp dep
* cp2k: set C++14 for recent versions
* Add new version: "Abseil LTS branch, Feb 2020, Patch 1"
* Build shared libraries by default with new version
* Older versions do not support building shared libraries
* Update bat and make the url dynamic
- Now, depending on the version, it will calculate
the url
- This also fixes a weird issue that was reported
on Darwin; back when I reported that Rust wasn't
linking properly on Darwin (#15887), in the comment
by hartzell, I was also experiencing this issue
* Remove unnecessary stuff
- Removes the need for LLVM
- Removes the need for version calculation.
* Keep the versions on 1 line
* Pass flake8 tests
* Add new package exa
* Format and fix a silly typo
* Fix SHA256 SUM and make URL calculation dynamic
* Remove unnecessary URL calculation
* Update package.py
* Keep the version on 1 line
* Pass flake8 checks
* Add ShengBTE
Adding a new package, ShengBTE. I tried adding it as a MakefilePackage, using build_directory = 'Src', but it was as if build_directory got ignored and make complained about the target not being found; using the make function here instead of os.system, I got errors that config.f90 was not found, even though it is available under Src.
* more enhancements
fix lint; use mkl spec; use build_directory variable
* one more fix
* Use Makefile template
* Update var/spack/repos/builtin/packages/shengbte/package.py
use mkl instead of intel-mkl as a dependency
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/shengbte/package.py
update recipe as suggested by reviewer
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* enhance recipe
remove white space; changes as suggested by reviewer
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixed SSL pathing for older versions of Python (i.e. @:3.6.999).
* Fixed an issue where the 'python~ssl' variant wasn't properly being respected.
* Improved the '~ssl' patch by making it functional instead of diff-based (enables 3.X.Y patches).
* Fixed comment formatting to satisfy 'flake8' style requirements.
On Power architectures cuDNN will install in a target directory. This
sets cuDNN_ROOT to point to the subdirectory to help dependents use
this install.
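A hedged sketch of how a package might expose that subdirectory (the hook is Spack's; the target directory name is illustrative):
```python
import os

def setup_dependent_build_environment(self, env, dependent_spec):
    # On Power, the real install lands under a target subdirectory;
    # 'targets/ppc64le-linux' here is a hypothetical example.
    env.set('cuDNN_ROOT',
            os.path.join(self.prefix, 'targets', 'ppc64le-linux'))
```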
The singularity info should actually suggest where you might find the info about the post-install steps.
Co-authored-by: george.hartzell <george.hartzell@sana.com>
This PR introduces trivial refactoring in:
- `get_existing_elf_rpaths`
- `get_relative_elf_rpaths`
- `get_normalized_elf_rpaths`
- `set_placeholder`
mainly to be more consistent with practices used in other
parts of the code and to simplify functions locally. It also
adds or reworks unit tests for these functions and extends
their docstrings.
Co-authored-by: Patrick Gartung <gartung@fnal.gov>
Co-authored-by: Peter J. Scheibel <scheibel1@llnl.gov>
Building the `py-jupyter` stack on macOS with AppleClang breaks on
the `py-qtconsole` -> `qt +opengl` package build
environment setup with:
```
==> Error: AttributeError: Query of package 'mesa' for 'libs' failed
...
==> Error: Failed to install qt due to ChildError: AttributeError: Query of package 'mesa' for 'libs' failed
```
This tries to add more library targets built by `mesa` to avoid this.
* LLVM: Python Dependency
Effort to expose the linked python library when building LLVM.
This might fix the forward propagation of libintl that comes
with the static python library build on darwin.
* LLDB Py: Remove Ignored Old Flags
Changed in LLVM 10.0+
Packages in Spack are classes, and we need to be able to execute class
methods on mock packages. The previous design used instances of a single
MockPackage class; this version gives each package its own class that can
spider dependencies. This allows us to implement class methods like
`possible_dependencies()` on mock packages.
This design change moves mock package creation into the
`MockPackageMultiRepo`, and mock packages now *must* be created from a
repo. This is required for us to mock `possible_dependencies()`, which
needs to be able to get dependency packages from the package repo.
Changes include:
* `MockPackage` is now `MockPackageBase`
* `MockPackageBase` instances must now be created with
`MockPackageMultiRepo.add_package()`
* add `possible_dependencies()` method to `MockPackageBase`
* refactor tests to use new code structure
* move package mocking infrastructure into `spack.util.mock_package`,
as it's becoming a more sophisticated class and it gets lost in `conftest.py`
The variants table in `spack info` is cramped, as the *widest* it can be
is 80 columns. And that's actually only sort of true -- the padding
calculation is off, so it still wraps on terminals of size 80 because it
comes out *slightly* wider.
This change looks at the terminal size and calculates the width of the
description column based on it. On larger terminals, the output looks
much nicer, and on small terminals, the output no longer wraps.
Here's an example for `spack info qmcpack` with 110 columns.
Before:
Name [Default] Allowed values Description
==================== ==================== ==============================
afqmc [off] on, off Install with AFQMC support.
NOTE that if used in
combination with CUDA, only
AFQMC will have CUDA.
build_type [Release] Debug, Release, The build type to build
RelWithDebInfo
complex [off] on, off Build the complex (general
twist/k-point) version
cuda [off] on, off Build with CUDA
After:
Name [Default] Allowed values Description
==================== ==================== ========================================================
afqmc [off] on, off Install with AFQMC support. NOTE that if used in
combination with CUDA, only AFQMC will have CUDA.
build_type [Release] Debug, Release, The build type to build
RelWithDebInfo
complex [off] on, off Build the complex (general twist/k-point) version
cuda [off] on, off Build with CUDA
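An illustrative sketch of the sizing idea (Spack has its own terminal-size helper; `shutil` is used here only for the sketch):
```python
import shutil

def description_width(name_w=20, values_w=20, padding=6):
    cols = shutil.get_terminal_size(fallback=(80, 24)).columns
    # Give the description whatever space remains after the fixed
    # columns, but never less than a readable minimum.
    return max(30, cols - name_w - values_w - padding)
```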
This allows horovod to be built with frameworks=pytorch,tensorflow.
I tracked down the crash I observed in #15719, where loading torch
before tensorflow would cause a crash in:
google::protobuf::internal::(anonymous
namespace)::InitSCC_DFS(google::protobuf::internal::SCCInfoBase*)
The solution is to make tensorflow compile against the protobuf
version Spack provides, instead of allowing it to use its own.
It's likely we'll want to go after some of the others
that are listed in third_party/systemlibs/syslibs_configure.bzl
in the future.
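A hedged sketch of the mechanism: TensorFlow's build reads a `TF_SYSTEM_LIBS` environment variable listing vendored libraries to replace with external ones (how the real py-tensorflow package sets this may differ):
```python
def setup_build_environment(self, env):
    # Link against the external protobuf instead of the vendored copy.
    env.set('TF_SYSTEM_LIBS', 'com_google_protobuf')
```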
* Update var/spack/repos/builtin/packages/py-sqlparse/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package py-sqlparse
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The major building blocks in many software stacks:
- CPython
- CMake (libuv)
do not build on macOS with GCC. The main problem is that some macOS
framework includes pull in objective-c code and that code does get
misinterpreted as (invalid) C by GCC by default.
Last month VTK-m released its latest version, `v1.5.1`. This new
release only contains bugfixes for compiler errors / warnings.
- Depends on CMake >= 3.12
- Set VTKm_NO_ASSERT=ON by default
- add maintainers
Signed-off-by: Vicente Adolfo Bolea Sanchez <vicente.bolea@kitware.com>
Update the compiler config with the bootstrapped compiler when it is already installed, and add config defaults to the code so the mutable_config test fixture works.
hwloc depends on MPI when netloc is enabled. Note that OpenMPI depends on
netloc, so hwloc cannot use OpenMPI as the MPI provider when netloc is
enabled (this would result in a cyclic dependency).
To specify an environment for a command, the user can specify
"spack -e <env>". The documentation incorrectly specified "-E" (which
is actually used to ignore any implicit use of environments).
While building _visit_, I ran into an undefined symbol at link time. I tracked
the missing dependency to _libsm_ needing to know about _libuuid_ at link time.
* phist: add int64 variant and resulting conflicts and dependencies
* phist: use Trilinos TPLs as soon as they are in the spec, not just if +trilinos is explicitly set
and remove a redundant depends-statement
* phist: use int as gotype for Trilinos dependency if ~int64
* phist: new version 1.9.0
* phist: remove trailing whitespace
* phist: updated checksum (version tag was moved)
* Fix: Flex Reconfigure
Teach the `flex` package how to reconfigure itself when needed.
Fix #11551
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* Autoreconf: only when actually desired
Co-authored-by: Andrew W Elble <aweits@rit.edu>
If the Spack compiler wrapper encounters any "-isystem" option, then
when adding include directories for Spack dependencies, Spack will
use "-isystem" instead of "-I". This prevents Spack-generated "-I"
options from overriding the "-isystem" options generated by the build
system. To ensure that build-system "-isystem" directories are
searched first, Spack places all of its inserted "-isystem"
directories after.
The new ordering of -isystem includes is:
* -isystem from build system (not system directories)
* -isystem from Spack
* -isystem from build system (for directories like /usr/include)
The prior order of "-I" arguments is preserved (although as of this
commit Spack no longer generates -I if -isystem is detected):
* -I from build system (not system directories)
* -I from Spack (only if there are no "-isystem" options)
* -I from build system (for directories like /usr/include)
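An illustrative restatement of the ordering rule in Python (Spack's real wrapper is a shell script):
```python
def spack_include_flags(build_flags, spack_dirs):
    # If the build system used -isystem anywhere, Spack follows suit
    # so its directories cannot override the build system's ordering.
    use_isystem = any(f == '-isystem' or f.startswith('-isystem')
                      for f in build_flags)
    flag = '-isystem' if use_isystem else '-I'
    flags = []
    for d in spack_dirs:
        flags += [flag, d]
    return flags
```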
* New package: sumo
This PR adds the sumo package, as well as the fox package as a
dependency. It also updates and adds some fixes for openscenegraph.
For fox, the patch is for the development version. That patch should not
be necessary in future versions as it has been applied upstream. The
stable version is 1.6.57 and is marked as preferred. This is the version
needed for sumo.
Added dependencies for openscenegraph as well as set constraints on qt
versions.
* Update var/spack/repos/builtin/packages/sumo/package.py
I had intended to set this version constraint but somehow did not.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add dependency types to sumo recipe
- googletest: 'test'
- swig: 'build'
- java: 'build', 'run'
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since #16132, we've consolidated the setting of FORCE_UNSAFE_CONFIGURE to
`autotools.py`, so we don't need to use it in packages like `coreutils`,
in our commands, or in our container recipes.
- [x] Remove FORCE_UNSAFE_CONFIGURE from packages
- [x] Remove FORCE_UNSAFE_CONFIGURE from container recipes
- [x] Remove FORCE_UNSAFE_CONFIGURE from `spack ci` command
* [mfem] A few updates: add 'strumpack' variant; add 'zlib'
variant (same as 'gzstream'); fix optimization flag
for v4.0.
* [mfem] flake8 fix
* [mfem] Add version 4.1
* [mfem] Add/tweak some 'conflicts' directives.
* [gslib] Add new release versions + 'develop' version.
* [petsc] Restrict hdf5 version to <= 1.10.99 since 1.12.0 fails
* [metis] Use the original metis url for v4.0.3.
* [conduit] Remove restrictions to the used hdf5 variant to allow
building with other packages that use hdf5, e.g. petsc.
* [mfem] Few updates:
* Replace the 'gzstream' variant with 'zlib' variant.
* Do not add system library paths with -L flags.
* Allow '+pumi+shared' variant.
* Update the 'test_builds.sh' script.
* [occa] Add version 1.0.9.
* [mfem] Some OCCA and RAJA updates.
* [gslib] Fix the build for new versions of the library.
* [mfem] Add 'gslib' variant for GSLIB.
* [mfem] Add 'cuda' variant.
* [mfem] Add 'libceed' variant + a few more tweaks.
* [mfem] Add 'umpire' variant.
* [ceed] Add a draft for v3.0. Not tested. Just made sure that
concretization works for 'ceed' and 'ceed+cuda'.
* [nek] Fix Nek5000/NekCEM
* [nek] Add Nek5000-v19 & polishing Nek packages
* [flake8] Fix flake8 failure
* petsc: use of HDF5 does not care about +hl+fortran
* [petsc] Temporarily allow any hypre version with petsc@develop.
[ceed] Remove the requirement for hypre@develop.
* [libceed] Do not explicitly set NVCCFLAGS for v0.5 and later.
* [laghos] Add version 3.0, pointing to dev branch for now.
Do not set CXX at the make command line.
Simplify the dependency directives a little.
[ceed] Use laghos v3.0 for ceed v3.0.0.
* [laghos] Keep the injection of CXX in the makefile for laghos
versions <= 2.0.
* [nekcem] Recover hash-versions used by older versions of the
'ceed' package.
* [occa] Disable hip autodetection because it fails on some machines.
* [laghos] Update v3.0 with the actual release source.
* [suite-sparse] Explicitly add the c11 flag to CFLAGS.
* Update package.py (#15749)
* [magma] Add forgotten specification of the 'cuda_arch' variant.
* [ceed] Use magma v2.5.3 for ceed v3.0.
* libceed-0.6
* mfem: depend on libceed 0.6:, not 0.6.0:
* [libceed] Add 'magma' variant -- enable MAGMA backend.
* [ceed] In v3.0, use '+magma' variant of libceed when cuda is enabled.
* Initial package for Remhos (needs to be updated with the actual sha256)
* Adding Remhos to CEED-3.0, for now @develop
* petsc: add 3.13.0 (using petsc-lite) and 3.12.5
* ceed: update to petsc@3.13.0:3.13.99
* Temporary fix
* [nekcem] Add hash-version for ceed v3.0.
* [nek5000] Simplify source urls.
* [nektools] Use the same sources and versions as in nek5000.
* [ceed] Update Nek-related package versions.
* libceed: add v0.6 portability fix
* libceed: better v0.6 portability fix
* Adding Remhos 1.0 release in CEED-3.0
* Updating hash for Remhos-1.0
* [petsc] Add cuda variant.
* [libceed] Flake8 fix.
* [petsc] Add cuda variant.
* [ceed] Fix the OCCA version to 1.0.9. Enable petsc+cuda when
compiling ceed@3.0.0+cuda.
* nek5000: fix python 2.7+ syntax
* [laghos] Fix testing.
* [remhos] Fix testing.
* [remhos] For testing use the 'tests' target instead of 'test'.
* Add/update the maintainers for ceed, libceed, mfem, laghos, and remhos.
* [ceed] Remove unnecessary dependencies.
* libceed: activate AVX when supported
Co-authored-by: Thilina Rathnayake <thilinarmtb@gmail.com>
Co-authored-by: Jed Brown <jed@jedbrown.org>
Co-authored-by: Stan Tomov <tomov@eecs.utk.edu>
Co-authored-by: Tzanio <tzanio@llnl.gov>
This commit sets the `FORCE_UNSAFE_CONFIGURE` environment variable to 1 in autotools builds.
We see a lot of builds popping up and complaining about `FORCE_UNSAFE_CONFIGURE`. This behavior is not actually part of `autoconf` per se. It comes from this patch to `mknod.m4`, which is used by a lot of autoconf builds:
* https://lists.gnu.org/archive/html/bug-gnulib/2010-07/msg00282.html
Which originated from this problem that someone had on AIX:
* https://lists.gnu.org/archive/html/bug-gnulib/2010-07/msg00279.html
The gist of the problem seems to be that they want to check whether `mknod` can do something as root, but instead of checking whether they're running as root and using `su` or something to test this, they just made it harder to run `configure` as root.
This seems very ad hoc and this is one of many checks that are run as root in `configure`. Many of them run before this check, so it's not clear that the `FORCE_UNSAFE_CONFIGURE` thing is even preventing bad things from happening.
So:
1. This only happens in `autotools` builds, so we should go ahead and put it into `autotools.py` instead of in the global build environment, and
2. The variable does too little and provides a false sense of security in the first place, so we'll just disable it and avoid the nuisance. If we really feel strongly about this we can put some warnings in Spack about running as root, but at the top level, not in the middle of an already running script like `configure`.
* Throw an error at spack install invocation instead of most of the way through the build process when cuda_arch is unspecified.
* Clean-up of CMake booleans. No actual change.
* Use CMake variables for hwloc and libelf installation directories and avoid injecting extra flags into CMAKE_CXX_FLAGS
* Conflict should only exist for +cuda variant.
* Biopandas python package
* Update var/spack/repos/builtin/packages/py-biopandas/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-biopandas/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* remove scipy dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
At least the v2.0.2.0 tarball contains compiled object files etc., which
cause the build to fail on other architectures (ppc64le in particular), so
this patch adds a `make clean` after configuring first.
* SourceForge: Mirror Mixin
Add a mixin class for direct `CNAME`s to sourceforge mirrors.
Since the main gateway servers are often down, this could reduce
timeouts and fetch errors for sourceforge.net hosted software.
* SourceForge: unspectacular mirror replacement
add mirrors to all sourceforge packages with trivial
download logic.
tested fetch of latest version of each of these packages
with various mirrors before committing.
* SourceForge: xz
the author's homepage is chronically overrun and this is the official
upload with many mirrors.
* macOS: Fix emacs Linking
Fix linking issue of emacs on macOS (clang and gcc).
Applies the same work-around as conda-forge:
b051f6c928/recipe/build.sh
Homebrew avoids this by linking against the system ncurses lib:
https://github.com/Homebrew/homebrew-core/blob/master/Formula/emacs.rb
* ncurses: fix outdated variant comment
this comment was built on the assumption that gnutls
triggers a termlib dependency in emacs. That's not the
case; ncurses itself depends on termlib when built with
this feature.
libpng still has its sourceforge page but is actively being
developed on GitHub.
Since the sourceforge urls are too often down (as seen in
my nightly CI/CD tests), just switch the download source to
GitHub instead.
`DYLD_LIBRARY_PATH` can frequently break builtin macOS software when
pointed at Spack libraries. This is because it takes *higher* precedence
than the default library search paths, which are used by system software.
`DYLD_FALLBACK_LIBRARY_PATH`, on the other hand, takes lower precedence.
At first glance, this might seem bad, because the software installed by
Spack in an environment needs to find *its* libraries, and it should not
use the defaults. However, Spack's installations are always `RPATH`'d,
so they do not have this problem.
`DYLD_FALLBACK_LIBRARY_PATH` is thus useful for things built in an
environment that need to use Spack's libraries, that don't set *their*
RPATHs correctly for whatever reason. We now prefer it to
`DYLD_LIBRARY_PATH` in modules and in environments because it helps a
little bit, and it is much less intrusive.
This commit removes the DYLD_LIBRARY_PATH variable from the default
modules.yaml for darwin. The rationale behind deleting this
environment variable is that paths in this environment variable take
precedence over the default locations of libraries (usually the
install path of the library), which can lead to linking errors in some
circumstances. For example, executables intended to link with Apple's
system BLAS and LAPACK will instead link to a spack-installed
implementation (e.g., OpenBLAS), causing runtime errors.
These errors are resolved by instead relying on paths set in
DYLD_FALLBACK_LIBRARY_PATH, which is lower in precedence than default
locations of libraries.
provided (#15662).
Prior to this fix, the checked Spec object would not be populated, and
concretization would fail.
Co-authored-by: Marc Allen <mrcall@amazon.com>
Mesa links against libtinfo so needs to depend on ncurses. It also needs
a little help finding the library directory so an LDFLAGS configure
option is added.
* MPark.Variant: GCC 7.3.1 Conflict
Due to an ICE in this specific patch-release of GCC, compile
errors in downstream packages should be avoided with a clean
conflict.
* Fix superfluous spaces
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Fix typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add Cubist
* enhance recipe
* Not using OS module anymore
* remove white space
* Fix build shell
* make Flake8 happy
* use bash shell for build
* Convert it To MakefilePackage as per peer-review
* Update var/spack/repos/builtin/packages/neo4j/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package: neo4j
* refine neo4j package
* fix flake8 warning
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* update: memsurfer with python3
* flake8 compliance
* Update var/spack/repos/builtin/packages/memsurfer/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/memsurfer/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/memsurfer/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* removed build_type preferences at adamjstewart's suggestion
* Added build/run dependency on python3.7
as suggested by adam stewart
* more flake8 horror!
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`spack test` has a spurious '[+] ' in the output:
```
lib/spack/spack/test/install.py .........[+] ......
```
Output is properly suppressed:
```
lib/spack/spack/test/install.py ...............
```
* sonLib package as required by the HAL toolkit
* cleanup
* Update var/spack/repos/builtin/packages/sonlib/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/sonlib/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Makes the following changes:
* (Fixes #15620) tty configuration was failing when stdout was
redirected. The implementation now creates a pseudo terminal for
stdin and checks stdout properly, so redirections of stdin/out/err
should be handled now.
* Handles terminal configuration when the Spack process moves between
the foreground and background (possibly multiple times) during a
build.
* Spack adjusts terminal settings to allow users to to enable/disable
build process output to the terminal using a "v" toggle, abnormal
exit cases (like CTRL-C) could leave the terminal in an unusable
state. This is addressed here with a special-case handler which
restores terminal settings.
Significantly extend testing of process output logger:
* New PseudoShell object for setting up a master and child process
and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
object
* Added `uniq` function which takes a list of elements and replaces
any consecutive sequence of duplicate elements with a single
instance (e.g. "112211" -> "121")
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The performance improvements done in #14693 where leaving the DB in an inconsistent state when specs were removed from it. This PR updates the DB internal state whenever the DB is written to a file.
Note that we still cannot properly enumerate installed dependents, so there is a TODO in this code. Fixing that will require the dependents dictionaries in specs to be re-keyed (either by hash, or not keyed at all -- a list would do). See #11983 for details.
Reading the database repeatedly can be quite slow. We need a way to speed
up Spack when it reads the DB multiple times, but the DB has not been
modified between reads (which is nearly all the time).
- [x] Add a file containing a unique uuid that is regenerated at database
write time. Use this uuid to suppress re-parsing the database
contents if we know a previous uuid and the uuid has not changed.
- [x] Fix mutable_database fixture so that it resets the last seen
verifier when it resets.
- [x] Enable not rereading the database immediately after a write. Make
the tests reset the last seen verifier in between tests that use the
database fixture.
- [x] make presence of uuid module optional
Removed the code that was converting the old index.yaml format into
index.json. Since the change happened in #2189 it should be
considered safe to drop this (untested) code.
If a user invoked "spack env activate example-henv", Spack would
mistakenly interpret the "-h" from "example-henv" as the "-h" option.
This commit allows users to create and activate environments with
"-h" in the name.
This issue existed for bash shell support as well as csh support, and
this commit addresses both, along with some other unrelated csh
support issues.
* only override spec prefix for non-external packages
* add test that environment shell modifications respect explicitly-specified prefixes for external packages
* add clarifying comment
spack.util.environment_after_sourcing_files compares the local
environment against a shell environment after having sourced a
file; but this ends up including the default shell profile and
rc, which might differ from the local environment.
To change this, compare against the default shell environment,
expressed here as 'source /dev/null'.
According to my nightly CI/CD tests, x.org is another large provider
of software in common build chains that is often down.
Added a hand-selected amount of mirrors that is well up-to-sync.
Tested with `util-macros` that has a quite "recent" patch release.
Other packages to follow in an individual PR.
Makes the following changes:
* (Fixes#15620) tty configuration was failing when stdout was
redirected. The implementation now creates a pseudo terminal for
stdin and checks stdout properly, so redirections of stdin/out/err
should be handled now.
* Handles terminal configuration when the Spack process moves between
the foreground and background (possibly multiple times) during a
build.
* Spack adjusts terminal settings to allow users to to enable/disable
build process output to the terminal using a "v" toggle, abnormal
exit cases (like CTRL-C) could leave the terminal in an unusable
state. This is addressed here with a special-case handler which
restores terminal settings.
Significantly extend testing of process output logger:
* New PseudoShell object for setting up a master and child process
and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
object
* Added `uniq` function which takes a list of elements and replaces
any consecutive sequence of duplicate elements with a single
instance (e.g. "112211" -> "121")
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* MAINT: Charliecloud OSX error
* raise an appropriate error when attempting to build
Charliecloud on Mac OSX, since it will otherwise fail
with a more confusing configure stage link check failure
* Update var/spack/repos/builtin/packages/charliecloud/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* MAINT: PR 16049 revision
* remove an unused import
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Moved link to the right place in the docs
* Fixed a few minor issues in extensions docs
Fixed a typo, added a subsubsection for better
navigation, reworded "modules in Python" as
"Python packages"
* pfft: fix to handle 'precision' variant in fftw
pfft had been checking for +double, etc. in fftw spec, which no longer
are present (replaced by Multivalued variant precision).
* pfft: fix to handle 'precision' variant in fftw
pfft had been checking for +double, etc. in fftw spec, which no
longer are present (replaced by Multivalued variant precision).
(amended to use more idiomatic checks as suggested by @alalazo)
sourceware.org is often quite overrun and times out or results in
certificate errors.
Since libffi, bzip2, elfutils, etc. are quite fundamental in
build chains, lets add some official mirrors.
libffi, bzip2, elfutils, lvm2, valgrind: add mirrors
* Patch Mathematica
Mathematica installer moves all files and directories from installation directory to a backup one. The problem is that it also moves .spack to this backup location. Once it's done it does not move .spack back where it was.
My patch creates a copy of .spack to /tmp then moves it back right before exiting the install call.
* Make lint happy
* Use Spack native copy()
As suggested in peer-review let's:
- Copy .spack to stage directory so I don't have to use random
- Use Spack native copy() to do these operations
* Use join_path to create paths
As per peer-review suggestion:
- Use join_path to create paths
- Use copy_tree since we're copying a directory that could have sub-directories
* Update package.py to include py-notebook 6.0.3 and sha
* Update package.py
* [py-notebook] updated py-tornado version requirements
* [py-notebook] reworked and reordered for readability
* [py-notebook] updated version requirement for py-jupyter-client
* [py-notebook] updated version requirements for py-jupyter-core
Co-authored-by: ehdeec <ehdeec@rit.edu>
* new package: BART
This PR adds the BART (Berkeley Advanced Reconstruction Toolset)
package.
Despite the presence of CMake files, this package builds with a
Makefile. It looks like the project is moving away from cmake. The patch
for MKL has been committed upstream so should only be necessary for this
version of BART. The Makefile patch is meant for working with Spack and
would not be useful upstream. The bart scripts are still setup to use
bart with the subcommands being individual binaries. This patches those
to use the single binary with built-in subcommands and assumes that
spack is providing the TOOLBOX environment variable and setting PATH.
* Update var/spack/repos/builtin/packages/bart/package.py
Yes, '==' make more sense for a single string.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* The python dependencies are run time only.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New patch release SLEPc 3.13.1
* Update var/spack/repos/builtin/packages/slepc/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The performance improvements done in #14693 where leaving the DB in an inconsistent state when specs were removed from it. This PR updates the DB internal state whenever the DB is written to a file.
Note that we still cannot properly enumerate installed dependents, so there is a TODO in this code. Fixing that will require the dependents dictionaries in specs to be re-keyed (either by hash, or not keyed at all -- a list would do). See #11983 for details.
* new package: py-youtube-dl + fixes for dependencies
This PR adds the py-youtube-dl program. In addition, there are a couple
of dependency packages that needed to be updated.
* ffmpeg
This is needed by py-youtube-dl. However, the spack ffmpeg recipe does
not include a lot of options, specifically, a dependency on openssl for
working with the https protocol.
- Added updated version.
- Added variants for the different licensing options.
- Added "meta" variants for X and drawtext. These turn on/off several
options.
- Set variants and dependencies for many options. The defaults are based
on the configuration settings in ffmpeg.
- Set dependencies that were missing or that will likely get pulled in
from the system.
* libxml2
The ffmpeg+libxml2 variant initially failed to build. The issue is that
libxml2 sets the headers property to
include_dir = self.spec.prefix.include.libxml2
The ffmpeg configure looks for prefix.include and fills in the rest.
This could probably be patched in ffmpeg but the headers property in the
libxml2 recipe is not consistent with the environment module or the
pkgconfig file, both of which set the headers path to prefix.include.
This PR sets the libxml2 headers property to
include_dir = self.spec.prefix.include
A spot check of a few libxml2 dependents did not rreveal any problems
with this change.
* Comment out libxml2 dependency in ffmpeg
The header property issue of the spack libxml2 package will need to be
resolved in another PR before libxml2 can be enabled in ffmpeg.
* new package: openmm
* dependency adjustments
* 1. modify dependencies
2. openmm dynamically compiles cuda kernels during runtime,
attempt to set up an environment that will work.
* Update var/spack/repos/builtin/packages/openmm/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This adds a boolean 'libtirpc' variant to the hdf package.
Default is false, which will reproduce previous behavior (which
was to rely either on system xdr headers/library, or have hdf use
it's builtin xdr lib/headers (which are only for 32 bit))
If true, a dependency is added on 'libtirpc', and the LIBS and
CPPFLAGS are updated in the configure command to find the libtirpc
library and xdr.h header.
This is needed as RHEL8 (and presumably other distros have or will be)
removed xdr.h from glib-headers package,which was breaking the previous
behavior of using system xdr headers.
* opencv: assorted fixes
1. depends on blas when +lapack
2. set cuda nvcc flags for cuda_arch
3. let cuda/contrib builds work
4. depends on hdf5 when cuda/contrib
5. depends on ant when +java
6. allow protobuf version to be different
7. let opencv recompile it's protoc files.
* ant is a build-time dependency
* register +cuda~contrib as impossible.
The old api is found in version 0.3.0 which uses a different release
name, so the url function was updated to properly find the older
releases. Also, this removes the boost constraint on the 0.3.0 version
which does not need it.
* meson: Add 0.54.0
This change also improves the rpath patch we are using. Instead of never
removing the rpath, we now only keep it if running within Spack to
guarantee consistent behavior of Meson.
* meson: Make ninja a hard dependency
* Update doxygen package
* Add new version releases 1.8.16 and 1.8.17
* Add mscgen as an optional dependency.
* Update doxygen package.py
Fix typo in mscgen dependency specification
* Remove whitespace for flake8
* primer3: move to github, add 2.5.0, fix 2.3.7
- The Primer3 project moved to GitHub.
- update the URL
- compare the tarballs 2.3.7 from Sourceforge and github, no
significant differences (e.g. the Sourceforge tarball contained a
couple of "tmp" files).
- update the signature for the 2.3.7 tarball.
- @2.3.7 doesn't build with gcc@8.4.0, there's a dubious pointer/int
comparison that causes an error. It was fixed upstream in newer
versions, apply simple patch to this version so that it continues to
be build-able with newer compilers. See:
- https://github.com/primer3-org/primer3/issues/2
- https://github.com/primer3-org/primer3/issues/3
- Add info for @2.5.0, which builds cleanly.
* Flake8 cleanup
* [gtk-doc] created template
* [gtk-doc] using custom url to standardize on dotted version
* [gtk-doc] added description and homepage
* [gtk-doc] added dependencies and added pdf variant
* [gtk-doc] commented out pdf variant
* [gtk-doc] cleaned up leftover fixmes
* [gtk-doc] flake8
* [gtk-doc] readded url
* [gtk-doc] python packages are build and run dependencies
* py-torch: Fix v1.4.0 by adding v1.4.1
version/tag hack so that 1.4.0 is still workable.
see pytorch/pytorch#35149
* delete/disable fbgemm support in 1.4.0
* moved conflict
* Skip collection of compiler link paths if compiler does not define a verbose flag
* modules config bug: allow user to configure a compiler without an explicit entry for loaded modules
Newer versions of glib require Meson, so this PR adds support for that
using a hybrid approach. glib@5.28: will be built using Meson, older
versions still make use of Autotools.
The most recent change to the openjdk package set expand=False for all versions
of the package. This means only the unexpanded archive will be installed, which is not correct.
* Update dependencies and support variant for Fortran Intermediate Representation.
* Add Cmake flags that toggle Fortran Intermediate Representation on/off. Exclude Flang tests for now.
* f18+fir variant needs next release of llvm or master.
* Only build tests if you are pass in --test to spack install
* New package: py-tensorboard
* some basic dependencies based on requirements.txt
remove the older version that doesn't build
* requested changes
* add additional dependencies
* more dependency changes
* py-onnx: only use py-typing if python < 3.5
avoids a 'Callable has no attribute __abc_registry' error. See
onnx/onnx#2199 and python/typing#573
* add version 1.6.0 while we're here.
Add new version 4.5.0 and update quarantine variable list to also
include LD_PRELOAD in addition to LD_LIBRARY_PATH (to avoid side-effect
when the first depends on the latter).
- Add info for version 0.68.3
- Add variant to package that enables the "extended" features. It
works with both of the existing versions (not sure how far back it
goes, but confirmed that it's valid fro 0.53).
Tested on OS X with go@1.14.1.
* Add ACTS v0.21
* Update var/spack/repos/builtin/packages/acts-core/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* slepc: updates for @devel
- +arpack now works with int64
- +blopex add conflict with int64
- switch to using --with-arpack-lib [from --with-arpack-lib] with current slepc
- use updated blopex with current slepc
* slepc: conflict with blopex should be in all versions
* slepc: add new downloads
* slepc: add whitespace around operator
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
* New package(s): py-pydeps and py-stdlib-list
* requested changes
* Update var/spack/repos/builtin/packages/py-stdlib-list/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cryptsetup: restrict the version of automake if @2.2.1
fixes#15706
* Update var/spack/repos/builtin/packages/cryptsetup/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add capability for detecting build number for Arm compilers
* Fixing fleck8 errors and updating test_arm_version_detection function for more detailed Arm compielr version detection
* Ran flake8 locally and corrected errors
* Altering Arm compielr version check to remove else clause and be more consistent with other compielr version checks. Added test case so both the 'if' and 'else' conditionals of the Arm compiler version check have a test case
Co-authored-by: EC2 Default User <ec2-user@ip-172-31-7-135.us-east-2.compute.internal>
* Fixed building coreutils on Darwin
* Bump nano version to 4.9
* Coreutils: Add program prefix g so we don't conflict with Apple utilities
* Fix intendation
* Make format more spack like
* Removed unnecesary changes
* Merge branch 'develop' of github.com:DiegoMagdaleno/spack into develop
Fix linking libgit2 on Darwin
* Revert "Merge pull request #3 from spack/develop"
This reverts commit 58dbbdb82b, reversing
changes made to dd7a413f48.
* Revert "Revert "Merge pull request #3 from spack/develop""
This reverts commit f956aa7b13.
* Revert "Merge branch 'develop' of github.com:DiegoMagdaleno/spack into develop"
This reverts commit 50321f7986.
spack.util.environment_after_sourcing_files compares the local
environment against a shell environment after having sourced a
file; but this ends up including the default shell profile and
rc, which might differ from the local environment.
To change this, compare against the default shell environment,
expressed here as 'source /dev/null'.
* only override spec prefix for non-external packages
* add test that environment shell modifications respect explicitly-specified prefixes for external packages
* add clarifying comment
* update version: intel packages daal, ipp, mkl-dnn, mkl, mpi, parallel-studio, pin, tbb and makes url parameter consistent and always use single quote.
* Fixes a typo with one of the sha256 checksum..
* Adds version entries for new versions of Intel packages.
Co-authored-by: Robert Mijakovic <robert.mijakovic@lrz.de>
* Patching unqlite to be able to build a shared library
* Correcting a whitespace for flake8 to pass
* added comment about PR on unqlite
* extra commit to force github to merge
* Update flit package to v2.1.0 and add dependencies
* flit: comment out bash dependency
The host system should have bash available and compiling bash through
spack failed for me. I'm not sure if binutils and coreutils should
be listed as dependencies as well.
* Add new version of py-pyelftools
* py-pyelftools: add py-setuptools as a build dependency
* Address review comments
HDF5 1.12 broke backward compatibility, so we're preferring version 1.10
for now. Packages that need the new API should specify:
depends_on("hdf5@1.12:")
to be explicit. We can eventually change the preference, but at the
moment most libraries have not udpated to use the new HDF5.
* Added v3 of Laghos
Added v3 of Laghos as per
https://github.com/CEED/Laghos/blob/v3.0/README.md
* Update var/spack/repos/builtin/packages/laghos/package.py
Changed develop->master as per PR
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Made Metis Dependency Explicit
Added explicit metis dependency
* Folded @develop Laghos Deps in to @3.0:
Theoretically there will be a difference between develop and 3.0: in the
future but currently there is not
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add myself as a maintainer
* This was a regression that occured in previous PR. Flang has been excised from LLVM for now until f18 is merged upstream.
* Libraries only needed when a GPU backend is present.
* Add version 18-08-9-1
* Add variant to allow setting the sysconfdir: See below
About sysconfdir:
slurm has a server and a client.
To use the correct communication channel, the client needs
to be able to read the correct config. This config is in
PREFIX/etc.
Let's assume one has the server part installed as a system
package. This generally is a good idea, so that the server
gets started during boot. This means, that the config is
in /etc/slurm.
If one now wants to use the client part (library!) via
spack, one has a problem: spack's slurm looks in
SPACK-PACKAGE-PREFIX/etc for the config.
There needs to be a way to let the spack installed package
use the system's config.
So add a variant to override the path during build:
sysconfdir=/etc/slurm.
This is much like what happened in #15307 for munge.
If a user invoked "spack env activate example-henv", Spack would
mistakenly interpret the "-h" from "example-henv" as the "-h" option.
This commit allows users to create and activate environments with
"-h" in the name.
This issue existed for bash shell support as well as csh support, and
this commit addresses both, along with some other unrelated csh
support issues.
Currently, to force Spack to use an external MPI, you have to specify `buildable: False`
for every MPI provider in Spack in your packages.yaml file. This is both tedious and
fragile, as new MPI providers can be added and break your workflow when you do a
git pull.
This PR allows you to specify an entire virtual dependency as non-buildable, and
specify particular implementations to be built:
```
packages:
all:
providers:
mpi: [mpich]
mpi:
buildable: false
paths:
mpich@3.2 %gcc@7.3.0: /usr/packages/mpich-3.2-gcc-7.3.0
```
will force all Spack builds to use the specified `mpich` install.
* geant4: new version 10.6 plus simplifications
Add new 10.6.0 release, migrating download of source to use Geant4's
public release repo on CERN GitLab. Change versioning scheme to use
clearer and standard semantic scheme.
Update geant4-data and g4XXX data packages with new versions. Migrate
geant4-data as a BundlePackage of the g4XXX packages, installing links
to each under a single directory under share for geant4-data. Ensure
each g4XXX package exports the environment variable pointing to its
location expected by Geant4.
Remove "data" variant from Geant4 package and always use geant4-data.
Simplify cxxstd variant transport to dependencies.
* g4<DATA>: Use self to resolve correct prefix
* geant4, data: Fix flake8 errors
* g4photonevaporation: flake8 fix
* geant4: vecgeom version depends_on
Geant4 major.minor versions have specific dependencies on vecgeom
versions. Add missing vecgeom version for geant4 10.5, and match
version requirements for vecgeom in geant4 depends_on.
* geant4: c++17 patch specific for 10.4.3
* geant4: simplify geant4-data setup
* geant4: Use new define_from_variant function
* geant4: fix flake8 errors
* helics: add new package
* Remove FIXME boilerplate
* Use open @master: verison range for git dependency and remove mpi fix branch version
* Add blank line after spack import
* py-onnx: depends on cmake >= 3.1
* Update var/spack/repos/builtin/packages/py-onnx/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Aded Option to Disable Shared Lua library
Added option to disable generation of shared object library for lua to
avoid build issues on static only platforms
* Fixed Flake8 Issue with Lua Spackage
Fixed indentation issue with lua spackage
The current implementation of `spack-python` will leave an extra shell
around while it runs. That shell should really replace itself with
spack.
- [x] add exec to spack-python script
* Add initial attempt at intel-mpi-benchmarks package
* Add more checksummed versions
* Changes to how makefile is handled
* First working install version. Needs tuning to support building specific benchmarks
* Add variant for building specific benchmarks rather than all of them
* Minor syntax change
* New package: gdrcopy
provides the userspace libraries for gdrcopy.
* Update var/spack/repos/builtin/packages/gdrcopy/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* XIOS: add new versions
Patch has been removed because it was not applied to any previously
existing versions and it actually breaks the new versions added by this
PR.
* Sort versions from newest to oldest
This allows the llvm build to support:
* clang cuda
* libomptarget for:
* current host
* cuda
* bitcode compilation of libomptarget device runtime for inlining by
bootstrapping libomptarget
* split dwarf information support as an option for debug builds, if you need a
debug build, for the love of all that's good in the universe use this flag
* adds necessary dependencies for shared library builds and libomp and
libomp target to build correctly
* new version of z3 to make it sufficient to build recent llvm
The actual change is much smaller than the diff, this is because it's been formatted with black. I realize this kinda sucks right now, but I'm hoping it will make future updates here less painful.
the gcc package.py includes patches for a sanitizer related bug that look
like they've been fixed in gcc 8.4.0, which caused `spack install` to fail.
This PR excludes patching gcc >= 8.4.0 and < 9.0.0.
* py-statsmodels: update to 0.102 and fix dependencies
* Update var/spack/repos/builtin/packages/py-statsmodels/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update URL for new py2neo versions
* Use pypi for py-py2neo
* Add version 4.3.0
* Update py2neo dependencies
* Apply suggestions from code review
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* NETCDF: Remove maxdims maxvars variant
I'm not sure of the correct protocol to do this, so decided to make a stab and hopefully it works or I'm told the correct way...
The `maxdims` and `maxvars` variants for the NetCDF package were, to the best of my knowledge, only ever used for the Exodus library in the SEACAS package. In versions of NetCDF prior to 4.4.0, Exodus required that the `NC_MAX_DIMS` and `NC_MAX_VARS` be increased over the default values. This requirement was removed in 4.4.0 and later.
I do not know of any way to make a variant depend on the version and since the `maxdims` and `maxvars` variants are integer values and not boolean, then every build of NetCDF will have these variants. Typically `maxdims=1024 maxvars=8192` and the build will patch the `netcdf.h` include file for every build even though it is (almost) never needed.
The SEACAS package has a NetCDF version requirement of >4.6.2, so it no longer specifies the `maxdims` or `maxvars` variant and I could find no other package in spack that uses this variant either, so removal should not break anything *in* spack. However, there is no guarantee that some other external package doesn't use the variant, so I'm not sure of the correct way to remove the variant.
For this PR, I simply removed the variants. If there is a way to specify use of the variant tied to a specific version, I couldn't find it anywhere...
* Address review comment
Removed `is_integral` and `import numbers` since `is_integral` was only place it was used.
* Add blank line for flake8
* magma now extends CudaPackage class, taking care of the gcc conflicts
* enforce +cuda; thus cuda is dependency via CudaPackage class
* add conflict
* use cuda_arch to set GPU_TARGET build option
* get rid of unnecessary constraint
* flake8
* impose cuda version dependency found empirically
* add variant description
* add conflict
Co-authored-by: Sinan81 <Sinan81@github>
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
* build python bindings within qscintilla package via extend_path trick
* add todo
* reflect new setup also in py-pyqt4 package
* get rid of qscintilla dependency
* also tweak qgis for the new setup
* generalize the building of python bindings
* generalize building of pythong bindings to all qt versions
* add qsci_api variant
* add qsci_variant for pyqt4 package as well; add comment
* pyqt dependency should build with +qsci_api variant enabled
* fix bugs
* improve style
* reflect recent changes
* flake8
* improve style
* more flake8
* more flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
* Add some comments explaining the choice of flag_handler.
* Fix QMCPACK install method.
* Add support for ppconvert. This requires a custom build method.
* Fix QMCPACK setup_run_environment. Nexus should be properly supported now.
* Cleaner way to check for intel-mkl in spec.
* Remove build method and use build_targets property instead.
* Additional fixed for install method. Effectively restoring the original install method.
* Add the missing backslash to fix directory names.
* Update var/spack/repos/builtin/packages/qmcpack/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qmcpack/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qmcpack/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Omit these conflicts on mkl variants for now, will hopefully be supportted with new concretizer in a couple of months.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
intel moved the repository around.
github changes the prefix inside the tar according to the
repository name.
So all sha256 have changed!
I verified that the tar contents for 2019.4 did not change
except for the prefix.
* TensorFlow: Clean up/simplify the installation, make sure the headers are
installed so that horovod can find them successfully. Fix the 2.0.* builds.
* Backport of 837c8b6b upstream
"Remove contrib cloud bigtable and storage ops/kernels."
Allows 2.0.* releases to build with '--config=nogcp'
* comment regarding tensorflow issue #31187
Co-authored-by: Andrew W Elble <aweits@skl-a-00.rc.rit.edu>
## Summary
This PR updates and improves the Spack package for [UPC++](https://upcxx.lbl.gov).
I'm an LBL employee and developer on the UPC++ team, as well as the maintainer of this Spack package.
### Key Improvements:
* Adding new 2020.3.0 release and support for use of develop/master branches
- Our build infrastructure underwent a major change in this release, switching from a hand-rolled Python2 script to a bash-based autoconf work-alike.
- The new build system is NOT using autotools (nor does it support some of the more esoteric autoconf options), but the user interface for common builds is similar.
* Add explicit support for an MPI optional dependency
- New `mpi` variant enables use of the MPI-based spawner (most relevant on loosely coupled clusters), and the (unofficial) mpi-conduit backend
- This variant is OFF by default, since UPC++ works fine without MPI on many systems, increasing the likelihood first-time Spack users get a working build without needing to correctly setup MPI
* Add support for post-install testing using the test support deployed in the new build infrastructure
* Fix or workaround a few bugs observed during testing
### Status
The new package has been validated with a variety of specs across over seven different systems, including: NERSC cori, ALCF Theta, OLCF Summit, an in-house Linux cluster, and macOS laptops (Mojave and Catalina).
Expose serial/parallel build (MPI), CUDA/OpenMP backends, Clang, and
Ascent bindings.
Interestingly, `warpx +ascent` currently leads to an infinite loop in
the Spack concretizer.
Removed provider_index use of 'import from' and refactored a few routines to a further subclassing of _IndexBase for implementing user defined bindings of provider specs.
Git-based conditions database for HEP and other experiments.
Use latest release version and current master to support Linux and
macOS. Add core known dependencies and conflicts related to C++17
support.
cxxstd variant added to help transitive dependencies, and for future
support for newer standards in future.
* relocate: removed import from statements
* relocate: renamed *Exception to *Error
This aims at consistency in naming with both
the standard library (ValueError, AttributeError,
etc.) and other errors in 'spack.error'.
Improved existing docstrings
* relocate: simplified search function by un-nesting conditionals
The search function that searches for patchelf has been
refactored to remove deeply nested conditionals.
Extended docstring.
* relocate: removed a condition specific to unit tests
* relocate: added test for _patchelf
Our unit tests run many times. Any unit test which actually installs
a package (which involves fetching code on the internet) is a severe
bug because it runs an installation many times (i.e. re-downloading
the same package for each version of Python that we run unit tests
for).
This reverts commit 25893f1, which added tests that install real
packages.
If the Python used by Spack does not include Setuptools, then
'spack test' will fail because Spack's vendored pytest dependency
imports and uses Setuptools in some of its functions. It turns out
that Spack doesn't use the functionality those methods enable, so
this PR removes those functions and thereby allows 'spack test' to
run without Setuptools.
For any Spack test using Spack's YAML configuration, avoid using real
Spack configuration that has been cached by other tests and Spack
startup logic. Previously this was only done for tests using
'mutable_config' (i.e. those which expected to *change* the
configuration of Spack), but in fact all tests that read Spack config
should use it.
This was an issue when running tests within an environment, because
compiler configuration ends up being queried earlier, and the user's
real config "leaks" into the cache. Outside an environment, the cache
is never set until tests touch it, so we weren't seeing this issue.
`spack test` has a spurious '[+] ' in the output:
```
lib/spack/spack/test/install.py .........[+] ......
```
Output is properly suppressed:
```
lib/spack/spack/test/install.py ...............
```
Update WarpX for recent developments:
- add openPMD I/O (default: ON)
- remove electro-static solver option (now a runtime option)
- enable tiny profiler by default
- depend on new CXX std support in make scripts for C++14
- WarpX only supports 2D and 3D in cartesian dims
* davix: add cxxstd variant
Davix is written in C++, so add this variant to allow dependents to specify
this so a consistent ABI is used.
* davix: fix flake8 errors
Reading the database repeatedly can be quite slow. We need a way to speed
up Spack when it reads the DB multiple times, but the DB has not been
modified between reads (which is nearly all the time).
- [x] Add a file containing a unique uuid that is regenerated at database
write time. Use this uuid to suppress re-parsing the database
contents if we know a previous uuid and the uuid has not changed.
- [x] Fix mutable_database fixture so that it resets the last seen
verifier when it resets.
- [x] Enable not rereading the database immediately after a write. Make
the tests reset the last seen verifier in between tests that use the
database fixture.
- [x] make presence of uuid module optional
* Add gmsh v4.5.4 with new options
This adds OpenCASCADE as an alternative to the oce package.
A new variant 'privateapi' is added to enable the gmsh private API.
* Make oce conflict with opencascade in gmsh
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded.
This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded.
This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
This makes sure that a package's fetch_options are used when fetching
new versions to checksum. This allows working around problems with
slow servers or those requiring a cookie to be set.
Bug: Spack hangs on some Cray machines
Reason: The TERM environment variable is necessary to run bash -lc "echo $CRAY_CPU_TARGET", but we run that command within env -i, which wipes the environment.
Fix: Manually forward the TERM environment variable to env -i /bin/bash -lc "echo $CRAY_CPU_TARGET"
When trying to use an upstream Spack repository, as of f2aca86 Spack
was attempting to write to the upstream DB based on a new metadata
directory added in that commit. Upstream DBs are read-only, so this
should not occur.
This adds a check to prevent Spack from writing to the upstream DB
fixes#15449
Before this PR a call to pkg.url_for_version was modifying
class attributes determining different results for subsequents
calls and an error when the urls was empty.
This recovers the old behavior of replace_prefix_bin that was
modified to work with elf binaries by prefixing os.sep to new prefix
until length is the same as old prefix.
Testing the install StopIteration exception resulted in an attribute error:
AttributeError: 'StopIteration' object has no attribute 'message'
This PR adds a unit test and resolves that error.
The new build process, introduced in #13100 , relies on a spec's dependents in addition to their dependencies. Loading a spec from a yaml file was not initializing the dependents.
- [x] populate dependents when loading from yaml
The distributed build PR (#13100) -- did not check the install status of dependencies when using the `--only package` option so would refuse to install a package with the claim that it had uninstalled dependencies whether that was the case or not.
- [x] add install status checks for the `--only package` case.
- [x] add initial set of tests
This change stores packages' configure arguments during build and makes
use of them while refreshing module files. This fixes problems such as in
#10716.
* Add version 6.20.{00,02}, don't yet mark it preferred
* It needs zstd
* It needs numpy (at least for 6.20.00:6.20.03)
* Reorder python dependencies a bit
* Add mlp variant, default False
Older versions always include mlp, so no conflicts there.
* Disable tmva, because it needs mlp
* tmva needs mlp, so add conflict
* Add sources and resources for each version of Rust
* install bootstrapping compiler into stage
* Add libgit2
* Install bootstrapping compiler correctly
* implement full rust bootstrap
* Remove support for Rust pre-1.14
Also add lots of comments
* Support only Rust 1.17 and newer
* Remove < 1.23 versions of Rust
* Change the layout of rust_releases for maintainability
* Remove LLVM variant
* Address flake8 issues
* Make libgit2 curl variant default False, conflict 0.28 and newer
* Remove binutils dependency
* Add ARM64 while we're at it
* flake8
* use the 'python' routine rather than relying on the correct python to be picked up
Bug: Spack hangs on some Cray machines
Reason: The TERM environment variable is necessary to run bash -lc "echo $CRAY_CPU_TARGET", but we run that command within env -i, which wipes the environment.
Fix: Manually forward the TERM environment variable to env -i /bin/bash -lc "echo $CRAY_CPU_TARGET"
* petsc: add checksum for 3.12.4
* petsc: constrain hdf5 to <= 1.10.x
Current petsc will error when being build with hdf5 1.12, so this ensures that
a compatible hdf5 will be used. Fix suggested by @balay.
- [x] move some logic for handling virtual packages from the `spack
dependencies` command into `spack.package.possible_dependencies()`
- [x] rework possible dependencies tests so that expected and actual
output are on the left/right respectively
Hpctoolkit master was recently updated to test for and allow old
binutils <= 2.33.1 and/or new binutils 2.34. Older hpctoolkit up to
2030.03.01 will forever require :2.33.1.
Adjust the libunwind dependency for safety with the current
concretizer.
Fabtests provides runtime analysis tools and examples of libfabric.
As with other projects that are tightly version-bound, e.g.
`py-adios` and `adios`, the fact that releases stem from the same
repo does not imply they should be the same package.
Remove resources, which complicate the libfabric build, and update
the fabtests package accordingly.
When trying to use an upstream Spack repository, as of f2aca86 Spack
was attempting to write to the upstream DB based on a new metadata
directory added in that commit. Upstream DBs are read-only, so this
should not occur.
This adds a check to prevent Spack from writing to the upstream DB
* Trilinos: Add more variants
+ Provide three new variants to allow building trilinos without netcdf, matio,
or glm.
+ No change to defaults.
* Fix style issue.
* py-tuiview: Source has moved to github
* py-tuiview: Explicitly require +python on gdal dependency
* py-tuiview: Versions up to 1.1.99 are qt4 only
* py-tuiview: Add version 1.2.6, which is qt5 only
* Explicit version range on gdal dependency
* adol-c:updating sources location
* Update var/spack/repos/builtin/packages/adol-c/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* try extend path to solve PyQt5.sip not found issue
* disable private sip installation in sippackage class
* undo manual PyQt5 dir creation in py-sip site-packages dir
* fix typo
* fix typo
* also apply fix to PyQt4
* tidy up
* flake8 and tidy up
* tidy and undo hardcoding of python_include_dir
* replace hardcoded python inc dir
* fix minor issues
* rethink include dir variable name
* improve style
* add new versions
* implement new sip setup to qsci installation
* set sip-incdir correctly for the new setup
* setup extend_path thing before qsci python bindings
* take care of conflict
* flake8
* also extend for PyQt4
* improve style
* improve style
* SipPackage build sys should depend on py-sip
* consolidate extend_path fixes into SipPackage
* fix typo
* fix bugs
* flake8
* revert sip doc to pre-resource setup
* import os module
* flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
* CGNS: Version update
The CGNS library has several new versions. There was a format-changing change in 3.4.0 which was removed for 3.4.1. The change was then added again and released with a change to the major version (4.0.0). Note that 4.0.0 should be close to the functionality of 3.4.0.
* CGNS: Add shared variant
Added the `shared`variant to make the CMake build correctly pick up the RPATH settings.
* Added a package for the DiHydrogen distributed linear algebra library.
* Updated recipe to provide cuda architecture constaints.
* Addressed reviewer comments
* Fixed flake 8
* Previous qt changes broke the openspeedshop gui build. This puts back the changes that caused the breakage.
* Update the qt version to be more robust.
Co-authored-by: Galarowicz, James <jgalarowicz@newmexicoconsortium.org>
* Add patches when building with NAG
* Make libxml2 support optional. Also include conflict for
@:3.2~hydra+libxml2 since @:3.2~hydra does not require libxml2
support
* Add '--disable-silent-rules' to get more verbose output during
the build
* Implemented working file filtering to replace spack compiler wrapper with real compiler.
* Using string=True instead of re.escape. Using self.prefix.lib instead of appending /lib.
Co-authored-by: Wyatt Spear <wspear@cs.uoregon.edu>
* Add new vecgeom versions, add cuda support, automate target options
* Add ROOT, GDML, and external VecCore support to VecGeom
* Address reviewer comments
* Update vecgeom for CUDA
* Update versions
fixes#11555
Every path in CPATH is equivalent to a -I path to the compiler,
while every path in *_INCLUDE_PATH is equivalent to -isystem.
The latter avoids the noise due to warnings coming from 3rd party
libraries that a project depends on.
Added INCLUDE env variable (Intel Fortran, .mod files)
Add a 'define_from_variant` helper function to CMake-based Spack
packages to convert package variants into CMake arguments. For
example:
args.append('-DFOO=%s' % ('ON' if '+foo' in self.spec else 'OFF'))
can be replaced with:
args.append(self.define_from_variant('foo'))
The following conversions are handled automatically:
* Flag variants will be converted to CMake booleans
* Multivalued variants will be converted to semicolon-separated strings
* Other variant values are converted to CMake string arguments
This also adds a 'define' helper method to convert any variable to
a CMake argument. It has the same conversion rules as
'define_from_variant' (but operates directly on values rather than
requiring the user to supply the name of a package variant).
* Buildcache: Install into non-default directory layouts
Store a dictionary mapping of original dependency prefixes to dependency hashes
Use the loaded spec to grab the new dependency prefixes in the new directory layout.
Map the original dependency prefixes to the new dependency prefixes using the dependency hashes.
Use the dependency prefixes map to replace original rpaths with new rpaths preserving the order.
For mach-o binaries, use the dependency prefixes map to replace the dependency library entires for libraries and executables and the replace the library id for libraries.
On Linux, patchelf is used to replace the rpaths of elf binaries.
On macOS, install_name_tool is used to replace the rpaths and dependency libraries of mach-o binaries and the id of mach-o libraries.
On Linux, macholib is used to replace the dependency libraries of mach-o binaries and the id of mach-o libraries.
Binary text with padding replacement is attempted for all binaries for the following paths:
spack layout root
spack prefix
sbang script location
dependency prefixes
package prefix
Text replacement is attempted for all text files using the paths above.
Symbolic links to the absolute path of the package install prefix are replaced, all others produce warnings.
PR #15212 added a new connect_timeout option that can be overridden
using fetch_options but had to specified per-version. This adds a new
per-package variable that can be used to override fetch_options for
all versions in the package. This includes connect_timeout as well
as 'cookie' (e.g. for the jdk package).
Packages can combine package-level fetch_options with per-version
fetch_options, in which case the version fetch_options completely
override the package-level fetch_options.
This commit includes tests for the added behavior.
* Change py-merlinwf to py-merlin to match PyPi.
Change py-merlin to py-merlin-info.
Move to py-merlin_info.
Add py-merlin-info back in.
* Update dependent packages for the new merlin name.
* Remove non-working pyre and the associated packages, exchanger,
py-pythia and py-mlerin-info from citcoms.
* Remove blank line.
fixes#15449
Before this PR a call to pkg.url_for_version was modifying
class attributes determining different results for subsequents
calls and an error when the urls was empty.
* scr: add develop, legacy branches; version 2.0.0
squash! scr: add develop and legacy versions
* filo: package for SCR component
* spath: package for SCR component
* axl: update for versions 0.3 and 0.2
* scr: build with components
* spath: structure of +mpi if/else
* 👌 capitalization of ecp-veloc
* scr: branches are always greater than any version
* Parallel-netcdf: update package.
* Add a temporary patch for version 'develop'.
* Rename version 'develop' to 'master'.
* Drop the patch for 'master'.
includes fixes for likwid-mpirun, better support for ARM and POWER,
other bugfixes.
For full support of ARM and POWER, #14183 has to be merged, too.
Added TomTheBear as maintainer. He is the current main developer of
LIKWID.
* new package: GunRock
* Update var/spack/repos/builtin/packages/gunrock/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* improve
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@github>
* add --skip-unstable-versions option to 'spack mirror create' which skips sources/resource for packages if their version is not stable (i.e. if they are the head of a git branch rather than a fixed commit)
* '--skip-unstable-versions' should skip all VCS sources/resources, not just those which are not cachable
* New package: bat
Add package for bat:
A cat(1) clone with wings.
* Update copyright date
* Embiggen comment re build env settings
Provide a bit more explanatory text about why setup_build_environment
needs to set LLVM_CONFIG_PATH and LIBCLANG_PATH.
Co-authored-by: George Hartzell <ghartzell@audentestx.com>
* eigen: updated url to point to gitlab
fixes#13890
Eigen migrated from bitbucket to gitlab
* eigen: simplified package (no dependencies other than stdlib)
* Added TODO list for future improvements
* libunwind: remove version 2018.10.12, add stable branch
Finish cleaning up the libunwind version numbers. The 2018.10.12
snapshot number didn't fit well with spack's ordering (my bad), and
1.4-rc1 is a near identical replacement.
Add a version for the 1.4-stable branch.
Add a variant for zlib compressed symbol tables (develop branch only).
Adjust packages caliper and hpctoolkit to adapt to the changes.
Add myself as maintainer.
* Flake
* Settle on renaming 'develop' to 'master' (to match the branch name)
and name the 'v1.4-stable' branch as '1.4-head'. 'stable' or
'1.4-stable' is a better name, but '1.4-head' (an infinity version)
sorts better.
* explicitly link against libtinfo
* Update to v15.9
* fixed indentation
* fixed url definition
* added url vor current version again
* fixed indentation
* moving url_version to the bottom
* Add version 0.5.14
* Add variant to allow setting the localstatedir: See below
* Add bzip2 dependency
* Add myself to maintainers (I just think, I can care for
this package)
About localstatedir:
munge has a server and a client.
They communicate via unix domain sockets.
This socket is in PREFIX/var.
This package provides the client, the server, and
development part (headers, libraries).
Let's assume one has the server part installed as a system
package. This generally is a good idea, so that the server
gets started during boot. This means, that the socket is in
the system's /var.
If one now wants to use the client part (library!) via
spack, one has a problem: spack's munge looks in
SPACK-PACKAGE-PREFIX/var for the socket.
There needs to be a way to let the spack installed package
use the system's socket.
So add a variant to override the path during build:
localstatedir=/var.
Allows spack.config InternalConfigScope and Configuration.set() to
handle keys with trailing ':' to indicate replacement vs merge
behavior with respect to lower priority scopes.
Lists may now be replaced rather than merged (this behavior was
previously only available for dictionaries).
This commit adds tests for the new behavior.
* Add Checksum for 1.4.1.4 Release
Add checksum for 1.4.1.4 release. Mark myself as maintainer. List develop version.
* Update var/spack/repos/builtin/packages/libhio/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
It was reported that UnifyFS had a bug with the --enable-mpi-mount
config option, which corresponds to the auto-mount variant. This bug
was fixed in the UnifyFS dev branch, however remains broken for the
0.9.0 version.
This adds a patch to the unifyfs package to fix the auto-mount
variant when installing with version 0.9.0.
This also removes the openssl dependency as unifyfs does not directly
depend on it. This was said to be a non-explicit dependency in #15258.
However, if it is needed, it is likely a non-explicit dependency of
one of unifyfs's dependencies and should be added there.
Fixes: #15292
* lammps: add most recent stable release
* Update var/spack/repos/builtin/packages/lammps/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Add package for ripgrep:
ripgrep is a line-oriented search tool that recursively searches
your current directory for a regex pattern. ripgrep is similar to
other popular search tools like The Silver Searcher, ack and grep.
* [opensubdiv] created stub for opensubdiv
* [opensubdiv] added homepage and description
* [opensubdiv] removed boilerplate
* [opensubdiv] working on dependencies and variants
* [opensubdiv] fixed syntax error
* [opensubdiv] defined spec
* [opensubdiv] added dev version
* [opensubdiv] building on CudaPackage
* [opensubdiv] always build with open gl and depends_on('gl')
* [opensubdiv] applying cuda flags
* [opensubdiv] worked on doc variant
* [opensubdiv] added some x11 libraries
* [opensubdiv] depends on glfw
* [opensubdiv] locating glew
* [opensubdiv] added openmp variant
* [opensubdiv] flake8 fixing
* [opensubdiv] fixed develop version name
* [opensubdiv] fixed description to not need @
Testing the install StopIteration exception resulted in an attribute error:
AttributeError: 'StopIteration' object has no attribute 'message'
This PR adds a unit test and resolves that error.
* ADIOS2: Version `master`
Rename branch version to supported, real development branch `master`.
The old name is legacy Spack when alternative development branch
names were not yet supported.
* ADIOS: Simplify via spec Variable
use the already defined local variable `spec` to shorten
lines
This recovers the old behavior of replace_prefix_bin that was
modified to work with elf binaries by prefixing os.sep to new prefix
until length is the same as old prefix.
Since 1.9.0 is broken for Darwin, which impacts many developers, and
the fix is still a RC, let's keep the previous release as default.
This avoids distruption for OSX developers and CI.
Removed the code that was converting the old index.yaml format into
index.json. Since the change happened in #2189 it should be
considered safe to drop this (untested) code.
Spack's fflags are meant for both f77 and fc. Therefore, they must
be passed as FFLAGS and FCFLAGS to the configure scripts of
Autotools-based packages.
* @develop needs the full git repo to use "git describe"
properly
* If not specifying the cxxstd variant, let cmake use its
default
* Improve fmt dependencies: fairlogger < 1.6.2 does not
work with fmt >= 6.
* Small other stuff
Just `depend_on('ncurses')`, pkgconfig seems to cause the right thing
to happen if the package is built +termlib or ~termlib (tested both
ways on a CentOS 7 system).
Additional details in: https://github.com/spack/spack/issues/15281
Tested by building all of the releases of tmux on a CentOS 7 box.
The distributed build PR (#13100) -- did not check the install status of dependencies when using the `--only package` option so would refuse to install a package with the claim that it had uninstalled dependencies whether that was the case or not.
- [x] add install status checks for the `--only package` case.
- [x] add initial set of tests
* add preliminary afqmc support in qmcpack
* afqmc updates
* fix spack typos
* edit AFQMC to only allow up 3.7 or above
* added NCCL library support for AFQMC build
* add CMAKE args for BUILD_AFQMC_WITH_NCCL
* update for just AFQMC support. No AFQMC+GPU support
* remove nccl for afqmc
* flake8 whitespace fix
* Update flag_handler for 'netcdf-fortran'.
* Refactoring.
* Enable old versions of netcdf-fortran for NAG.
* Disable parallel 'make check' for versions before 4.5.0.
* Fix shared libraries built with NAG instead of conflicting it.
* Add 'skosukhin' as a maintainer of 'netcdf-fortran'.
This PR adds some fixes for the tcsh package.
- Adds new version
- adds list_url so fetching works for current and old tarballs
- sets ncurses dependency explicitly to `ncurses+termlib`
If `+termlib` is not set then it will link against the system libtinfo.
* Quantum-Espresso: qe-6.5 fails to detect MKL for FFT
qe-6.5 fails to detect MKL for FFT if BLAS_LIBS is set due to
an unfortunate upsteam change in their autoconf/configure:
- qe-6.5/install/m4/x_ac_qe_blas.m4 only sets 'have_blas'
but no 'have_mkl' if BLAS_LIBS is set (which seems to be o.k.)
- however, qe-6.5/install/m4/x_ac_qe_fft.m4 in 6.5 unfortunately
relies on x_ac_qe_blas.m4 to detect MKL and set 'have_mkl'
- qe-5.4 up to 6.4.1 had a different logic and worked fine with
BLAS_LIBS being set
However, MKL is correctly picked up by qe-6.5 for BLAS and FFT if
MKLROOT is set (which SPACK does automatically for ^intel-mkl).
Thus, do not set BLAS_LIBS when compiling qe-6.5 with intel-mkl.
* replace all '^intel-mkl' by '^mkl' to match other packages which also provide MKL
e.g. intel-parallel-studio+mkl as mentioned by @adamjstewart in #15276
* Filter problematic flag from std_cmake_args
Including '-DCMAKE_VERBOSE_MAKEFILE:BOOL=ON' in the cmake args causes
bits of cmake's output to end up in the autoconf-generated configure
script. See https://github.com/spack/spack/issues/15274.
* Don't build in parallel (sigh)
* Constrain hackery for newer versions...
... to newer versions.
* [WIP] new package: qgis
* add maintainer
* further improvements
* reflect improvements to qscintilla installation paths
* comment out qtkeychain dependency as the package is created
* uncomment qtkeychain dependency, since this package is now created
* a comment on webkit
* specify versions of dependencies, add variant
* fix variant description
* fix proj dependency logic
* adjust conflicts and dependencies so that one can compile qgis@2 with qt4, python2.7
* minor improvement
* fix some build errors, improve dependency specs
* qsci python bindings will be built by py-pyqt
* cmake variable QSCINTILLA_LIBRARY should point to the library itself, not the parent folder
* turn grass off explicitly, fix typo, turn qspatialite off explicitly
* fix typo
* specify more cmake options that don't seem to be set properly, and use spack-provided pkg-config
* fix libzip
* fix build issue with sqlite variant, add runtime dependencies
* add more runtime python package dependencies
* reflect variant name change in sqlite
* add maintainer, correct typo
* add TODO's
* add more versions
* improve style
* add latest versions
* netcdf -> netcdf-c
* add variants as shown in cmake config
* add conflict: v3.8.1 won't build if qt@5.13:
* change preferred version to latest long term release, 3.10.3
* add a zillion of build options
* improve style
* add descriptions for variants
* remove an already-implemented compilation tip
* add "when" statements for optional dependencies
* make flake8 happy
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qgis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* convert conflicts to depends_on
* undo str conversion for path objects
* fix flake8 E131
* fix flake8 E128, E124
* improve style
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Caliper depends on python3.
The package needs to be told where to find it.
* More flake8 formatting edits.
* Change explicit python3 to spec['python'].command.path
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Removing defunct import for flake8
* Flake8 trailing whitespace warning.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR adds 'http://' to the homepage setting of a few packages that do
not have it set. Not having that set can cause problems with some wiki
apps when embedding the homepage value into markdown syntax.
- bowtie2
- exuberant-ctags
- perl-want
- samtools
- add version info for v1.8.0
- v1.8.0 adds a new source file, `file.c`, which needs to be included
in our hardcoded list of objects to link.
I discovered this while demonstrating "how easy it is to add a new
package to Spack", only to scratch my head when it failed `spack
install` but worked when I ran `make` in the stage dir. Finally
looked at tree/package.py and it All Became Clear.
Perhaps someone should rewrite this to use MakefilePackage, but the
Makefile starts off with a bunch of twisty turny "uncomment these
lines to run on this platform", so it might not be worth it.
* Added package py-versioneer
* Update python version
* Added python2 to versioneer
* Added python2 to versioneer
* Update var/spack/repos/builtin/packages/py-versioneer/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Cleaning comments
* Removed temporarily @warner as a maintainer, waiting for answer
* Removed line
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* gromacs: add v2019.6
* Update var/spack/repos/builtin/packages/gromacs/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- Tell make about the source code path
- Install actual header files, not a wrapper with wrong paths
- Add a patch to prevent compiler warnings
- Improve description
* Fixups for jupyter
This PR fixes a few things for some jupyter related packages.
py-ipython:
- make the python depends_on statements reflect needs of different
versions
- remove an unneeded conflicts directive
py-ipywidgets:
- add new version
- set version constraints for py-widgetsnbextension
py-jupyter-console
- add new version
- set python dependencies for versions as needed
- set version constraint for py-ipython
- set version constraints for py-prompt-toolkit
py-pyqt5
- build with py-sip
py-qtconsole
- add dependency on py-pyqt5
* Update var/spack/repos/builtin/packages/py-jupyter-console/package.py
Tweak version ranges.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-jupyter-console/package.py
Tweak version range.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Make py-pyqt5 a run dependency
Also, make formatting more consistent.
* Fix site_packages_dir
Change reference of site_packages_dir to self.site_packages_dir. Oddly,
this did not show up as a problem until I regenerated the module.
* Restore py-pyqt5 to previous state
* Explicitly set path to site_packages_dir
This change prevents an error when regenerating the py-pyqt5 module
file.
* Fix flake8 errors
* Make sure prefix is in join_path
* Fix flake8 errors
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: py-librosa
This PR adds the py-librosa package, along with new dependency packages
and some updates of existing dependency packages.
- new package: py-audioread
- new package: py-resampy
- new package: py-soundfile
- update package: py-numba
- update package: py-llvmlite
py-numba:
- add updated version
- adjust constraints
py-llvmlite:
- add updated version
- adjust constraints
- fix version specifications for llvm
- add environment function to set PIC
* Update var/spack/repos/builtin/packages/py-numba/package.py
Ah, yes, I see that `setuptools` is listed in the `install_requires` array. I missed that before.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Fix dependency
- Add dependency of py-soundfile depends on libsndfile
- Add new libsndfile package
* Add py-pytest-runner build dep
* Make numpy a variant for py-soundfile
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When building gcc7 and gcc8 on RHEL6 with Spack and installing it as
a spack-available compiler, OpenBLAS will fail to compile because GCC
generates newer instructions than rhel6's `as` assembler knows about
(e.g. "vpermpd").
Building gcc8 with binutils succeeds, and it generates a GCC that can
then successfully build OpenBLAS. This is also expected to work for
gcc7 on RHEL6.
* Added version 0.12.0 to fix issue #15218
* Added dependencies specs with compatible versions
* Switched py-scipy dependency to variant (default False)
* Removed variant py-scipy and didn't add py-dask
* Fixed typo: missing '
* Update var/spack/repos/builtin/packages/py-pyfftw/package.py
Fixing typos from version ranges in dependencies.
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
* Update var/spack/repos/builtin/packages/py-pyfftw/package.py
Removed repeating dependency option.
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
* Update var/spack/repos/builtin/packages/py-pyfftw/package.py
Limited version of py-numpy dependency to <2.0.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding an option to build the C API for Umpire.
This is useful if you need to link to C code and you're using
a compiler suite that doesn't support Fortran.
* Also updating the versions while I'm here.
* Adding conflict: Fortran requires C.
To ease transition and confusion, default to C-bindings being
present. This shouldn't hurt anyone who is upgrading an existing
installation.
connect_timeout can be used to increase the time Spack waits for the
server to answer. This can be used to work around slow connections or
servers.
Fixes #14700
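For illustration, the knob can be read through Spack's config API (the default value below is an assumption, not necessarily Spack's shipped default):
```python
import spack.config

# how long, in seconds, to wait for a server before giving up;
# 10 is an assumed fallback value
timeout = spack.config.get('config:connect_timeout', 10)
```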
* CudaPackage: add support for Tesla K80 and older CUDA
* Flake8 fixes
* Fix cuda_arch when no arch is set
* Fine-tune cuda_arch=37,50 supported CUDA versions
* CUDA 6.5+ supports SM_37
* Add @svenevs as a maintainer
The new build process, introduced in #13100, relies on a spec's dependents in addition to its dependencies. Loading a spec from a yaml file was not initializing the dependents.
- [x] populate dependents when loading from yaml
LLVM is the only package that explicitly sets the "termlib" variant
of ncurses and it specifies +termlib. ncurses defaults to ~termlib;
if a package depends on LLVM and ncurses, there is a concretizer bug
that incorrectly detects a constraint conflict (see #267). Setting
+termlib as the default is a stopgap measure to avoid this conflict.
If other packages were to explicitly request ~termlib in the future,
the same issue would come up again (and could not be resolved by
adjusting the default of "termlib").
Setting +termlib on ncurses moves some symbols into a separate
"libtinfo". Not all packages may be able to detect libtinfo properly
so may require an update; vim, samtools, and libedit have been
updated to use ncurses+termlib (in the case of libedit, the only
necessary action was to add a newer version where the build system
was updated to check libtinfo).
* Flake8 OK
* Update var/spack/repos/builtin/packages/py-basis-set-exchange/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-basis-set-exchange/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Added missing dependencies proposed by @adamjstewart
* Without py-versioneer
* Added py-versioneer
* Python2 for bse
* Python build error
* Update var/spack/repos/builtin/packages/py-basis-set-exchange/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Removed py-versioneer, according to @adamjstewart
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Buildcache command: add install option -o/--otherarch
This will allow matching specs from other archs, for example
installing macOS buildcaches on linux hosts.
* spack commands --update-completion
- When qrupdate is compiled with `FFLAGS=-fdefault-integer-8`, it can be used for larger problem dimensions.
- Improved the readability of the file with the added rules.
args.specs is a list, which results in output like this:
```
eval `spack load --sh ['libxml2', 'xz']`
```
We want this instead:
```
eval `spack load --sh libxml2 xz`
```
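The fix amounts to joining the parsed names before interpolating them; a minimal sketch:
```python
specs = ['libxml2', 'xz']                      # stand-in for args.specs
command = 'spack load --sh ' + ' '.join(specs)
print(command)                                 # spack load --sh libxml2 xz
```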
This change stores packages' configure arguments during build and makes
use of them while refreshing module files. This fixes problems such as in
#10716.
* Update py-bx-python package
- add update to py-bx-python
- switch to pypi downloads
- set dependencies
* Update var/spack/repos/builtin/packages/py-bx-python/package.py
I had initially pulled version 0.8.6 and then updated that to 0.8.8 but missed the change in the python specs between those two versions.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Fix version 0.7.4
- set dependency on python2
- add dependency on py-python-lzo
- add py-python-lzo package
- set py-numpy dependency to correspond to latest version that works
with python2
* Add constraint for py-six dependency
* Update var/spack/repos/builtin/packages/py-bx-python/package.py
Ah, I had that `when` clause in and then took it out as it did not seem to be needed. I guess it is always better to be more explicit.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Remove py-numpy constraint
Let the concretizer catch the conflict with python2 and py-numpy
versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixes & additional variant(s) for HepMC3
* Syntax
* Restore recipe for HepMC2
* Remove FIXME
* Update package.py
* Apply suggestions from code review
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Emit a sensible error message if compiler's target is overly specific
fixes #14798, fixes #13733
Compiler specifications require a generic architecture family as
their target. This commit improves the error message that is
displayed to users if they edit compilers.yaml and use an overly
specific name.
* Add extra version of py-jsonschema
* Update dependencies
* Update dependencies + flake8
* Add py-pyrsistent package
* Update package.py
* Update var/spack/repos/builtin/packages/py-jsonschema/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update package.py
* Apply suggestions from code review
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add a variant to QE that suppresses upstream patching. Needed in order to do dependency patching.
* The QE variant often fails to build. Set the variant default to False as a user-friendly change.
* QMCPACK converter patch collides with internal QE patches. Deactivate internal patches when performing dependency patching.
* Clearer description of QE patch variant that is also flake8 compliant.
* Add extra version of py-sqlalchemy
* Update package.py
* Update package.py
* Update package.py
* Update package.py
* Update package.py
* Apply suggestions from code review
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The hashing logic looks for function calls that are Spack directives.
It expects that when a Spack directive is used that it is referenced
directly by name, and that the directive function is not itself
retrieved by calling another function. When the hashing logic
encountered a function call where the function was determined
dynamically, it would fail (attempting to access a name attribute
that does not happen to exist in this case).
This updates the hashing logic to filter out function calls where the
function is determined dynamically when looking for uses of Spack
directives.
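A minimal sketch of that filter using Python's `ast` module (illustrative, not the actual hashing code):
```python
import ast

def named_calls(source):
    # Keep only calls whose callee is a plain name; dynamically determined
    # callees (attributes, subscripts, nested calls) have no .id and are
    # skipped instead of crashing the hash computation.
    tree = ast.parse(source)
    return [n.func.id for n in ast.walk(tree)
            if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]

print(named_calls("depends_on('mpi')\npkg.depends_on('mpi')"))
# ['depends_on'] -- the attribute-based call is filtered out
```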
* Add extra version of py-terminado
* Update package.py
* Update var/spack/repos/builtin/packages/py-terminado/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-widgetsnbextension
* Update dependency version
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/py-widgetsnbextension/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-urllib3
* Update package.py
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/py-urllib3/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of vc
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/vc/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Spack now requires an exact match of the compiler version
requested by the user. A loose constraint can be given to
Spack by using a version range instead of a concrete version
(e.g. 4.5: instead of 4.5).
* Mark conflicts with binutils on darwin
* Explicitly require binutils bootstrapping and mark conflict with nvptx
* Disable gold variant by default on darwin
* igv: adding package igv
* removing some remaining initial boilerplate
* changing path construction to be more correct
* adding in type for java dep, also forgot about prefix.bin etc
* Added IRPF90 package
* PEP8
* SHA256
* Update var/spack/repos/builtin/packages/py-irpf90/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Sometimes one needs to preserve the (relative) order of
mtimes on installed files. So it's better to just copy
over all the metadata from the source tree to the install
tree. If permissions need fixing, that will be done anyway
afterwards.
One major use case is resource()s:
They're unpacked in one place and then copied to their
final place using install_tree(). If the resource is a
source tree using autoconf/automake, resetting mtimes
incorrectly might force unwanted autoconf/etc calls.
libfabric used to install fabtests only when installed
using --test. fabtests has tools that are useful on a
running system, so they should be installed always.
* Rewrote the build/install part to always install
fabtests alongside libfabric.
* Updated a few fabtests resources.
* Updated the test related stuff. Works for most versions
now.
* Include tcp and udp fabrics so that the test suite works.
Change hpctoolkit's dependency on libunwind from 2018.10.12 to 1.4:.
In libunwind, 2018.10.12 is going away in favor of 1.4-rc1 (they're
nearly identical commits).
Remove the 'gpu' version. This was a temporary branch that is now
folded into master.
* py-notebook: make py-setuptools a run dependency
The py-setuptools dependency in py-notebook needs to be a run
dependency. The following message is received if it is not in the run
environment.
Traceback (most recent call last):
  File "/opt/ssoft/apps/2020.1/linux-centos7-sandybridge/gcc-9.2.0/py-notebook-6.0.1-6usbn4c/bin/jupyter-notebook", line 6, in <module>
    from pkg_resources import load_entry_point
ModuleNotFoundError: No module named 'pkg_resources'
* Remove extraneous whitespace
If the mimetype returned from `file -h -b --mime-type` contains slashes
in its subtype, the tuple returned from `spack.relocate.mime_type` will
have a size larger than two, which leads to errors.
Change-Id: I31de477e69f114ffdc9ae122d00c573f5f749dbb
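A sketch of the safer parsing, assuming the function splits the output of `file` on '/':
```python
def mime_type(file_output):
    # split on the first '/' only, so a subtype containing slashes still
    # yields exactly a (type, subtype) pair
    return tuple(file_output.strip().split('/', 1))

print(mime_type('text/x-shellscript'))   # ('text', 'x-shellscript')
print(mime_type('application/x/y'))      # ('application', 'x/y')
```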
* Add extra version of py-jedi
* Update package.py
* Update package.py
Correct dependency types
* Update var/spack/repos/builtin/packages/py-jedi/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-jedi/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-jedi/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-parso package
* Remove boilerplate from py-parso
* Flake-8
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-isort
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/py-isort/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-isort/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-isort/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-isort/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-dask
* Update package.py
* Add extra dependencies for py-dask+distributed
* Update package.py
* Update var/spack/repos/builtin/packages/py-heapdict/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-heapdict/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-distributed/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-distributed/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update package.py
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/py-distributed/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-distributed/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update var/spack/repos/builtin/packages/py-distributed/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Flake-8
* Add patch step for py-distributed
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Fixes #9394. Closes #13217.
## Background
Spack provides the ability to enable/disable parallel builds through two options: package `parallel` and configuration `build_jobs`. This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies).
The `parallel` (boolean) property sets the default for its package though the value can be overridden in the `install` method.
Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., `Makefiles`). The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).
This PR adds support for distributed (single- and multi-node) parallel builds. The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.
## Approach
### File System Locks
Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks. The runs can be a combination of interactive and batch processes affecting the same file system. Exclusive prefix locks are required to install a package while shared prefix locks are required to check if the package is installed.
Failures are communicated through a separate exclusive prefix failure lock, for concurrent processes, combined with a persistent store, for separate, related build processes. The resulting file contains the failing spec to facilitate manual debugging.
### Priority Queue
Management of dependency builds changed from reliance on recursion to use of a priority queue where the priority of a spec is based on the number of its remaining uninstalled dependencies.
Using a queue required a change to dependency build exception handling with the most visible issue being that the `install` method *must* install something in the prefix. Consequently, packages can no longer get away with an install method consisting of `pass`, for example.
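A minimal sketch of the priority scheme with `heapq` (illustrative; the real implementation coordinates through the locks described above):
```python
import heapq

# priority = number of remaining uninstalled dependencies, so packages
# that are ready to build are popped first
queue = [(2, 'hdf5'), (0, 'zlib'), (1, 'mpich')]
heapq.heapify(queue)

print(heapq.heappop(queue))  # (0, 'zlib') -- installable immediately
# when zlib finishes, its dependents' counts are decremented and re-pushed
```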
## Caveats
- This still only parallelizes a single-rooted build. Multi-rooted installs (e.g., for environments) are TBD in a future PR.
Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent
`spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install
`sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something
(i.e., only `apple-libunwind`)
- [x] Increase code coverage
* ENH: add catch2 CMake install
* add a variant allowing catch2 to be installed
via CMake, which is useful for generating a .cmake
config file for consumption by other projects
* Catch2: Simplify Package
- CMake install is also single-header for new releases
- testing triggered by Spack's test mechanism
- default to CMake build (better than simple copy, which is
just for old releases to be installed)
* Catch: Remove Variant
We can control all installs with CMake to be quick and complete.
Old versions prior to 1.7.0 will be manually installed, as the
`make install` target is missing in those.
Releases 1.7.0-1.9.3 do not expose control over test builds.
* openPMD-api: Catch Lost single_header
... variant is gone :)
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Finish the HepMC3 spackage update
- Update CMake requirement per latest master
- Account for Python variant, add python dependency if used
- Account for ROOT I/O variant, add ROOT dependency if used
* Please flake8
* Update and simplify julia package
The current Spack Julia package potentially installs a few julia
packages, with the installation being controlled by variants. There are
a couple of problems with this.
First, Julia handles packages very differently from systems such as R
and Python. Julia requires write access to the repository directories in
order for user installs of packages to work. If spack installs julia
packages then there will be a repository, DEPOT_PATH, in the
installation directory. If spack is used on an individual basis this
would work but would mean that package data is written to the spack
installation directory after installation. If spack is used to provide
packages for end users then user installs of julia packages will fail
due to lack of write access to the repository in the installation
directory. It seems best for spack to just install julia without any
julia packages, and drop the configuration for those packages.
Second, having spack install package as variants seems to be counter to
how spack works for other extendable systems, like R and Python. Julia
should be an extendable package in spack but it is not clear how to make
that work. As pointed out above, installing user packages requires write
access to the julia repositories, including the one in the install
directory. Essentially, a user package installation will try to update
metadata for *all* julia repositories. Furthermore, each of those
repositories, assuming one per package with spack, would need to have
the Project.toml files merged to present the package stack to julia.
Again, it seems best for spack to just install julia itself and not try
to install julia packages, at least at this time. A good discussion on
this can be found at
https://discourse.julialang.org/t/how-does-one-set-up-a-centralized-julia-installation/13922.
This PR does the following:
- adds versions 1.2.0 and 1.3.1
- removes variants that correspond to julia packages
- changes python to a build dependency as it seems to only be needed for
LLVM
- the new versions can use Python-3
- removes dependencies for packages
- adds a conflict statement for Intel compiler, with comment
- add a setup_build_environment method to find GCC libraries
- make formatting consistent
- adds JULIA_CPU_TARGET option to correspond with target to help with
running julia image on hardware older than build host
- added intel build options, for when they can be used
- removed code for installing packages
- removed code for julia config that was needed for packages
Note that versions below 0.5.1 fail to build, with or without these
changes. It is not clear why that is.
* Update var/spack/repos/builtin/packages/julia/package.py
Yes, need to use correct grammar even in the midst of numbers and symbols. Good catch!
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* More cleanup of Julia package
This commit does more cleanup and sets more constraints.
- Removed release-0.4 and release-0.5. I am not sure if those are
actually useful but they are quite old and there are released versions
from the same timeframe.
- Remove the binutils variant.
- Made cmake a build dependency for versions >= 1.
- Added git as a dependency for @master.
- Limit curl dependency to released versions.
- Do not use external curl for master. When I checked, using the
external version failed but the internal curl worked.
- Versions <= 0.5.0 need an older version of openssl.
- Set conflicts directive for cxx variant.
- Added conflicts directive for needing +mkl with Intel compiler.
- Removed configuration settings as these prevented julia from working
properly in all cases that I looked at.
* Fix flake8 error
Remove 'import sys' that is no longer used.
* More dependency tweaks
This commit sets further version constraints on dependencies. It really
looks like julia requires its internal dependencies more over time.
- curl only up to 0.5.0
- openssl only up to 0.5.0
- override with system curl up to version 0.5.0
* Fix spec for curl certs
Only depending on curl through 0.5.0.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-cryptography
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/py-cryptography/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-cryptography/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-cryptography/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Flake-8
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-hpcbench: new package
* obey the flake8
* address comments, fix versions.
* Update var/spack/repos/builtin/packages/py-hpcbench/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-graphviz
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/py-graphviz/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-ipywidgets
* Update var/spack/repos/builtin/packages/py-ipywidgets/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update var/spack/repos/builtin/packages/py-ipywidgets/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipywidgets/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipywidgets/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipywidgets/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipywidgets/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipywidgets/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-defusedxml
* Update package.py
* Update var/spack/repos/builtin/packages/py-defusedxml/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-logilab-common
* Update package.py
* Update var/spack/repos/builtin/packages/py-logilab-common/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
From ROOT cmake output:
```
-- Checking for module 'davix>=0.6.4'
-- Found davix, version 0.6.8
CMake Warning at cmake/modules/SearchInstalledSoftware.cmake:960 (message):
Davix versions 0.6.8 to 0.7.0 have a bug and do not work with ROOT, please
upgrade to 0.7.1 or later.
```
* an argument 'buf_size' of 'h5fget_file_image_c' should be intent(out).
* correct format errors
* some modifications based on the comments from the reviewer
* Add new version of cairo
* Update var/spack/repos/builtin/packages/cairo/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new version of ccache; update URL
* Update var/spack/repos/builtin/packages/ccache/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix patch applicability
* Combine patches for missing qt3 headers and remove krell variant
The variant should have always been applied.
* Restrict QT patches to actual applicable versions/situations
- I researched the associated patches so now their `when=` should more
closely match when they're actually needed.
- I sorted the patch order so they're grouped by version requirement
- I renamed the patches so they're listed by version requirements
* Added new default tau version: 2.29. Added explicit zlib build requirement. Set up environment to use elf and libz
* Changed zlib to link dependency. Removed elf library path load (wasn't able to reproduce the need for this)
* Add extra version of py-matplotlib
* Update dependency
* Update package.py
* Update var/spack/repos/builtin/packages/py-matplotlib/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-matplotlib/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add extra version of py-lxml
* Extra variants for py-lxml
* Update var/spack/repos/builtin/packages/py-lxml/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-lxml/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-lxml/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-lxml/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Use separate build directory for gzip
At least on mac systems (perhaps because of a case sensitivity issue?)
gzip fails to build inside the source tree:
```
config.status: linking /var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/spack-stage/s3j/spack-stage-gzip-1.10-iatwtuk2l5xgwmuh4pwu5bf27yezpydj/spack-src/GNUmakefile to GNUmakefile
config.status: executing depfiles commands
==> Executing phase: 'build'
==> [2020-02-14-09:32:45.502913] 'make' '-j12'
make: GNUmakefile: Too many levels of symbolic links
make: stat: GNUmakefile: Too many levels of symbolic links
make: *** No rule to make target `GNUmakefile'. Stop.
```
* Simplify build directory and add gmake dependency
Libmng only needs gzip to compress man files for distribution, so it
builds fine without it. The spack gzip currently fails to compile:
```
config.status: linking /var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/spack-stage/s3j/spack-stage-gzip-1.10-iatwtuk2l5xgwmuh4pwu5bf27yezpydj/spack-src/GNUmakefile to GNUmakefile
config.status: executing depfiles commands
==> Executing phase: 'build'
==> [2020-02-14-09:32:45.502913] 'make' '-j12'
make: GNUmakefile: Too many levels of symbolic links
make: stat: GNUmakefile: Too many levels of symbolic links
make: *** No rule to make target `GNUmakefile'. Stop.
```
* Modify Flang NVidia GPU variant to make use of built-in CudaPackage
* Add OpenMP Offload patch if March 2019 compiler is selected.
* Flang parallel build has a race condition.
* llvm-flang now uses built-in CudaPackage.
* Add variant for different build releases.
* Fix OpenMP target offload for NVidia GPUs.
* Additional common flags that are needed, with comments.
* The NVidia BC required for the libomp target needs special treatment. Use the clang built in the previous step to re-compile libomptarget.
* Add a new package: Metall
* Fix errors in metall/package.py
* Update var/spack/repos/builtin/packages/metall/package.py
Change to https style URL
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update in metall/package.py. Change Metall to depend on Boost always
* Update in metall/package.py. Change to install Boost with the default variants
* Update metall/package.py. Removed a comment
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Buildcache creation: change the way the prefix is copied to the workdir.
* install_tree copies hardlinked files
* tarfile creates hardlinked files on extraction.
* create a temporary tarfile from prefix and extract it to workdir
* Use temp tarfile to move workdir to prefix to preserve hardlinks instead of copying
Add the OpenBLAS variant `+consistentFPCSR`, by default `False`, which adds the compile definition `CONSISTENT_FPCSR=1` as documented in OpenBLAS `Makefile.rule`.
This PR adds an updated version to the r-rhtslib package as well as fix
the build.
- add patches to use compiler flags from R
- add variables for bzip2 and xz dependencies
- use the spack Makeconf file when building the in-tree htslib
- make patchelf available to allow R to remove reference to temporary
installation directory in htslib shared object
- Add new version of r-rsamtools as the r-rsamtools and r-rhtlib
packages are closely paired.
* Fix run environment
Trying to install Avizo, I get "Error: NameError: name 'run_env' is not defined". Correcting it to be just "env"
* fix indentation
Starting with 2020, the tar files are named v2020.0.tar.gz,
v2020.1.tar.gz, etc, not 2020_U1.tar.gz.
https://github.com/intel/tbb/releases
The previous commit (7a10478708) fixed the checksum mismatch, but
didn't update url_for_version (my bad).
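A sketch of the corrected mapping (URL template and cutoff follow the naming change above; the real package code may differ):
```python
from spack.version import Version

def url_for_version(self, version):
    url = 'https://github.com/intel/tbb/archive/{0}.tar.gz'
    if version >= Version('2020'):
        # 2020 onward: v2020.tar.gz, v2020.1.tar.gz, ...
        return url.format('v{0}'.format(version))
    # older releases: 2019_U9.tar.gz and similar
    return url.format(str(version).replace('.', '_U'))
```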
UnifyFS no longer has an option to depend on numa. This removes the
numa variant, dependency, and associated conflict.
This commit also
- renames the `pmpi` variant to the more appropriate `auto-mount`
- changes the preferred version to the most recent release
It's often useful to run a module with `python -m`, e.g.:
python -m pyinstrument script.py
Running a python script this way was hard, though, as `spack python` did
not have a similar `-m` option. This PR adds a `-m` option to `spack
python` so that we can do things like this:
spack python -m pyinstrument ./test.py
This makes it easy to write a script that uses a small part of Spack and
then profile it. Previously the easiest way to do this was to write a
custom Spack command, which is often overkill.
WarpX removed the `dev` branch in favor of a simpler,
`master`-centric development model.
`master` is the new development branch and there is no
stable branch anymore (we use tags and release branches
instead).
* Hydrogen now depends on `aluminum +nccl` vs. `aluminum +mpi_cuda`
* Hydrogen: Simplify Mac OS OpenMP-detection logic
* Aluminum: Add Mac OS OpenMP-detection logic
* LBANN: depend on conduit@0.4.0: instead of conduit@master
Fixes #10019
If multiple instances of a package were installed in a single
instance of Spack, and they differed in terms of dependencies, then
"spack find" would not distinguish specs based on their dependencies.
For example if two instances of X were installed, one with Y and one
with Z, then "spack find X ^Y" would display both instances of X.
This PR creates the r-watermelon package, along with dependencies.
- new package: r-fdb-infiniummethylation-hg19
- new package: r-illuminahumanmethylation450kanno-ilmn12-hg19
- new package: r-lumi
- new package: r-methylumi
- new package: r-roc
- new package: r-txdb-hsapiens-ucsc-hg19-knowngene
- updated package: r-matrixstats, new version needed as a dependency
This PR adds the r-pscbs package along with new dependencies and updates.
- new package: r-aroma-light
- new package: r-r-cache
- updated package: r-r-oo
This PR adds the r-copula package and dependencies.
- new package: r-adgoftest
- new package: r-gsl
- new package: r-pspline
- new package: r-stabledist
* New package - r-rmariadb
This PR creates the r-rmariadb package. It also includes an update to
the r-dbi package as a newer version of that is needed.
* Update var/spack/repos/builtin/packages/r-rmariadb/package.py
Argh, copy/paste. I wish the mirror would list itself as the archive site as well, but it just mirrors that data field from CRAN site. Thanks for catching that, I will make sure to look for that in the future.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Use mariadb-client
Use mariadb-client so people can set a preferred provider.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Fixes #14850. Commit 6b1958219 added versions 2020 and 2020.1 for
intel-tbb as part of updating several intel packages but added the
wrong sha256 sums for the github/01org repository.
Also, version 2020 is 2020, not 2020.0.
Add patch makefile-debug to restore the debug targets.
Python depends on gettext. Packages that depend on gettext and Python
together will encounter a concretizer bug which incorrectly detects
a constraint conflict. This sets the default value of +libxml2 in
Python to be the same as gettext so that packages which depend on
both (like mesa) can successfully concretize without adding manual
constraints.
Using `sys.executable` to run Python in a sub-shell doesn't always work in a virtual environment as the `sys.executable` Python is not necessarily compatible with any loaded spack/other virtual environment.
- revert use of sys.executable to print out subshell environment (#14496)
- try instead to use an available python, then if there *is not* one, use `sys.executable`
- this addresses RHEL8 (where there is no `python`) and the `PYTHONHOME` issue in a simpler way
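A minimal sketch of the fallback order (using `shutil.which` here purely for illustration):
```python
import shutil
import sys

# prefer a python found on PATH; only fall back to sys.executable when
# none exists (e.g. RHEL8 ships no unversioned `python`)
python = shutil.which('python') or shutil.which('python3') or sys.executable
print(python)
```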
When removing packages from a view, extensions were being deactivated
in an arbitrary order. Extensions must be deactivated in preorder
traversal (dependents before dependencies), so when this order was
violated the view update would fail.
This commit ensures that views deactivate extensions based on a
preorder traversal and adds a test for it.
Despite trying very hard to keep dicts out of our hash algorithm, we seem
to still accidentally add them in ways that the tests can't catch. This
can cause errors when hashes are not computed deterministically.
This fixes an error we saw with Python 3.5, where dictionary iteration
order is random. In this instance, we saw a bug when reading Spack
environment lockfiles -- The load would fail like this:
```
...
File "/sw/spack/lib/spack/spack/environment.py", line 1249, in concretized_specs
yield (s, self.specs_by_hash[h])
KeyError: 'qcttqplkwgxzjlycbs4rfxxladnt423p'
```
This was because the hashes differed depending on whether we wrote `path`
or `module` first when recomputing the build hash as part of reading a
Spack lockfile. We can fix it by ensuring a deterministic iteration order.
- [x] Fix two places (one that caused an issue, and one that did
not... yet) where our to_node_dict-like methods were using regular python
dicts.
- [x] Also add a check that statically analyzes our to_node_dict
functions and flags any that use Python dicts.
The test found the two errors fixed here, specifically:
```
E AssertionError: assert [] == ['Use syaml_dict instead of ...pack/spack/spec.py:1495:28']
E Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28'
E Full diff:
E - []
E + ['Use syaml_dict instead of dict at '
E + '/Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28']
```
and
```
E AssertionError: assert [] == ['Use syaml_dict instead of ...ack/architecture.py:359:15']
E Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15'
E Full diff:
E - []
E + ['Use syaml_dict instead of dict at '
E + '/Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15']
```
This commit introduces a `--no-check-signature` option for
`spack install` so that unsigned packages can be installed. It is
off by default (signatures required).
VSX AltiVec extensions are supported by PowerISA from v2.06 (Power7+), but might
not be listed in features.
FMA has been supported by PowerISA since Power1, but might not be listed in
features.
This commit adds these features to all the power ISA family sets.
* update versions of the intel packages daal, ipp, mkl-dnn, mkl, mpi, parallel-studio, pin, and tbb; make the url parameter consistent and always use single quotes.
* Fixes a typo in one of the sha256 checksums.
This PR adds a new version of llvm and fixes the dependency specs.
- This package depends on libtinfo in all cases so change the ncurses
dependency to reflect that
- if +lldb is in the spec but +python is not then do not build the lldb
python support
- build lldb python support only if +python is in the spec with +lldb
- install the llvm python bindings if +python is in the spec
- install the clang python bindings if +clang and +python are in the spec
- Fixes for conflicts with ~clang
- Fix typo in conflict of compiler-rt and flang
- More robustly handle compiler version switching between QT4 and 5, and
mac/linux, and gcc/intel/clang
- Remove assumption about intel linker being in path
* Convert libmng to use CMake rather than autoconf
The autoconf script failed to recognize the intel compiler; it was
hardwired to use gcc.
* Simplify cmake logic and remove unused variant
Add an optional 'submodules_delete' field to Git versions in Spack
packages that allows them to remove specific submodules.
For example: the nervanagpu submodule has become unavailable for the
PyTorch project (see issue 19457 at
https://github.com/pytorch/pytorch/issues/). Removing this submodule
allows 0.4.1 to build.
* Octave: moved the short description into its own paragraph
* Octave: patch mkoctfile.in.cc to avoid using compiler wrappers
* Added a check to ensure mkoctfile works correctly
* update libarchive and fix the version of cmake's libarchive dependency
* cmake 3.15.0 and later need (at least) libarchive 3.3.3
* cmake depends on libarchive 3.1.0 if not specified differently;
currently this applies to cmake <3.15.0
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Initialize _cached_specs at the file level and check for spec in it before searching mirrors in try_download_spec.
* Make _cached_specs a set to avoid duplicates
* Fix packaging test
* Ignore build_cache in stage when spec.yaml files are downloaded.
* draco: update versions
+ Added versions 7.3.0 and 7.4.0.
+ Change several variants to be default TRUE since most consumers need these
variants enabled (eospac, lapack, parmetis, superlu-dist). Change variant name
for `+superlu_dist` to use hyphen instead of underscore. This makes the
variant name consistent with the spackage name for `superlu-dist`.
+ Clean up `depends_on` instructions and avoid specifying `type` when possible.
+ Provide patch files that are necessary for some machines (mostly Cray
machines).
* Remove trailing whitespace.
* Revert variant name to use underscore.
* add maintainer information.
`spack -V` previously always returned the version of spack from
`spack.spack_version`. This gives us a general idea of what version
users are on, but if they're on `develop` or on some branch, we have to
ask more questions.
This PR makes `spack -V` check whether this instance of Spack is a git
repository, and if it is, it appends useful information from `git
describe --tags` to the version. Specifically, it adds:
- number of commits since the last release tag
- abbreviated (but unique) commit hash
So, if you're on `develop` you might get something like this:
$ spack -V
0.13.3-912-3519a1762
This means you're on commit 3519a1762, which is 912 commits ahead of
the 0.13.3 release.
If you are on a release branch, or if you are using a tarball of Spack,
you'll get the usual `spack.spack_version`:
$ spack -V
0.13.3
This should help when asking users what version they are on, since a lot
of people use the `develop` branch.
This PR adds a new command to Spack:
```console
$ spack containerize -h
usage: spack containerize [-h] [--config CONFIG]
creates recipes to build images for different container runtimes
optional arguments:
-h, --help show this help message and exit
--config CONFIG configuration for the container recipe that will be generated
```
which takes an environment with an additional `container` section:
```yaml
spack:
specs:
- gromacs build_type=Release
- mpich
- fftw precision=float
packages:
all:
target: [broadwell]
container:
# Select the format of the recipe e.g. docker,
# singularity or anything else that is currently supported
format: docker
# Select from a valid list of images
base:
image: "ubuntu:18.04"
spack: prerelease
# Additional system packages that are needed at runtime
os_packages:
- libgomp1
```
and turns it into a `Dockerfile` or a Singularity definition file, for instance:
```Dockerfile
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:prerelease as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs build_type=Release" \
&& echo " - mpich" \
&& echo " - fftw precision=float" \
&& echo " packages:" \
&& echo " all:" \
&& echo " target:" \
&& echo " - broadwell" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " concretization: together" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unecessary deps and strip executables
RUN cd /opt/spack-environment && spack install && spack autoremove -y
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:18.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN apt-get -yqq update && apt-get -yqq upgrade \
&& apt-get -yqq install libgomp1 \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
```
* Add binary_distribution::get_spec which takes concretized spec
Add binary_distribution::try_download_specs for downloading of spec.yaml files to cache
get_spec is used by package::try_install_from_binary_cache to download only the spec.yaml
for the concretized spec if it exists.
Previously, the install stage would compile in things that were
disabled during the build_ext phase. This would also result in the
build pulling in locally installed versions of libraries that were
disabled. The install process doesn't honor the same command-line
flags that build_ext does, but does call build_ext again. Avoid the
whole issue by just writing the options to setup.cfg
Also, add the Imagemagick dependency for tests.
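A sketch of the approach (section and option values here are hypothetical):
```python
# persist the chosen options in setup.cfg so the implicit build_ext
# re-run during `setup.py install` sees the same configuration
with open('setup.cfg', 'a') as cfg:
    cfg.write('[build_ext]\n')
    cfg.write('define = HAVE_FEATURE\n')   # hypothetical option value
```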
The Spec parser currently calls `spec.traverse()` after every parse, in
order to set the platform if it's not set. We don't need to do a full
traverse -- we can just check the platform as new specs are parsed.
This takes about a second off the time required to import all packages in
Spack (from 8s to 7s).
- [x] simplify platform-setting logic in `SpecParser`.
`filename_for_package_name()` and `dirname_for_package_name()`
automatically construct a Spec from their arguments, which adds a fair
amount of overhead to importing lots of packages. Removing this removes
about 11% of the runtime of importing all packages in Spack (9s -> 8s).
- [x] `filename_for_package_name()` and `dirname_for_package_name()` now
take a string `pkg_name` arguments instead of specs.
* `Environment.__init__` is now synchronized with all writing operations
* `spack uninstall` now synchronizes its updates to any associated environment
* A side effect of this is that the environment is no longer updated piecemeal as specs are uninstalled - all specs are removed from the environment before they are uninstalled
* pumi: sim version check, meshes via submodule, ctest
* Apply suggestions from code review
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* pumi: update comment on master version string
* pumi: description of simmodsuite_version_check variant
* pumi: add white space to variant description
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This commit makes two fundamental corrections to tests:
1) Changes 'matches' to the correct 'match' argument for 'pytest.raises' (for all affected tests except those checking for 'SystemExit');
2) Replaces the 'match' argument for tests expecting 'SystemExit' (since the exit code is retained instead) with 'capsys' error message capture.
Both changes are needed to ensure the associated exception message is actually checked.
* git: add version 2.25.0 and fixup pcre dependency
pcre2 became optional in 2.14 and the default in 2.18. I noticed this
as git was compiling against the system pcre2 (spack was
specifying pcre as the dependency).
* missed a chunk from my internal repo
Updates to environments were not multi-process safe, which prevented them from taking advantage of parallel builds as implemented in #13100. This is a minimal set of changes to enable `spack install` in an environment to be parallelized:
- [x] add an internal lock, stored in the `.spack-env` directory,
to synchronize updates to `spack.yaml` and `spack.lock`
- [x] add `Environment.write_transaction` interface for this lock
- [x] makes use of `Environment.write_transaction` in `install`,
`add`, and `remove` commands
- `uninstall` is not synchronized yet; that is left for a future PR.
* Set netcdf-fortran to build serially with Intel compiler
This PR turns off parallel builds when the Intel compiler is used.
Builds with the Intel compiler will fail otherwise.
* Change how parallel build is handled
Use a patch from the netcdf-fortran project to turn off parallel build for
version 4.5.2.
* diffutils: Changed the handling of undeclared functions from warning to error.
* diffutils: Change the handling of warnings or error
* Delete '-Werror=implicit-function-declaration'
* Add '-Qunused-arguments'
Replace the deprecated ADIOS1 backend default with ADIOS2 default.
Disable sz since we do not need it and it conflicts with supported
version ranges between ADIOS2 and ADIOS1 if someone enables both.
* intel-tbb: Fix install names on Darwin
Intel-TBB's libraries on Darwin are installed with "@rpath" prefixed
to their install names. This was found to cause issues building the root
package on Darwin due to libtbb not being found when running some of the
generated tools linking to it.
Follow example from other packages with the same issue and fixup up install
names for intel-tbb post install.
* intel-tbb: fix flake8 errors
* Dirty hack to fix #14148
* A better way of checking if a package is taken from system
* Update var/spack/repos/builtin/packages/qt/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update qt/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Spack commands referring to upstream-installed specs by hash have
been broken since 6b619da (merged September 2019), which added a new
Database function specifically for parsing hashes from command-line
specs; this function was inappropriately attempting to acquire locks
on upstream databases.
This PR updates the offending function to avoid locking upstream
databases and also updates associated tests to catch regression
errors: the upstream database created for these tests was not
explicitly set as an upstream (i.e. initialized with upstream=True)
so it was not guarding against inappropriate accesses.
* Add the py-merlinwf package
* Fix importlib-resources package name for spack naming convention.
* Add build to dependencies and add updated versions.
* Remove pytest-runner dependency.
* Fix typo.
* Add the py-tabulate dependency.
* Add sha256 for version 1.0.0
* Change to maestro version 1.1.5.
* Increase to version 1.0.4.
* Bump maestrowf version and prepare for new pypi version.
* Add sha256sum for version 1.1.5
* Add version 1.1.1.
Update maestrowf version to 1.1.7
* Add versions 1.0.5, 1.1.0, 1.1.1 and potential 1.2.0.
* Add version 1.2.0 and when on maestrowf@1.1.6.
* Add version 1.2.2 , remove 1.2.1 and 1.1.0.
* Update var/spack/repos/builtin/packages/py-merlinwf/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-merlinwf/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Remove mysql variant until new mysql interface module is enabled.
The mysql code may be removed.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Unified environment modifications in config files
fixes #13357
This commit factors all the code that is involved in
the validation (schema) and parsing of environment modifications
from configuration files in a single place. The factored out
code is then used for module files and compiler configuration.
Attributes were separated by dashes in `compilers.yaml` files and
by underscores in `modules.yaml` files. This PR unifies the syntax
on attributes separated by underscores.
Unit testing of environment modifications in compilers
has been refactored and simplified.
* Get py-torch to build caffe2
This PR gets the py-torch package to build with caffe2, and closes
issue #14576. If building on a machine with CUDA but no GPU the build
will try to build with all compute capabilities. Older compute
capabilities are not supported so the build will fail. The list of
capabilities can be passed to the build using values set in the
cuda_arch variant. Likewise, conflicts are also set to catch if the
unsupported capabilities are listed in cuda_arch.
This PR also sets version constraints on using an external mkldnn for
newer versions. Currenly, only versions up to 0.4 use an external mkldnn
library. Also, the cuda variant is set to True, which restores
previous behavior.
* Update var/spack/repos/builtin/packages/py-torch/package.py
Fix typo.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Adjust conflicts
This commit adjusts the conflicts. There is an issue with the
cuda_arch=20 conflicts directive as there is a conflicting dependency
with any version >=1.1 and a cuda_arch=20 dependency specified in
CudaPackage that gets trapped first.
* Use a common message for conflicts
This commit adds a variable to contain the bulk of the message stringi
for the cuda_arch conflicts. This is used along with a version string
in the conflicts directives messages.
* Fix the strings
- Use a multiline string for the cuda_arch_conflict variable.
- No need for format() in the msg value.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The perl binary can also be called `perlX.Y.Z` if using a development
build or simply using the versioned binary.
We were also dropping all sbang arguments, since `exec $interpreter_v`
was only using the first element of the `interpreter_v` array.
Using `sys.executable` to run Python in a sub-shell doesn't always work in a virtual environment as the `sys.executable` Python is not necessarily compatible with any loaded spack/other virtual environment.
- revert use of sys.executable to print out subshell environment (#14496)
- try instead to use an available python, then if there *is not* one, use `sys.executable`
- this addresses RHEL8 (where there is no `python` and `PYTHONHOME` issue in a simpler way
Openblas target is now determined automatically upon inspection of
`TargetList.txt`. If the spack target is a generic architecture family
(like x86_64 or aarch64) the DYNAMIC_ARCH setting is used
instead of targeting a specific microarchitecture.
Instead of another script, this adds a simple argument to `spack
commands` that updates the completion script. Developers can now just
run:
spack commands --update-completion
This should make it simpler for developers to remember to run this
*before* the tests fail. Also, this version tab-completes.
* Try to switch to a newer fork of ftgl
* Allow ROOT to be more flexible about ftgl versions
* Turn ftgl into a CMakePackage
* Update ROOT ftgl dep since 2.1.3 isn't a thing anymore
* Please flake8
* Try to bring back the doc variant
* Comment it out instead of removing it
* Fix root+x breakage from #11129
* Separate out +opengl breakage
* Not strictly X11-related, but more breakage from #11129
* Another X11 breakage found while building 6.08.x
* Don't put system headers in SPACK_INCLUDE_DIRS + deduplicate
* xextproto is only a dependency in +x builds
Previously the `spack load` command was a wrapper around `module load`. This required some bootstrapping of modules to make `spack load` work properly.
With this PR, the `spack` shell function handles the environment modifications necessary to add packages to your user environment. This removes the dependence on environment modules or lmod and removes the requirement to bootstrap spack (beyond using the setup-env scripts).
Included in this PR is support for MacOS when using Apple's System Integrity Protection (SIP), which is enabled by default in modern MacOS versions. SIP clears the `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH` variables on process startup for executables that live in `/usr` (but not '/usr/local', `/System`, `/bin`, and `/sbin` among other system locations. Spack cannot know the `LD_LIBRARY_PATH` of the calling process when executed using `/bin/sh` and `/usr/bin/python`. The `spack` shell function now manually forwards these two variables, if they are present, as `SPACK_<VAR>` and recovers those values on startup.
- [x] spack load/unload no longer delegate to modules
- [x] refactor user_environment modification calculations
- [x] update documentation for spack load/unload
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR adds a `--format=bash` option to `spack commands` to
auto-generate the Bash programmable tab completion script. It can be
extended to work for other shells.
Progress:
- [x] Fix bug in superclass initialization in `ArgparseWriter`
- [x] Refactor `ArgparseWriter` (see below)
- [x] Ensure that output of old `--format` options remains the same
- [x] Add `ArgparseCompletionWriter` and `BashCompletionWriter`
- [x] Add `--aliases` option to add command aliases
- [x] Standardize positional argument names
- [x] Tests for `spack commands --format=bash` coverage
- [x] Tests to make sure `spack-completion.bash` stays up-to-date
- [x] Tests for `spack-completion.bash` coverage
- [x] Speed up `spack-completion.bash` by caching subroutine calls
This PR also necessitates a significant refactoring of
`ArgparseWriter`. Previously, `ArgparseWriter` was mostly a single
`_write` method which handled everything from extracting the information
we care about from the parser to formatting the output. Now, `_write`
only handles recursion, while the information extraction is split into a
separate `parse` method, and the formatting is handled by `format`. This
allows subclasses to completely redefine how the format will appear
without overriding all of `_write`.
Co-Authored-by: Todd Gamblin <tgamblin@llnl.gov>
The gpg2 command isn't always around; it's sometimes called gpg. This is
the case with the brew-installed version, and it's breaking our tests.
- [x] Look for both 'gpg2' and 'gpg' when finding the command
- [x] If we find 'gpg', ensure the version is 2 or higher
- [x] Add tests for version detection.
- [x] Factored to a common place the fixture `testing_gpg_directory`, renamed it as
`mock_gnupghome`
- [x] Removed altogether the function `has_gnupg2`
For `has_gnupg2`, since we were not trying to parse the version from the output of:
```console
$ gpg2 --version
```
this is effectively equivalent to check if `spack.util.gpg.GPG.gpg()` was found. If we need to ensure version is `^2.X` it's probably better to do it in `spack.util.gpg.GPG.gpg()` than in a separate function.
Despite trying very hard to keep dicts out of our hash algorithm, we seem
to still accidentally add them in ways that the tests can't catch. This
can cause errors when hashes are not computed deterministically.
This fixes an error we saw with Python 3.5, where dictionary iteration
order is random. In this instance, we saw a bug when reading Spack
environment lockfiles -- The load would fail like this:
```
...
File "/sw/spack/lib/spack/spack/environment.py", line 1249, in concretized_specs
yield (s, self.specs_by_hash[h])
KeyError: 'qcttqplkwgxzjlycbs4rfxxladnt423p'
```
This was because the hashes differed depending on whether we wrote `path`
or `module` first when recomputing the build hash as part of reading a
Spack lockfile. We can fix it by ensuring a determistic iteration order.
- [x] Fix two places (one that caused an issue, and one that did
not... yet) where our to_node_dict-like methods were using regular python
dicts.
- [x] Also add a check that statically analyzes our to_node_dict
functions and flags any that use Python dicts.
The test found the two errors fixed here, specifically:
```
E AssertionError: assert [] == ['Use syaml_dict instead of ...pack/spack/spec.py:1495:28']
E Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28'
E Full diff:
E - []
E + ['Use syaml_dict instead of dict at '
E + '/Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28']
```
and
```
E AssertionError: assert [] == ['Use syaml_dict instead of ...ack/architecture.py:359:15']
E Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15'
E Full diff:
E - []
E + ['Use syaml_dict instead of dict at '
E + '/Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15']
```
Rework Spack's continuous integration workflow to be environment-based.
- Add the `spack ci` command, which replaces the many scripts in `bin/`
- `spack ci` decouples the CI workflow from the spack instance:
- CI is defined in a spack environment
- environment is in its own (single) git repository, separate from Spack
- spack instance used to run the pipeline is up to the user
- A new `gitlab-ci` section in environments allows users to configure how
specs in the environment should be mapped to runners
- Compilers can be bootstrapped in the new pipeline workflow
- Add extensive documentation on pipelines (see `pipelines.rst` for further details)
- Add extensive tests for pipeline code
* Update and fix samtools
This PR adds samtools-1.10 and sets the htlib directory so that the
spack built htslib can be used. This PR also arranges the dependencies
so that the htslib sequence is grouped on its own. Finally, the bzip2
dependency is removed and python and perl run dependencies are added.
* Fix samtools when built with ncurses+termlib
* The CI flake8 tests require lowercase variable
Interestingly, this did not show up when I ran `spack flake8` locally.
* Reorder GNU mirrors (#14395)
As @adamjstewart commented in #14395, GNU suggests to use
their mirror. So reorder the mirror to the top.
GNU Doc: https://www.gnu.org/prep/ftp.en.html
* Use spack.util.url.join for URLs in GNU mirrors (#14395)
One should not use os.path.join for URLs. This does only
work on POSIX systems.
Instead use spack.util.url.join.
So every part in spack uses the same url joining method.
Unfortunately UCX 1.7.0 is appearing in RPMS before it's officially released.
There's a problem with Open MPI 4.0.x where x < 3 and this version of UCX,
namely that the UCT BTL fails to compile.
See https://github.com/open-mpi/ompi/issues/7128
This patch works around the problem by disabling the build of the UCT BTL
for releases 4.0.0 to 4.0.2.
add hppritcha (me) as maintainer
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* Add +cfitsio variant to wcslib dependency
* Replace ncurses dependency with readline dependency
casacore explicitly may depend on readline, not ncurses
* Add workaround for casacore's readline dependency
casacore optionally depends upon readline, but it's CMakeLists.txt provides no
user control over whether or not readline becomes a dependency. As readline is
often present by default on systems, it's better for this package to explicitly
depend on readline in order to prevent linking to whatever system version of the
library happens to be found during the build process. This should be considered
a workaround until casacore's CMakeLists.txt is fixed.
* Apply workaround for casacore's dependency on SOFA
Similar to the issues with casacore's readline dependency, casacore's optional
dependency on SOFA does not provide the user with a means of controlling the
dependency during build time. Unlike the readline library, the SOFA library is
unlikely to exist on most systems by default. As the SOFA dependency is only
optionally used for testing casacore, requiring it by default is not a good
workaround. Until casacore's CMakeLists.txt is fixed, this variant has been
removed to avoid unexpected library dependencies in the installed package.
* Add newer casacore versions
* Add mpokorny to maintainer field
- The suite-sparse author publishes new versions starting with 5.5.0 on GitHub, see https://github.com/DrTimothyAldenDavis/SuiteSparse/releases and http://faculty.cse.tamu.edu/davis/SuiteSparse/
- change spack to download from there
- updated sha256 checksums from GitHub for all available releases
- For versions 5.4.0, 5.5.0, 5.6.0 there is a slightly different compilation necessary: first `make default` then `make install`.
Summary of the version changes (+ added, -removed [because not available on GitHub]):
```
+ 5.6.0
+ 5.5.0
+ 5.4.0
5.3.0
5.2.0
+ 5.1.2
5.1.0
+ 5.0.0
+ 4.5.6
4.5.5
- 4.5.4
4.5.3
- 4.5.1
```
to disabled use of libunwind. Without this mesa fails to build
using recent Cray compilers - cce 9 and higher - on aarch64 systems.
Signed-off-by: Howard Pritchard <hppritcha@gmail.com>
It seems that stable versions of perl also install a `perlX.Y.Z` binary.
However, it seems that this binary can hang if used in conjunction with
Spack's sbang workaround, as observed during automake's build.
* Update py-csvkit
This PR updates the py-csvkit package. This version requires a python
stack based on agate and this PR includes the new dependency packages.
- py-agate-dbf
- py-agate-excel
- py-agate-sql
- py-agate
- py-dbfread
- py-isodate
- py-leather
- py-parsedatetime
- py-python-slugify
- py-pytimeparse
- py-text-unidecode
* Replace the copy/pasted apostrophes
Python 2 can not process the copy/pasted apostrophes so replace them
with standard single quote character.
* Add version constraints on dependencies
* Add version contraint to graphviz patches
This PR restricts the graphviz version that the patches for building
with the Intel compiler apply to. The two patches that were needed for
building graphviz-2.40.1 with the Intel compiler are not needed for
graphviz-2.42.2.
* Adjust the qt dependencies
The qt5 patch is only needed for graphviz-2.40.1. However, that version
will only compile with GCC-6 or greater.
* add variant for enabling testing
* add variant for enabling testing
* enable tests and clean up other options
* enable tests and clean up other options
* add numbered versions
* add numbered versions
* updates to avoid enable_tests variant; correct versioning
* updates to avoid enable_tests variant; correct versioning
* fixes for style
* appropriate partitioners are enabled if 'all' is specified - so no need to check in spec
* revert accidental change to copyright
* remove erroneously re-introduced line about tests
* new spack recipe for build Jali - unstructured mesh infrastructure for multiphysics applications
* remove the +parallel condition for mstk, update 1.1.1 sha256sum and whitespace cleanup
* reformat description
* cut down description
* Fixes:
1. MPI_THREAD_MULTIPLE problem with OpenMPI and UCX.
Changes:
1. OpenMPI provides two new depends_on options which result in UCX being compiled with multiple threads support. One implicit when OpenMPI 3.x is used, MPI_THREAD_MULTIPLE is enabled by default, and one explicit for OpenMPI <= 2.x, MPI_THREAD_MULTIPLE is disabled by default.
2. Extends UCX package to allow "Enable thread support in UCP and UCT" option.
3. Adds sha256 sums of UCX releases 1.6.1 and 1.2.0.
More details:
Fixes the issue with OpenMPI where programs which use MPI_THREAD_MULTIPLE will fail to execute because UCP worker didn't support it.
During the OpenMPI package installation it's the +thread_multiple spec was not propagated to UCX nor UCX handled it at all.
Now, the OpenMPI package is capable of handling +thread_multiple spec when UCX is request and the UCX package correctly handles +thread_multiple and compiles with the --enable-mt option.
Error message during runtime:
pml_ucx.c:226 Error: UCP worker does not support MPI_THREAD_MULTIPLE
* Adapts check of specs to read better and is the suggested form in the docs.
* Explicitly disables multithreading of UCX if +thread_multiple option is not used.
* Rework texlive package to install from source
This PR reworks the texlive package so that it installs from versioned
source distibution files. This is preferred over installing the binary
package for several reasons. For the binary installation:
1. Each component is downloaded, so can not use a spack mirror.
2. Changes in components over time are not reflected in spack hash.
3. Some of the binaries do not run due to glibc issues, depending on OS.
This PR keeps the binary installation as an option but it should be
considered deprecated, and probably rewmoved at some point.
This PR depends on zziplib from PR #14318.
* Fix flake8 issues
One of the perl scripts was encoded with ISO-8859-1, which caused the
sbang replacement process to fail when spack uses python3. This PR
converts the ps_scan script to UTF-8 encoding.
This PR converts ISO-8859 encoding to UTF-8 encoding for three scripts in
repeatmasker.
- the main RepeatMasker script
- SimpleBatcher.pm
- wublastToCrossmatch.pl
The ISO-8859 encoding prevented the sbang replacement of long paths when
spack uses python3.
* Update the icu4c package
This PR makes several changes to the icu4c package
1. add updated version to 65.1
2. modify the default url as project has moved to github
3. set UTF-8 locale to support building from source files in UTF-8
format
Note that the older versions are not available on github so explicit urls
were used. This PR will close#14399.
* Consolidate the urls
Consolidate the URLs in the `version` directives by using an if test in
`url_for_version`.
* Put version and sha256 on same line
* Put top level url back
* Update and fix bcftools package
This PR updates bcftools to 1.10.2 and is dependent on PR #14504.
This PR also fixes builds of other versions. Versions 1.2-1.4 did not
use autotools so when the packaeg was converted to use autotools with
version 1.6 those older versions could no longer build. Also, those
versions needed to be patched to use an external htsllib. The method of
finding the external htslib is also different for those older versions.
In addition, this PR adds two variants to bcftools:
- libgsl
- perl-filters
Finally, dependencies for perl and python are added, and an unused
dependency for libzip was removed.
* Do not use '@' in variant description
The '@' character in a variant description will cause a problem with
`spack info`.
```
==> Error: Incomplete color format: '@' in
expressions, for @1.8:
```
* Fix error with python2 processing this package
* Update htslib and add libcurl variant
This PR updates htslib to version 1.10.2 and adds a libcurl variant. The
libcurl variant defaults to True because, while it is optional, it is
highly recommended by the project developers.
Other things done:
- be consistent with quotes
* Change version in variant description
Apparently, `spack info` does not like the `@` character in a variant
description.
```
==> Error: Incomplete color format: '@' in
@1.3:.
```
The pathadd function was using setopt to configure zsh for word
splitting, which leaks out of the function and breaks default
functionality in a number of external zsh plugins and packages. This
switches to emulate -L, just as the spack function uses, to keep the
setting local to the function.
* libcircle: add develop version from git master branch
* Update var/spack/repos/builtin/packages/libcircle/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* libcircle: flake8 fix i think
* libcircle: naming things
* libcircle: 🐑 my sacrifice to the flake8 gods
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add 20181226 release of pgmath
* 20190329 release of pgmath requires match for CMPLX macro.
* Add llvm-flang package for Flang fork of LLVM
* Add new and old flang releases.
* Add cmake and python dependencies.
* Update dependencies on llvm-flang and pgmath.
* Fix cmake args and change spec to reflect llvm-flang package.
* change copyright date through 2020.
* Reference Flang package more explicitly.
* More robust support for python executable.
* import os no longer needed, picked up by flake8.
* Use built-in target spec. Variant and targets follow style in main LLVM package.
* Get rid of targets list and only support one target for now.
* Sparc does not appear to be supported in Flang.
* Raise InstallError if architecture not supported.
* fixes#967
* Version bump to 0.9.1
- Bugfixes for spack find
- 0.9.1 can read specs from current develop.
* Don't assume spack is in the path when building docs.
* Quick fix for relocation issues.
* elf relocation fix: cherry-picked from develop branch (#6889)
* Revert "Quick fix for relocation issues."
This reverts commit 57608a6dc4.
* Buildcache: relocate fixes (#6512)
* Updated function which checks if a binary file needs relocation.
Previously this was incorrectly identifying ELF binaries as symbolic
links (so they were being excluded from relocation). Added test to
check that ELF binaries are not considered symlinks.
* relocate_text was not replacing paths in text files. Added test to
check that text files are relocated properly (i.e. paths in the file
are converted to the new prefix).
* Exclude backup files created by filter_file when installing from
binary cache.
* Update write_buildinfo_file method signature to distinguish between
the spec prefix and the working directory for the binary cache
package.
* Final changes for v0.11.0 (#6318)
* Fix logo link in README.md to point to the develop branch. (#6969)
* Compiler flag handlers (#6415)
This adds the ability for packages to apply compiler flags in one of
three ways: by injecting them into the compiler wrapper calls (the
default in this PR and previously the only automated choice);
exporting environment variable definitions for variables with
corresponding names (e.g. CPPFLAGS=...); providing them as arguments
to the build system (e.g. configure).
When applying compiler flags using build system arguments, a package
must implement the 'flags_to_build_system_args" function. This is
provided for CMake and autotools packages, so for packages which
subclass those build systems, they need only update their flag
handler method specify which compiler flags should be specified as
arguments to the build system.
Convenience methods are provided to specify that all flags be applied
in one of the 3 available ways, so a custom implementation is only
required if more than one method of applying compiler flags is
needed.
This also removes redundant build system definitions from tutorial
examples
* Fix type issues with setting flag handlers (#6960)
The flag_handlers method was being set as a bound method, but when
reset in the package.py file it was being set as an unbound method
(all python2 issues). This gets the underlying function information,
which is the same in either case.
The bug was uncovered for parmetis in #6858. This is a partial fix.
Included are changes to the parmetis package.py file to make use of
flag_handlers.
* Bump version to 0.11.1
* Added flags to unit tests + OSX build done once per day (#6988)
* Adding flags to codecov reports
* OSX builds are triggered once a day
* Pull R list_urls from upstream.
* travis: removed /usr/local/include/c++ before installing gcc on OSX (#6515) (#7027)
"brew install gcc" fails for travis build because of an existing
/usr/local/include/c++. This commit removes the offending file
as suggested by brew.
* Fix gfortran 7 detection (#7017)
* Add NameError to exceptions caught from configure_args in module generation (#7173)
* Revert "Binary caching: remove symlinks, copy files instead (#9747)"
This reverts commit 058cf81312.
* Make Spack relocate text files in build caches with relative binaries
* add the tfel package
* fix the tfel package
* fix the tfel package
* fix the tfel package
* Taking Adam J. Steward' remarks into account
* fixes trailing white spaces
* Update description
* Update dependencies following @adamjstewart adices
* Style fixes
* Style fixes
* Add java optional support
* add the maintainers attribute (following @alalazo advice), disable interface not selected (following @adamjstewart advice)
* flake8 fixes
* Fix Cast3M and python-bindings support. Python detection is made compatible with cmake'FindPythonLibs module (at least how it is used in TFEL)
* Style fixes
* Style fixes
* Fix test on python version
* Follow @adamjstewart advices: code is much cleaner and readable
* Small fix
* Small fix
* Add comment
* Small fix in cmake option
* try again (trying to overcome Travis CI unstable build process)
* Add support for the MFrontGenericInterfaceSupport project (MGIS)
* Style fixes
* Package documentation update
* Package documentation update
* Fix a typo thanks to Andreas Baumbach review
* Follow Adam J. Stewart advices
* Fix type
* bugfix: add back r's for invalid regexes
* tutorial basics section: fix gcc install version
* version bump: v0.12.1
* bugfix: bring in .travis.yml from develop
* Add new TFEL' versions (3.0.4, 3.1.4 and 3.2.1). Add new MGIS version (1.0.1). Fix MGIS dependency
* merge with spack:develop
* add missing dependency
* new versions of and
* Fix MGIS url. Fix duplicate variant in TFEL
* Fix tfel packaging according to Adam J. Stewart' advices
* Fix flake8 warning
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Peter Scheibel <scheibel1@llnl.gov>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add ADF
* Fix typo and lint
* fix lint again
* one more lint fix
* fix identation
* still stying to fix identation
* one final fix
* import needed libraries
* changes as per reviewer's request
fix setup environment function, enhance recipe
* add import os once again
* chnages as per reviewer's request
When removing packages from a view, extensions were being deactivated
in an arbitrary order. Extensions must be deactivated in preorder
traversal (dependents before dependencies), so when this order was
violated the view update would fail.
This commit ensures that views deactivate extensions based on a
preorder traversal and adds a test for it.
* Set conflicts for qt5 and the Intel compiler
This PR sets a `conflicts` statement for QT5 and the Intel compiler.
* New patches for intel compiles
This commit adds two patches to get QT5 to compile with the intel
compilers. The two patches are very similar but the file being patched
was changed substantially between qt-5.11 and qt-5.12. The patch checks
versions of both GCC and Intel compilers to know when to use overflow
builtis. Essentially, GCC must be >= 5 and Intel must be >= 18.
The sqlite dependency needs the `+column_metadata` variant when the
Intel compiler is used. That is made conditional on the compiler but it
might make sense to make that the default for the sqlite dependency.
Some other changes were made based on testing builds of various QT5
versions with several Intel compilers.
- The libxext dependency is still needed for QT5
- A dependency on libxrender is needed
- The gtk option format needs to be constrained at the qt@5.7 level, not
qt@5.8.
- An extra configure option is needed for the sql plugins RPATH
* Adding a new package, scikit-build, which is very useful for building python extensions
* Update package.py
* Update package.py
Trying to address flake8 corrections
* Update var/spack/repos/builtin/packages/py-scikit-build/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-scikit-build/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-rapidjson: new package at 0.9.1
* py-rapidjson: rename to py-python-rapidjson, use PyPI mirror
* py-python-rapidjson: add missing deps
* python-rapidjson: use short PyPI url
* py-python-rapidjson: remove extra dependencies
* Spack can uninstall unused specs
fixes#4382
Added an option to spack uninstall that removes all unused specs i.e.
build dependencies or transitive dependencies that are left
in the store after the specs that pulled them in have been removed.
* Moved the functionality to its own command
The command has been named 'spack autoremove' to follow the naming used
for the same functionality by other widely known package managers i.e.
yum and apt.
* Speed-up autoremoving specs by not locking and re-reading the scratch DB
* Make autoremove work directly on Spack's store
* Added unit tests for the new command
* Display a terser output to the user
* Renamed the "autoremove" command "gc"
Following discussion there's more consensus around
the latter name.
* Preserve root specs in env contexts
* Instead of preserving specs, restrict gc to the active environment
* Added docs
* Added a unit test for gc within an environment
* Updated copyright to 2020
* Updated documentation according to review
Rephrased a couple of sentences, added references to
`spack find` and dependency types.
* Updated function naming and docstrings
* Simplified computation of unused specs
Since the new approach uses private attributes of the DB
it has been coded as a method of that class rather than a
freestanding function.
* Add platform flag to QT for linux+clang
* Extend QT platform support to more compilers and systems
* Unify QT5 configure options
* fixup! Unify QT5 configure options
* fixup! Unify QT5 configure options
* fixup! Unify QT5 configure options
* Fix newer flake8 and mac qt5 configure
* Add Thirdorder recipe
* Remove white spaces
* Converting recipe to a PythonPackage base class
* remove trailing spaces
* remove line at end of file
* enhance recipe as per reviewer
* fix post_install as requested by reviewer
* rename dir to py-thirderorder
* change checksum to sha256
* py-intervaltree: new package at 3.0.2
* py-intervaltree: fix checksum
* py-intervaltree: add py-setuptools dep
* py-intervaltree: use inclusive ranges
* py-intervaltree: change py-test dep type
Beginning with numpy > 1.16 when using older versions of gcc the
`std=c99` flag must be used. The Intel compiler depends on gcc for its
language extensions so the version of gcc is important. If the version
of gcc used by the Intel compiler is one that requires the `-std=c99`
flag then that flag will have to be used for a build with the Intel
compiler as well.
This PR tests the version of gcc used by the Intel compiler and will
abort the build if the gcc version is < 4.8 and inject the `-std=c99`
flag if >= 4.8 and < 5.1. This will cover the system gcc compiler and
any gcc environment module loaded at build time.
Due to formatting differences, the older version of perl-bioperl was
getting picked up as the preferred version. This PR explicitly sets the
newer version to be preferred.
Because of a bug in the current concretizer,
spack install gromacs
fails because gromacs depends on hwloc (default is v2), and Open MPI
(the default MPI library) depends on hwloc v1.
As discussed in https://github.com/spack/spack/issues/14339, this
workaround should be removed once the concretizer is fixed
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
* create package py-zarr
* specify setuptools versions
* add more dependencies, improve style
* Update var/spack/repos/builtin/packages/py-zarr/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-zarr/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-zarr/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* add dependencies, remove python version constraint
* remove windows specific dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The imports in `spec.py` are getting to be pretty unwieldy.
- [x] Remove all of the `import from` style imports and replace them with
`import` or `import as`
- [x] Remove a number names that were exported by `spack.spec` that
weren't even in `spack.spec`
Previously, `spack test` automatically passed all of its arguments to
`pytest -k` if no options were provided, and to `pytest` if they were.
`spack test -l` also provided a list of test filenames, but they didn't
really let you completely narrow down which tests you wanted to run.
Instead of trying to do our own weird thing, this passes `spack test`
args directly to `pytest`, and omits the implicit `-k`. This means we
can now run, e.g.:
```console
$ spack test spec_syntax.py::TestSpecSyntax::test_ambiguous
```
This wasn't possible before, because we'd pass the fully qualified name
to `pytest -k` and get an error.
Because `pytest` doesn't have the greatest ability to list tests, I've
tweaked the `-l`/`--list`, `-L`/`--list-long`, and `-N`/`--list-names`
options to `spack test` so that they help you understand the names
better. you can combine these options with `-k` or other arguments to do
pretty powerful searches.
This one makes it easy to get a list of names so you can run tests in
different orders (something I find useful for debugging `pytest` issues):
```console
$ spack test --list-names -k "spec and concretize"
cmd/env.py::test_concretize_user_specs_together
concretize.py::TestConcretize::test_conflicts_in_spec
concretize.py::TestConcretize::test_find_spec_children
concretize.py::TestConcretize::test_find_spec_none
concretize.py::TestConcretize::test_find_spec_parents
concretize.py::TestConcretize::test_find_spec_self
concretize.py::TestConcretize::test_find_spec_sibling
concretize.py::TestConcretize::test_no_matching_compiler_specs
concretize.py::TestConcretize::test_simultaneous_concretization_of_specs
spec_dag.py::TestSpecDag::test_concretize_deptypes
spec_dag.py::TestSpecDag::test_copy_concretized
```
You can combine any list option with keywords:
```console
$ spack test --list -k microarchitecture
llnl/util/cpu.py modules/lmod.py
```
```console
$ spack test --list-long -k microarchitecture
llnl/util/cpu.py::
test_generic_microarchitecture
modules/lmod.py::TestLmod::
test_only_generic_microarchitectures_in_root
```
Or just list specific files:
```console
$ spack test --list-long cmd/test.py
cmd/test.py::
test_list test_list_names_with_pytest_arg
test_list_long test_list_with_keywords
test_list_long_with_pytest_arg test_list_with_pytest_arg
test_list_names
```
Hopefully this stuff will help with debugging test issues.
- [x] make `spack test` send args directly to `pytest` instead of trying
to do fancy things.
- [x] rework `--list`, `--list-long`, and add `--list-names` to make
searching for tests easier.
- [x] make it possible to mix Spack's list args with `pytest` args
(they're just fancy parsing around `pytest --collect-only`)
- [x] add docs
- [x] add tests
- [x] update spack completion
I usually want to look at the Travis CI output, but I currently have to
scroll down to see it. This renames checks to be a bit shorter and more
consistent with Travis's naming, and also so that actions appear lower
than travis and codecov in the list of checks.
Test configuration files (except modules.yaml) were in the root level of
test/data, but should really just be in their own directory. The absence
of modules.yaml was also breaking module tests if we got module
preferences after tests started, as the mock modules.yaml was not in the
test directory.
The module hook would previously fail if there were no enabled module types.
- Instead of looking for a `KeyError`, default to empty list when the
config variable is not present.
- Convert lambdas to real functions for clarity.
- Remove legacy yaml_version_check() hook
- Remove the pre_run hook from `hook/__init__.py` and `main.py`
We want to discourage the use of pre-run hooks because they have to run
at startup. To keep Spack fast, we should do things like this lazily
instead of in hooks that require spidering directories full of modules.
* Updated versions and more variants
- Added 'develop' and '3.0.0' versions
- Added 'tau', 'upcxx', 'gotcha', and 'likwid'
* Added conflict handling for +cupti~cuda
* Removed extra cmake args line
Continuing to shave small bits of time off startup --
`spack.cmd.common.arguments` constructs many `Args` objects at module
scope, which has to be done for all commands that import it. Instead of
doing this at load time, do it lazily.
- [x] construct Args objects lazily
- [x] remove the module-scoped argparse fixture
- [x] make the mock config scope set dirty to False by default (like the
regular scope)
This *seems* to reduce load time slightly
Previously, fixtures like `config`, `database`, and `store` were
module-scoped, but frequently used as test function arguments. These
fixtures swap out global on setup and restore them on teardown. As
function arguments, they would do the right set-up, but they'd leave the
global changes in place for the whole module the function lived in. This
meant that if you use `config` once, other functions in the same module
would inadvertently inherit the mock Spack configuration, as it would
only be torn down once all tests in the module were complete.
In general, we should module- or session-scope the *STATE* required for
these global objects (as it's expensive to create0, but we shouldn't
module-or session scope the activation/use of them, or things can get
really confusing.
- [x] Make generic context managers for global-modifying fixtures.
- [x] Make session- and module-scoped fixtures that ONLY build filesystem
state and create objects, but do not swap out any variables.
- [x] Make seeparate function-scoped fixtures that *use* the session
scoped fixtures and actually swap out (and back in) the global
variables like `config`, `database`, and `store`.
These changes make it so that global changes are *only* ever alive for a
singlee test function, and we don't get weird dependencies because a
global fixture hasn't been destroyed.
`PackagePrefs` has had a class-level cache of data from `packages.yaml` for
a long time, but it complicates testing and leads to subtle errors,
especially now that we frequently manipulate custom config scopes and
environments.
Moving the cache to instance-level doesn't slow down concretization or
the test suite, and it just caches for the life of a `PackagePrefs`
instance (i.e., for a single cocncretization) so we don't need to worry
about global state anymore.
- [x] Remove class-level caches from `PackagePrefs`
- [x] Add a cached _spec_order object on each `PackagePrefs` instance
- [x] Remove all calls to `PackagePrefs.clear_caches()`
Commands like `spack blame` were printig poorly when redirected to files,
as colify reverts to a single column when redirected. This works for
list data but not tables.
- [x] Force a table by always passing `tty=True` from `colify_table()`
In "spack info" the Variants header currently has two blank
lines under it. That's too much. It looks like the actual
content belongs to something else.
Instead underline the headers to make things more obvious.
* Add Avizo Recipe
* make changes as per review
* fix home url and linting
* Fix url
* fix identation
* change checksum to sha256 instead of md5
* fix installation
* fix lint
* fix identation
* make it compatible with python 2.6
* enhancing recipe and fixing avizo licensing
changes as per suggestions from reviewer; fix licensing
* fix identation
* use new setup_run_environment function
This PR moves build smoke tests from TravisCI and migrates them to Github Actions. The result is that build tests are performed in parallel with unit tests and they don't hog additional resources on Travis. The workflow will not run if a PR only changes packages in the built-in repository, but will always run on pushes to develop or master.
* Removed build tests from Travis and passed them to Github Actions
* Store ~/.ccache in Github Actions cache
* Add filters on paths and make sure this workflow don't run
* Use paths-ignore and exclude only files in the built-in repo
* Added a badge to README.md
This commit removes the `python_version.py` unit test module
and the vendored dependencies `pyqver2.py` and `pyqver3.py`.
It substitutes them with an equivalent check done using
`vermin` that is run as a separate workflow via Github Actions.
This allows us to delete 2 vendored dependencies that are unmaintained
and substitutes them with a maintained tool.
Also, updates the list of vendored dependencies.
Before this commit we used to run the entire unit test suite
in the presence of a failure. Since we currently rely a lot
on the state of the filesystem etc. the end report was most
of the time showing spurious failures that were a consequence
of the first failing test.
This PR makes unit tests exit at the first failing test
Also, pin codecov at v4.5.4 (last one supporting Python 2.6)
`ViewDescriptor.regenerate()` calls `get_all_specs()`, which reads
`spec.yaml` files, which is slow. It's fine to do this once, but
`view.remove_specs()` *also* calls it immediately afterwards.
- [x] Pass the result of `get_all_specs()` as an optional parameter to
`view.remove_specs()` to avoid reading `spec.yaml` files twice.
`ViewDescriptor.regenerate()` was copying specs and stripping build
dependencies, which clears `_hash` and other cached fields on concrete
specs, which causes a bunch of YAML hashes to be recomputed.
- [x] Preserve the `_hash` and `_normal` fields on stripped specs, as
these will be unchanged.
`spack install` previously concretized, writes the entire environment
out, regenerated views, then wrote and regenerated views
again. Regenerating views is slow, so ensure that we only do that once.
- [x] add an option to env.write() to skip view regeneration
- [x] add a note on whether regenerate_views() shouldn't just be a
separate operation -- not clear if we want to keep it as part of write
to ensure consistency, or take it out to avoid performance issues.
Environments need to read the DB a lot when installing all specs.
- [x] Put a read transaction around `install_all()` and `install()`
to avoid repeated locking
Our `LockTransaction` class was reading overly aggressively. In cases
like this:
```
1 with spack.store.db.read_transaction():
2 with spack.store.db.write_transaction():
3 ...
```
The `ReadTransaction` on line 1 would read in the DB, but the
WriteTransaction on line 2 would read in the DB *again*, even though we
had a read lock the whole time. `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.
- [x] `Lock.acquire_write()` return `False` in cases where we already had
a read lock.
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:
```
1 with spack.store.db.read_transaction():
2 with spack.store.db.write_transaction():
3 ...
4 with spack.store.db.read_transaction():
...
```
The WriteTransaction on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.
The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.
The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it. Since the DB was never written, we got stale data.
- [x] Make `Lock.release_write()` return `True` whenever we release the
*last write* in a nest.
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released. This is
pretty obviously wrong.
- [x] Refactor `Lock` so that a release function can be passed to the
`Lock` and called *only* when a lock is really released.
- [x] Refactor `LockTransaction` classes to use the release function
instead of checking the return value of `release_read()` / `release_write()`
`ViewDescriptor.regenerate()` checks repeatedly whether packages are
installed and also does a lot of DB queries. Put a read transaction
around the whole thing to avoid repeatedly locking and unlocking the DB.
`Environment.added_specs()` has a loop around calls to
`Package.installed()`, which can result in repeated DB queries. Optimize
this with a read transaction in `Environment`.
Checks for deprecated specs were repeatedly taking out read locks on the
database, which can be very slow.
- [x] put a read transaction around the deprecation check
BundlePackages use a noop fetch strategy. The mirror logic was assuming
that the fetcher had a resource to cach after performing a fetch. This adds
a special check to skip caching if the stage is associated with a
BundleFetchStrategy. Note that this should allow caching resources
associated with BundlePackages.
When updating a mirror, Spack was re-retrieving all patches (since the
fetch logic for patches is separate). This updates the patch logic to
allow the mirror logic to avoid this.
Since cache_mirror does the fetch itself, it also needs to do the
checksum itself if it wants to verify that the source stored in the
mirror is valid. Note that this isn't strictly required because fetching
(including from mirrors) always separately verifies the checksum.
The targets for the cosmetic paths in mirrrors were being calculated
incorrectly as of fb3a3ba: the symlinks used relative paths as targets,
and the relative path was computed relative to the wrong directory.
When creating a cosmetic symlink for a resource in a mirror, remove
it if it already exists. The symlink is removed in case the logic to
create the symlink has changed.
* Some packages (e.g. mpfr at the time of this patch) can have patches
with the same name but different contents (which apply to different
versions of the package). This appends part of the patch hash to the
cache file name to avoid conflicts.
* Some exceptions which occur during fetching are not a subclass of
SpackError and therefore do not have a 'message' attribute. This
updates the logic for mirroring a single spec (add_single_spec)
to produce an appropriate error message in that case (where before
it failed with an AttributeError)
* In various circumstances, a mirror can contain the universal storage
path but not a cosmetic symlink; in this case it would not generate
a symlink. Now "spack mirror create" will create a symlink for any
package that doesn't have one.
`ViewDescriptor.regenerate()` calls `get_all_specs()`, which reads
`spec.yaml` files, which is slow. It's fine to do this once, but
`view.remove_specs()` *also* calls it immediately afterwards.
- [x] Pass the result of `get_all_specs()` as an optional parameter to
`view.remove_specs()` to avoid reading `spec.yaml` files twice.
`ViewDescriptor.regenerate()` was copying specs and stripping build
dependencies, which clears `_hash` and other cached fields on concrete
specs, which causes a bunch of YAML hashes to be recomputed.
- [x] Preserve the `_hash` and `_normal` fields on stripped specs, as
these will be unchanged.
`spack install` previously concretized, writes the entire environment
out, regenerated views, then wrote and regenerated views
again. Regenerating views is slow, so ensure that we only do that once.
- [x] add an option to env.write() to skip view regeneration
- [x] add a note on whether regenerate_views() shouldn't just be a
separate operation -- not clear if we want to keep it as part of write
to ensure consistency, or take it out to avoid performance issues.
Environments need to read the DB a lot when installing all specs.
- [x] Put a read transaction around `install_all()` and `install()`
to avoid repeated locking
Our `LockTransaction` class was reading overly aggressively. In cases
like this:
```
1 with spack.store.db.read_transaction():
2 with spack.store.db.write_transaction():
3 ...
```
The `ReadTransaction` on line 1 would read in the DB, but the
WriteTransaction on line 2 would read in the DB *again*, even though we
had a read lock the whole time. `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.
- [x] `Lock.acquire_write()` return `False` in cases where we already had
a read lock.
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:
```
1 with spack.store.db.read_transaction():
2 with spack.store.db.write_transaction():
3 ...
4 with spack.store.db.read_transaction():
...
```
The WriteTransaction on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.
The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.
The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it. Since the DB was never written, we got stale data.
- [x] Make `Lock.release_write()` return `True` whenever we release the
*last write* in a nest.
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released. This is
pretty obviously wrong.
- [x] Refactor `Lock` so that a release function can be passed to the
`Lock` and called *only* when a lock is really released.
- [x] Refactor `LockTransaction` classes to use the release function
instead of checking the return value of `release_read()` / `release_write()`
`ViewDescriptor.regenerate()` checks repeatedly whether packages are
installed and also does a lot of DB queries. Put a read transaction
around the whole thing to avoid repeatedly locking and unlocking the DB.
Users can now list mirrors of the main url in packages.
- [x] Instead of just a single `url` attribute, users can provide a list (`urls`) in the package, and these will be tried by in order by the fetch strategy.
- [x] To handle one of the most common mirror cases, define a `GNUMirrorPackage` mixin to handle all the standard GNU mirrors. GNU packages can set `gnu_mirror_path` to define the path within a mirror, and the mixin handles setting up all the requisite GNU mirror URLs.
- [x] update all GNU packages in `builtin` to use the `GNUMirrorPackage` mixin.
* Add symbols patch
* Apply symbols patch to pgmath
* Add github issue number for symbols patch.
* Add naromero77 as a maintainer.
* Patch only applied to March 2019 release and master.
* Record that old versions of ROOT don't support modern GCC
* Well, actually I don't know about 6.07
* Fix typo and follow odd version recommendation from @chissg
* Add QE 6.5
* Support for serial HDF5 case with serial (no mpi) QE is now supported but requires a patch for 6.4.1 and 6.5.
* Add naromero77 as a maintainer.
* Start cinema package
* Remove boilerplate and add description
* Formatting for pep8
* Correct milestone tag
* 'master' instead of 'develop'
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Two variants, both with numpy and other small changes
* When +image for scikit
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- Add an optional argument so that `possible_dependencies()` will report
missing dependencies.
- Add a test to ensure it works.
- Ignore missing dependencies in `possible_dependencies()` by default.
- this version allows getting possible dependencies of multiple packages
or specs at once.
- New method handles calling `PackageBase.possible_dependencies` multiple
times and passing `visited` dict around.
`Environment.added_specs()` has a loop around calls to
`Package.installed()`, which can result in repeated DB queries. Optimize
this with a read transaction in `Environment`.
Checks for deprecated specs were repeatedly taking out read locks on the
database, which can be very slow.
- [x] put a read transaction around the deprecation check
* Add dependencies for hpcrypt
* address review comments
* flake
* license-fix
* fix checksums
* Update py-hvace homepage
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* update py-hvac url
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
doesn't understand a custom, user-defined compiler version. However, if
the compiler's version check fails, you can't build anything with the
custom compiler.
- [x] Be more lenient: fall back to the custom compiler version and use
it verbatim if the version check fails.
`pgcc -V` was failing on power machines because it returns 2 (despite
correctly printing version information). On x86_64 machines the same
command returns 0 and doesn't cause an error.
- [x] Ignore return value of 2 for pgcc when doign a version check
* add new package : filebench
* remove alpha version and duplicated autoheader cmd
* refine automake cmd in sh()
* refine filebench url as a stable tarball link
* root: Rationalize and improve version, variant and ROOT option handling.
* Completely re-vamp CMake option handling for readability and maintainability:
* Three categories of option: control, builtin and feature, alphabetically sorted.
* Each option is described as a list: an option name followed by an optional value which is either Boolean or a string representing the name of a variant. If the value is omitted, it defaults to the option name.
* New functions `_process_opts()` and `_process_opt()` (nested) to turn all supplied option/value specifications into CMake arguments.
* Remove overly-terse per-option comments in favor of (much) more comprehensive notes in README.md.
* Variants and conflicts:
* Remove `test` variant in favor of pegging ROOT `testing` option to the value of `self.run_tests` since the install is unaffected, per ROOT developer.
* Remove commented-out and never-functional variants: `asimage`, `avahi`, `kerberos`, `ldap`, `libcxx`, `odbc`, `oracle`, `pythia8`, `xinetd`.
* New variant `vmc` (default `OFF`) to control the Virtual Monte Carlo interface.
* Conflict: `+opengl` is incompatible with `~x`.
* Conflict: `http` is now an unconditional conflict due to dependency issues (see README.md).
* Remove commented-out and non-existent dependencies `avahi`, `kerberos`, `ldap`, `libcxx`, `odbc`, `oracle`, `pythia`, `veccore` (per #13949).
* New and changed options:
* Option `pch` was inadvertently set to `OFF` due to its dependence on a nonexistent variant `pch`. As it happens its value is ignored in the ROOT configuration handling, so there was no deleterious effect. It has been fixed to `ON` to better reflect actual behavior pending enablement of tuntime C++ modules.
* Add new versions 6.18.0{0,2,4}:
* Require CMake 3.9 for 6.18.00+.
* Add conflicts for variants `qt4` and `table` representing ROOT build options for which support was discontinued. Remove redundant conflict on \@master.
* C++ standard is now specified with `-DCMAKE_CXX_STANDARD=X` rather than `-Dcxx=X`.
* Remove old version 5.34.38 (wrong build system).
See README.md for more details of option-related changes.
* Flake8
* `rpath` option is a control option rather than a feature.
* Add new DD4hep release and some forgotten build requirements
* PR review suggestions
Use master naming convention for development branch, and put versions in decreasing order.
* rose: Update boost dependency for rose
* rose: Updated rose to version 0.9.12.45
* rose: Updated jdk dependency
* rose: Updated rose to version 0.9.13.0
* rose: Fixed formatting
* rose: Added maintainer and switch dependency to java@8
* Added package for Half C++ header-only library.
Fixed an checksum for Hydrogen 1.3.2. Cleaned up the Clara package to
not create an empty bin directory.
* Fixed flake8
* Added maintainer
* add new package : cosbench
* add cosbench depends and remove unstable version
* Update var/spack/repos/builtin/packages/cosbench/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cosbench/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Vendors for ARM come out of `/proc/cpuinfo` as hex numbers instead of readable strings.
- Add support for associating vendor names with the hex numbers.
- Also move these mappings from Python code to `microarchitectures.json`
- Move darwin feature name mappings to `microarchitectures.json` as well
Before this commit we used to run the entire unit test suite
in the presence of a failure. Since we currently rely a lot
on the state of the filesystem etc. the end report was most
of the time showing spurious failures that were a consequence
of the first failing test.
This PR makes unit tests exit at the first failing test
Also, pin codecov at v4.5.4 (last one supporting Python 2.6)
* when constructing package hash, default to including a method in the content hash if we can't determine whether it would be included by examining the AST
* add a test for updated content-hash calculations
* refactor content hash tests to eliminate repeated lines
* docker: add missing module to ubuntu images
* docker: fix issue with missing locale
* docker: one package per line + rm python2 support
* docker: ubuntu image also needs 'file' for buildcache creation
BundlePackages use a noop fetch strategy. The mirror logic was assuming
that the fetcher had a resource to cach after performing a fetch. This adds
a special check to skip caching if the stage is associated with a
BundleFetchStrategy. Note that this should allow caching resources
associated with BundlePackages.
When updating a mirror, Spack was re-retrieving all patches (since the
fetch logic for patches is separate). This updates the patch logic to
allow the mirror logic to avoid this.
Since cache_mirror does the fetch itself, it also needs to do the
checksum itself if it wants to verify that the source stored in the
mirror is valid. Note that this isn't strictly required because fetching
(including from mirrors) always separately verifies the checksum.
The targets for the cosmetic paths in mirrrors were being calculated
incorrectly as of fb3a3ba: the symlinks used relative paths as targets,
and the relative path was computed relative to the wrong directory.
When creating a cosmetic symlink for a resource in a mirror, remove
it if it already exists. The symlink is removed in case the logic to
create the symlink has changed.
PR #13975 added makefile filtering to replace gcc/g++ with the spack
compiler. This conflicts with other filtering that is done in the package to
add paths for dependencies. The text of the dependency paths might
have 'gcc' in the path name, depending on the install_path_scheme, and
that was being replaced by the new compiler filters. That would mangle
the path to the dependecy resulting in a failed build.
This PR moves the compiler filters to be before the other filters to
make sure that the compiler is set before the dependency paths.
* Add missing dependency on setuptools to py-subprocess32
* Update package.py
* Update var/spack/repos/builtin/packages/py-subprocess32/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
PR #10589 introduced a libiconv dependency to doxygen. This causes
problems on Linux systems, since the iconv symbols are included in libc,
which causes CMake to use the external header but not the external
library. Work around this by always using the external libiconv.
The xed CLI is handy, and can be gotten by building the examples in the
intel-xed package. This PR builds the examples and installs the xed CLI.
It would also be possible to install more of the example binaries if
someone thinks they are useful.
* openmolcas: new package at 19.11
* openmolcas: fill description
* openmolcas: rewrite using CMakePackage
* openmolcas: add py-six dep
* openmolcas: use setup_build_environment, setup_run_environment
* openmolcas: remove redundant cmake dep
* openmolcas: explicitly cast Executable to str
* pytest: add __init__ files for all test subdirs
* add licenses to empty files
* Fix Sphinx warning message about comment within docstring
* Further fixes to Sphinx docstring
Recent commit e9ee9eaf (#13989) fixed testing version ranges inside
patch when clauses. Previously, it was necessary to write all revs
individually for packages with multiple length version numbers (2019
and 2019.1).
This fixes the build for the old 2017.* versions.
* fix docstring in generate_package_index() refering to "public" keys as "signing" keys
* use explicit kwargs in push_to_url()
* simplify url_util.parse() per tgamblin's suggestion
* replace standardize_header_names() with the much simpler get_header()
* add some basic tests
* update s3_fetch tests
* update S3 list code to strip leading slashes from prefix
* correct minor warning regression introduced in #11117
* add more tests
* flake8 fixes
* add capsys fixture to mirror_crud test
* add get_header() tests
* use get_header() in more places
* incorporate review comments
This PR allows virtual packages to be added to the specs list using
the add command.
Virtual packages are already allowed in named lists in spack
environments/stacks, and they are already allowed in the specs list
when added using the yaml directly.
I have, more than once, tried to install the list of things that need
to build the docs, only to discover that the list doesn't use Spack's
package names. I'm tired of facepalming....
While I was there I touched up the prose about activating the new
Python packages; activating a python package doesn't add anything to
your PYTHONPATH, it links things into a directory that's *already* on
your PYTHONPATH. Note that this all presupposes that you're using
that same python....
* elpa: port to microarch
* flake8
* Update package.py
* Update var/spack/repos/builtin/packages/elpa/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Adding libid3tag package for supporting feh
* Adding libexif package for supporting feh
* Adding imlib2 package for supporting feh
* Adding the feh package
* Rewording the cleanup function for libid3tag
* Fixing some flake8 issues for imlib2 and libid3tag
* Adding sources for the patches and swapping rm for os.remove
* Flake8 fixes
* swapping md5sums for sha256sums
* CUDA HeaderList: Unit Test
* Spec Header Dirs: Only first include/
Avoid matching recurringly nested include paths that usually
refer to internally shipped libraries in packages.
Example in CUDA Toolkit, shipping a libc++ fork internally
with libcu++ since 10.2.89:
`<prefix>/include/cuda/some/more/details/include/` or
`<prefix>/include/cuda/std/detail/libcxx/include`
regex: non-greedy first match of include
Co-Authored-By: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* CUDA: Re-Enable 10.2.89 as Default
* ibm-java: add version 8.0.6.0
Add version 8.0.6.0 and remove 8.0.5.30. IBM is fairly aggressive
about removing old versions, and 8.0.5.30 is no longer available from
their download site.
* Restore version 8.0.5.30, although it is no longer available for
download from IBM.
* Addition of repository branches to maestrowf.
* Addition of 1.1.5dev pre-release.
* Correction of a merge conflict.
* Addition of Maestro release 1.1.5
* Addition of Maestro release 1.1.6 (removal of 1.1.5)
* Sets 1.1.6 to the preferred version.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Tweak to the url to point to latest.
* Fix Apex and OTF2 support
- Comment out apex as a dependncy: it is bundled with HPX.
- Apply a patch to v1.3.0 to correctly build with APEX.
- Add otf2 as a dependency when APEX is enabled.
* Remove depends_on('apex')
* augustus: Set compile commands for each compiler and Fix for using 'boost' on Spack
* fix for flake8
* delete 'string' args
* Fix args of filter_file func
* Fix args of other filter_file func
* Add patch to fix issue building current llvm develop master on power9
* Conform to proper block commenting
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* add tensorflow
Change-Id: Id778c68d148cc42f0b478a9d10a8f937cb54cdc6
* make bazel and tensorflow build
Change-Id: Iae9005e8f4dcc8f1ed36ea9337d2430aeebb291f
* fix flake8
Change-Id: Ib05529dd796eab4a8855a5d7775cc4efea8e479d
* 2nd flake8 attempt
Change-Id: I46224be3a374b2a65793048b0c5178ea64adbd78
* replace md5 sums with sha256
* add version 1.13.2
* bazel() -> bazel('build',...
* specify versions of bazel dependency
* build with CUDA
* add TODOs
* add more todo"s
* improve enum34 dependency
* py-future is a dependency as of v1.14
* Update var/spack/repos/builtin/packages/tensorflow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/tensorflow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/tensorflow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/tensorflow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* enable nccl, cuda by default
* explain patches
* add todo
* remove unnecessary copt_flag
* use join
* join argument must be an iterable
* split long line; use same opts for non-cuda build
* without opt flags, configure hangs
* introduce build phases; re-arrange
* undo mistake
* restore unset tmp_path
* as of v1.14, nccl_install_path is parsed correctly, hence change ...prefix.lib to ...prefix
* now, version 1.14 compiles successfully with cuda
* add version 2.1.0
* specify bazel dependency for version 2.1.0-rc0
* account for deprecated bazel opts for v2.1.0-rc0
* disable mkldnn contraction kernel
* Flake8 fixes
* md5 -> sha256
* Fix TF and TF-estimator version deps
* Don't just comment out patch
* Add myself as a maintainer
* Patch py-astor to support newer py-setuptools
* Add more versions and bazel version constraints
* Add a build phase
* Add note about configure interactivity
* dev-build -> build-env
* Disable iOS build
* Use correct optimization flags
* Add variants for all possible features
* nccl isn't always a dependency
* Specify correct dependency versions for each release
* Libs may not be in lib or lib64
* Add py-opt-einsum package
* Add newer version of py-protobuf
* Add newer version of py-wrapt
* Fix Python 2.6 syntax error
* Code review
* Set more env vars for older versions
* Add more env vars, fix bazel versions, add conflicts
* Fix config options
* Specify version that support --config args
* Add py-future dependency for Python 2
* Fix cuda config flag and compute capabilities
* Fix installation on macOS, add unit tests
* Override cuda variant default to True on non-macOS
* Rename tensorflow to py-tensorflow
* Has to extend something
* Fix os.symlink call
* convert cuda_arc values to capabilities
* restore nccl prefix path for v1.13.1
* Revert to v2
* Remove extraneous period
* Add new version of jdk/openjdk
* More stable cuda_arch formatting
* Fix bazel unit tests
* Fix symlinking
* Fix unit tests
* +gcp by default until build error figured out
* apply strict constraint checks for patches, otherwise Spack may incorrectly treat a version range constraint as satisfied when mixing x.y and x.y.z versions
* add mixed version checks to version comparison tests
v0.13.2
This release contains major performance improvements for Spack environments, as well as some bugfixes and minor changes.
* allow missing modules if they are blacklisted (#13540)
* speed up environment activation (#13557)
* mirror path works for unknown versions (#13626)
* environments: don't try to modify run-env if a spec is not installed (#13589)
* use semicolons instead of newlines in module/python command (#13904)
* verify.py: os.path.exists exception handling (#13656)
* Document use of the maintainers field (#13479)
* bugfix with config caching (#13755)
* hwloc: added 'master' version pointing at the HEAD of the master branch (#13734)
* config option to allow gpg warning suppression (#13744)
* fix for relative symlinks when relocating binary packages (#13727)
* allow binary relocation of strings in relative binaries (#13724)
`spack module loads` and `spack module find` previously failed if any upstream modules were missing. This prevented it from being used with upstreams (or, really, any spack instance) that blacklisted modules.
This PR makes module finding is now more lenient (especially for blacklisted modules).
- `spack module find` now does not report an error if the spec is blacklisted
- instead, it prints a single warning if any modules will be omitted from the loads file
- It comments the missing modules out of the loads file so the user can see what's missing
- Debug messages are also printed so users can check this with `spack -d...`
- also added tests for new functionality
`spack module loads` and `spack module find` previously failed if any upstream modules were missing. This prevented it from being used with upstreams (or, really, any spack instance) that blacklisted modules.
This PR makes module finding is now more lenient (especially for blacklisted modules).
- `spack module find` now does not report an error if the spec is blacklisted
- instead, it prints a single warning if any modules will be omitted from the loads file
- It comments the missing modules out of the loads file so the user can see what's missing
- Debug messages are also printed so users can check this with `spack -d...`
- also added tests for new functionality
* Fixed x86-64 optimization flags for clang
* Fixed expected results in unit tests
Before the flags used where the one for llc, the underlying compiler from LLVM IR to machine assembly. It turns out that the semantic of `-march`, `-mtune` and `-mcpu` changes from clang front-end to llc.
I found no definitive reference for the flags submitted in this PR, but I checked the assembly on a vectorizable function using Godbolt's web-site.
* package for Simmetrix SimModSuite
* simmodsuite: passes flake8
* simmetrix: add version, set cmake prefix path
A given install will either use the libs built on rhel7 or rhel6.
For now, I'm sticking with the non-spack install convention of
placing the libraries into sub-directories named according to their
build process (os + compiler).
* simmetrix: add older version
* simmetrix: set build env paths
easier to build pumi using CMAKE_PREFIX_PATH
* simmetrix: address review comments
* simmetrix: add new version and remove old one
* simmetrix: flake8 fixes
* simmodsuite: oslib var is in self
* simmodsuite: update version and checksum
* simodsuite: set LD_LIBRARY_PATH for cad kernels
* update license
* update setup_environment calls
* increase indentation for flake8
* python3.8 flake8 fixes
* use spack consistent naming
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* sha256 required, update versions and hashes
* Added build dependency on gawk
* Use virtual depdendency
* Added patch to prepare libgpg-error for use with gawk@5
* Added reasoning with link for need for patch
* Add a transaction around repeated calls to `spec.prefix` in the activation process
* cache the computation of home in the python package to speed up setting deps
* ensure that module-scope variables are only set *once* per module
* Add a transaction around repeated calls to `spec.prefix` in the activation process
* cache the computation of home in the python package to speed up setting deps
* ensure that module-scope variables are only set *once* per module
* amber: Improved package.py and added version 18
- Added amber 18 with ambertools 19
- Added all available patches
- Added +update variant to use the self update
- Added +openmp variant to get openmp optomizations
- Added +x11 variant when possible
- Splitted amber 16 and 18 dependencies
- We now detect the copiler type and compile accordingly
- Added cray variant which is a bit special (untested)
- Improved detection of possible cuda versions
- All compilation optimizations +mpi +openmp +cuda are compatible
- Updated to use setup_build_environment(), setup_run_environment()
* dealii: Added 'threads' variant that controls the TBB dependency (#13931)
* dealii: Added 'threads' variant that controls the DEAL_II_WITH_THREADS cmake option and the dependency on Intel TBB
* Update var/spack/repos/builtin/packages/dealii/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* amber: Improved package.py and added version 18
- Added amber 18 with ambertools 19
- Added all available patches
- Added +update variant to use the self update
- Added +openmp variant to get openmp optomizations
- Added +x11 variant when possible
- Splitted amber 16 and 18 dependencies
- We now detect the copiler type and compile accordingly
- Added cray variant which is a bit special (untested)
- Improved detection of possible cuda versions
- All compilation optimizations +mpi +openmp +cuda are compatible
- Updated to use setup_build_environment(), setup_run_environment()
* amber: Adding missing flex and bison dependencies
* Removed cray variant; flex and bison now build only
* amber: Improved package.py and added version 18
- Added amber 18 with ambertools 19
- Added all available patches
- Added +update variant to use the self update
- Added +openmp variant to get openmp optomizations
- Added +x11 variant when possible
- Splitted amber 16 and 18 dependencies
- We now detect the copiler type and compile accordingly
- Added cray variant which is a bit special (untested)
- Improved detection of possible cuda versions
- All compilation optimizations +mpi +openmp +cuda are compatible
- Updated to use setup_build_environment(), setup_run_environment()
* amber: Adding missing flex and bison dependencies
* Removed cray variant; flex and bison now build only
* dealii: Fixed flake8 issues
* amber: corrected typo
* amber: Removed unused variant python
Add a line to .gitattributes so that `git grep -p` shows function names
properly for `*.py` files. Without this, the class name is shown instead
of the function for python files.
This also causes diff output to use proper functions as hunk headers in
`diff` output.
Here's an example with `git grep -p`.
Before:
$ git grep -p spack_cc var/spack/repos/builtin/packages/athena
var/spack/repos/builtin/packages/athena/package.py=class Athena(AutotoolsPackage):
var/spack/repos/builtin/packages/athena/package.py: env.set('CC', spack_cc)
var/spack/repos/builtin/packages/athena/package.py: env.set('LDR', spack_cc)
After:
$ git grep -p spack_cc var/spack/repos/builtin/packages/athena
var/spack/repos/builtin/packages/athena/package.py= def setup_build_environment(self, env):
var/spack/repos/builtin/packages/athena/package.py: env.set('CC', spack_cc)
var/spack/repos/builtin/packages/athena/package.py: env.set('LDR', spack_cc)
Here's an example with `diff`.
Before:
$ git show c5da94eb58
[...]
@@ -28,6 +29,7 @@ print(u'\\xc3')
# make it executable
fs.set_executable(script_name)
+ filter_shebangs_in_directory('.', [script_name])
# read the unicode back in and see whether things work
script = ex.Executable('./%s' % script_name)
After:
$ git show c5da94eb58
[...]
@@ -28,6 +29,7 @@ def test_read_unicode(tmpdir):
# make it executable
fs.set_executable(script_name)
+ filter_shebangs_in_directory('.', [script_name])
# read the unicode back in and see whether things work
script = ex.Executable('./%s' % script_name)
`mirror_archive_path` was failing to account for the case where the fetched version isn't known to Spack.
- [x] don't require the fetched version to be in `Package.versions`
- [x] add regression test for mirror paths when package does not have a version
* dealii: Added 'threads' variant that controls the DEAL_II_WITH_THREADS cmake option and the dependency on Intel TBB
* Update var/spack/repos/builtin/packages/dealii/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Created an initial recipe for Sensei
* Cleanup syntax
* Small fixes for the Sensei recipe
* Cosmetic fixes to comply with PEP8
* More cosmetic fixes before PR
* Added more documentation before PR
* Fixed flake8 errors
* Fixes following PR review
* Fixes to pass Flake8 passes
* Some changes following PR review and support for SENSEI 3
* Update var/spack/repos/builtin/packages/sensei/package.py
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
* Fixed Flake8 errors
Commit 78724357 added versions 2019.5 to 2019.8 but failed to update
the patches for these versions.
1. gcc_generic-pedantic patch -- include this up through 2019.5. This
was fixed in the TBB source tree in 2019.6.
2. tbb_cmakeConfig patch -- this needs to be modified (different file)
for 2019.5 and later.
3. tbb_gcc_rtm_key patch -- replace this with filter_file. This is
simpler and eliminates the need to update the patch whenever the
surrounding context changes.
* dont add perl bin directory to PATH when setting up env (this is already handled by spack core in a way that omits system dirs); also consolidate repeated logic between build/run env setup.
* the bin/ dir of each dependency is already added to PATH in Spack core, so there is no need to do this in the Perl package
* BLD: enforce C++11 std for boost + xl_r
* the spack `cxxstd` variant is not sufficient to enforce
`-std=c++11` usage in boost compile lines when `xl_r` compiler
spec is in use; while it would be nice if this were fixed
in a boost config file somewhere, for now this patch
allows boost to build on POWER9 with
an %xl_r compiler spec if the user specifies i.e.,:
`spack install boost@1.70.0+mpi cxxstd=11 %xl_r@16.1.1.5`
* Update var/spack/repos/builtin/packages/boost/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
The documentation states that Spack builds R without the recommmened
packages, with Spack handling the build of those packages to satisfy
dependencies. From the docs:
> Spack explicitly adds the --without-recommended-packages flag to
> prevent the installation of these packages. Due to the way Spack
> handles package activation (symlinking packages to the R installation
> directory), pre-existing recommended packages will cause conflicts for
> already-existing files. We could either not include these recommended
> packages in Spack and require them to be installed through
> --with-recommended-packages, or we could not install them with R and
> let users choose the version of the package they want to install. We
> chose the latter.
However, this is not what Spack is actually doing. The
`--without-recommended` configure option is not passed to R and
therefore those packages are built. This prevents R extension activation
from working as files in the recommended packages installed with R will
block linking of file from the respective `r-` packages.
This PR adds the `--without-recommended` flag to the configure options
of the R package. This will then have the Spack R build match what is
documented.
* Replace git-based Bioconductor R packages
The current collection of bioconductor packages tend to have scattered
dependencies and missing versions. This commit replaces git-based
packages with tool-generated Spack package recipes with correct
dependencies and descriptions in place.
* Fix some broken package names, add periods to title docstrings
* r-clue: new package at 0.3-57
* r-genomeinfodbdata: add 1.2.1
* r-gofuncr: new package at 1.4.0
* r-pfam-db: add 3.8.2
* Add missed package r-genelendatabase
* update r-goseq package
* update r-glimma package
* update r-rots package
* r-org-hs-eg-db: add 3.8.2
* r-vgam: fix incorrect R version
* r-rnaseqmap: new package at 2.42.0
* r-rhdf5lib: new package at 1.6.0
* r-scrime: new package at 1.3.5
* r-delayedmatrixstats: new package at 1.6.0
* r-hdf5array: new package at 1.12.1
* r-biocfilecache: new package at 1.8.0
* r-ctc: add new versions, dependencies
* r-genemeta: new package at 1.56.0
* r-scrime: fix flake8
* r-ensembldb: add missing dependencies
* Added missing dependencies to packages with certain DESCRIPTIONS
* r-mapplots: new package at 1.5.1
* r-beachmat: new package at 2.0.0
* r-beeswarm: new package at 0.2.3
* r-biocneighbors: new package at 1.2.0
* r-biocsingular: new package at 1.0.0
* r-ecp: new package at 3.1.1
* r-enrichplot: new package at 1.4.0
* r-europepmc: new package at 0.3
* r-ggbeeswarm: new package at 0.6.0
* r-ggplotify: new package at 0.0.3
* r-ggraph: new package at 1.0.2
* r-gridgraphics: new package at 0.4-1
* r-rcppannoy: new package at 0.0.12
* r-rcpphnsw: new package at 0.1.0
* r-rsvd: new package at 1.0.1
* r-scater: new package at 1.12.2
* r-singlecellexperiment: new package at 1.6.0
* r-tximport: new package at 1.12.3
* r-upsetr: new package at 1.4.0
* r-vioplot: new package at 0.3.2
* r-readr: add 1.3.1
* r-matrixstats: add 0.54.0
* r-ecp: flake8 fix
* r-biocmanager: new package at 1.30.4
* update bioconductor packages requiring BiocManager, new versions
* r-lambda-r: add 1.2.3
* r-vegan: add 2.5-5
* r-cner, r-rcppannoy, r-reportingtools, r-rsvd: add missing newlines at EOF
* r-chemometrics: flake8 fixes
* r-vgam: flake8 fixes
* CRAN packages: use cloud.r-project.org
* Use DESCRIPTION for R version constraints over bioconductor releases
* Update missed packages ABAData, acde, affydata
* Update remaining missed packages
* bio: Drop 'when' clause from first checksummed versions
* bio: improve package description generation logic
* r-genomeinfodbdata: use explicit sha256 sums
* r-pfam-db: update dependencies, add 3.10.0
* update r-org-hs-eg-db
* r-dirichletmultinomial: re-add gsl
* r-polyclip: new package at 1.10-0
* r-farver: new package at 1.1.0
* r-tweenr: new package at 1.0.1
* r-ggforce: new package at 0.3.1
* r-ggforce: remove redundant dep
* r-ggraph: add missing deps
* r-rcpphnsw: remove redundant depends_on
* r-reportingtools: re-add r-r-utils dep
* r-rhdf5: add gmake dep
* r-rhtslib: add system dependencies
* r-rsamtools: add gmake dep
* r-farver: remove redundant dep
* r-tweenr: remove redundant dep
* r-variantannotation: add gmake dep
* r-rgraphviz: add graphviz dep
* r-vsn: correct r-hexbin constraint
* r-scater: fix obsolete deps
* r-variantannotation: fix gmake dep type
* r-scater: tighten R version constraints
* r-rsamtools: fix gmake dep type
* r-rhtslib: fix gmake dep type
* r-rhtslib: use xz over lzma
* r-rhdf5: fix gmake dep type
* r-farver: replace with newer recipe for 2.0.1
* r-mzr: remove old dependency
* r-reportingtools: remove builtin dependency
* r-mzr: add gmake dep
* r-rhtslib: make system libraries link deps
* r-genomeinfodbdata: fix R version constraints
* r-geoquery: remove old deps from new versions
* r-genomicfeatures: tighten r-rmysql dep
* r-ensembldb: tighten r-annotationhub dep
* r-complexheatmap: fix r-dendextend dep
* r-cner: fix utils dep name
* r-clusterprofiler: fix r-gosemsim version req
* r-biostrings: fix r-iranges version reqs
* r-rhdf5lib: add gmake dep
* r-oligoclasses: fix r-biocinstaller dep range
* r-organismdbi: fix r-biocinstaller dep range
* r-hdf5array: add gmake dep
* r-gtrellis: tighten r-circlize version req
* r-gostats: fix r-graph version req
* r-glimma: fix old dependency ranges
* r-biostrings: syntax fix
* r-organismdbi: syntax fix
* r-dose: fix r-igraph dep
* r-dose: fix r-scales, r-rvcheck deps
* r-affy: fix r-biocinstaller dep
* r-ampliqueso: fix homepage
* r-aneufinder: fix r-biocgenerics dep
* r-beachmat: fix changed deps
* r-biocneighbors: fix old R constraint
* r-biocmanager: rewrite recipe for 1.30.10
* Update var/spack/repos/builtin/packages/r-biocinstaller/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/r-oligoclasses/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update cartopy version and fix recipe
Cartopy 0.17.0 works fine with proj 6
* Update cartopy version and fix recipe
Cartopy 0.17.0 works fine with proj 6
* Set ACCEPT_USE_OF_DEPRECATED_PROJ_API_H flag when building extension
* Add variants to py-cartopy recipe as suggested
* Fix proj dependency
* Split dependency
* Fix PEP-8; remove extra dependency
* Bump up QE version number to 6.4.1.
* Fix QMCPACK conflicts.
* HDF5 dependencies where over specified which could cause unnecessary installs of HDF5.
* Update QMCPACK testing option.
* Remove support for serial QE 6.4.1 converter. Add support for parallel QE 6.4.1. converter with serial HDF5.
* Switch to setup_run_environment.
* Fix setup_run_environment call arguements.
* Fix typo.
* switch run_env to env
* package py-cheroot
* package py-cheroot
* autopep8, docutils cleanup
* Update var/spack/repos/builtin/packages/py-cheroot/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* missing deps
* flake8
* license bits
* Update var/spack/repos/builtin/packages/py-cheroot/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-cheroot/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-cheroot/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-cheroot/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* python dep
* flake8
Note to spack people: these are expected to be end of line releases for both the 3.1.5 an 3.0.5 releases
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* metabat: add versions 2.14 and 2.13
* update build environment
* Update var/spack/repos/builtin/packages/metabat/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/metabat/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* update py-nbconvert
* add setuptools dependency, like all the other jupyter packages
it seems to be using setuptools for some commands all the time
but requires it for the newest version
* added dependencies, not necessarily only needed for the latest one
* depends on new packages (defusedxml, pandocfilters, testpath)
* should also be moved to pypi sources?
* '@5:@5:' is a valid spec -> intended?
* make dependencies optional
* Update dependencies and add description
* relax py-mistune dependency restriction
* Update var/spack/repos/builtin/packages/py-nbconvert/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-arrow
* actually use dependencies from 0.14.7 not from current HEAD
* drop dependencies that dont appear in the source
* readd sphinx as doc dependency
* update dependencies
* drop doc-only dependencies
* add new package : busybox@1.31.1
* 1. add some other version for busybox
2. change Busybox class to MakefilePackage
3. move make('defconfig') and make() to build() function
4. change install_tree('', prefix) to install_tree('.', prefix)
Extensions have been available for a while and the overall design
seems solid enough to be feasible for extensions without losing
backward compatibility.
* add variant for enabling testing
* enable tests and clean up other options
* add numbered versions
* updates to avoid enable_tests variant; correct versioning
* fixes for style
* appropriate partitioners are enabled if 'all' is specified - so no need to check in spec
* define url so spack knows how to fetch the tar.gz files for different versions
* Add SOLLVE package with Shintaro's help on rebasing.
Co-authored-by: Vivek Kale <vivek.lkale@gmail.com>
* sollve: reflect suggestions by @adamjstewart
* sollve: update target detection
Copied from llvm/package.py.
* sollve: fix a few things
- url -> git
- remove git in version()
- explicit cmake options in else clauses
- add newlines for better readability
* Added new package libmmtf-cpp required by py-pymol
* Added SPDX-License-Identifier to MIT
* Updated py-pymol to version 2.3.0
* py-pymol: Added mising py-pmw dependency
* py-pymol: flake8 minor change
* py-pymol: Added patch for apbstools_tcltk8.6
This patch is borrowed from archlinux
https://bugs.archlinux.org/task/39526
* libmmtf-cpp: flake8 compliance
* libmmtf-cpp: flake8 compliance
* libmmtf-cpp: change license to (Apache-2.0 OR MIT) when refering to the MIT license
* libmmtf-cpp: Added header text about license as in examples
* py-pymol: removed unnecesary dependency mesa-glu
* py-pymol: removed unnecesary patch
* py-pymol: Removed empty line at the end of the file to comply with flake8.
* Add the py-coloredlogs package
* Remove extraneous line.
* Remove dashed line.
* Add version for humanfriendly dep and build to type.
* Change source url to use pypi.
* Improved library access for lm-sensors and implemented use in papi.
* Fixed comment formatting
* Removed explicit "None" from return of libs().
* Added two new software release versions.
* fix runtime error involving py-pycairo and PDF
* Update var/spack/repos/builtin/packages/py-python-mapnik/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* fix env setup
* Add the py-importlib_re package
* Rename package to conform with spack naming convention.
* Rename package to py-importlib-resources
Add python depend modules for previous python versions and depends_on python.
* Add whitespaces.
* use bazel commit in #13112, and add version 0.24.1, and corresponding cc_env patch
* undo preferred java version by dodo47
* patch for v0.26
* Update install steps
* Add patches for more versions
* Add unit tests
* Update patches for new Spack env vars
* env is already defined, use spackEnv
This makes several installs from the same download cache impossible once
the hash of the used perl-install changes.
Fixes: #13824
Change-Id: I5f10d9d54ae999d0ca7e4171f989dfca2e6a7169
* Add py-wub, with supporting fixes
- add py-wub
- add py-pycmd because py-wub needs it
- update py-statsmodels, which needs at least v0.9.0 to work with
python3.7 because cython.
* Update based on Adam's comments
* Fix dependency types for py-six in py-wub
* statsmodels tests fail, update comment w/ Issue #
The statsmodels tests weren't run in the previous version of the
package. If I enable them, the fail.
Update the package comment with the statsmodels issue I opened to
track the problem:
https://github.com/statsmodels/statsmodels/issues/6263
* Update dependency types in py-wub/package.py
* flake8 cleanups
* Make statsmodels tests work
- need to use patsy@0.5.1:
- need to run the tests from within the build/lib* dir
* Add mg, a gnu-emacs like fork of microemacs
* Use Package, since not really an Autotools package
Switch from AutotoolsPackage to Package. Even though mg has a
configure script, it's not really an Autotools package.
* Need to also provide --prefix to configure
* Some packages (e.g. mpfr at the time of this patch) can have patches
with the same name but different contents (which apply to different
versions of the package). This appends part of the patch hash to the
cache file name to avoid conflicts.
* Some exceptions which occur during fetching are not a subclass of
SpackError and therefore do not have a 'message' attribute. This
updates the logic for mirroring a single spec (add_single_spec)
to produce an appropriate error message in that case (where before
it failed with an AttributeError)
* In various circumstances, a mirror can contain the universal storage
path but not a cosmetic symlink; in this case it would not generate
a symlink. Now "spack mirror create" will create a symlink for any
package that doesn't have one.
* Add process to determine aarch64 microarchitecture
* add microarchitectures for thunderx2 and a64fx
* Add optimize flags for gcc on aarch64 family processors thunderx2 and a64fx.
* Add optimize flags for clang on aarch64 family processors thunderx2 and a64fx
* Add testing for thunderx2 and a64fx microarchitectures
* Make relative binaries relocate text files properly
* rb strings aren't valid in python 2
* move perl to new interface for setup_environment family methods
* fix metis src dl url
* update ascent, vtk-h and vtk-m recipes
* update conduit package
* fix vtk-m shas
* mfem conduit fix
* use vtk-h develop
* fix issue with stripped include paths in mfem
* more metis fixes
* simpler fix for mfem conduit include issue
* finish mfem changes
* pin to cmake 3.14, since we hit cuda issues with 3.15
* add rtd theme as dep for ascent
* add vtk-h 0.5.0 release, update ascent to use it
* add ascent 0.5.0 release
* fix cmake pin to allow all vers of 3.14
* fix format string error in mfem pkg
* review fixes for mfem pkg
* review fixes for vtk-h and vtk-m packages
* address review comments for ascent pkg
* changing default off of develop broke downstream use
* revert prefed
* guile package: Handling the threads option.
Currently guile by default tries to compile its thread variant.
However, the threaded version can only be compiled if bdw-gc is
compiled with some threads support. Currently, the default
compilation of the bdw garbage collector is compiled without any
thread support resulting in a compilation error.
I have changed the the default guile compilation to the non-threaded
version. I have also added the appropiated options for the bdw-gc
compilation in case the user prefers the threaded variant.
* guile package(flake8): fixed identation issues
* remove reference to `spack.store` in method definition
Referencing `spack.store` in method definition will cache the `spack.config.config` singleton variable too early, before we have a chance to add command line and environment scopes.
* remove reference to `spack.store` in method definition
Referencing `spack.store` in method definition will cache the `spack.config.config` singleton variable too early, before we have a chance to add command line and environment scopes.
Add a configuration option to suppress gpg warnings during binary
package verification. This only suppresses warnings: a gpg failure
will still fail the install. This allows users who have already
explicitly trusted the gpg key they are using to avoid seeing
repeated warnings that it is self-signed.
Add a configuration option to suppress gpg warnings during binary
package verification. This only suppresses warnings: a gpg failure
will still fail the install. This allows users who have already
explicitly trusted the gpg key they are using to avoid seeing
repeated warnings that it is self-signed.
* z3:
* Fixed python dependency to always be required.
* bugfix about fallthrough annotation.
* z3: Add patch for before ver.4.4.1.
* Update var/spack/repos/builtin/packages/z3/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Mark compiler/version conflict for CMake
Intel 14 lacks some C++11 features needed to compile new versions of
cmake.
```
/tmp/s3j/spack-stage/spack-stage-cmake-3.15.5-46lgp4ybhopy2p4rr66rxnew5iaddvmg/spack-src/Source/
cm_static_string_view.hxx(28): error: expected an operator
friend static_string_view operator"" _s(const char* data, size_t
^
```
* Mark compiler/version conflict for icu4c
With Intel 14.0.4 on Linux for icu4c 60.1 and higher:
```
locid.cpp(1156): error #1140: a using-declaration may not name a constructor or destructor
using KeywordEnumeration::KeywordEnumeration;
```
* Mark compiler/version conflict for nasm
Error installing `nasm@2.14.02%intel@14.0.4`:
```
In file included from nasmlib/crc64.c(35):
./include/nasmlib.h(116): error: expected a ";"
fatal_func nasm_assert_failed(const char *, int, const char *);
```
* Mark compiler/version conflict for bison
Installing `bison@3.4.2%intel@14.0.4`:
```
In file included from /tmp/s3j/spack-stage/spack-stage-bison-3.4.2-
uzjszv4owvqsymjpxtxvvegfavc6k5my/spack-src/lib/quotearg.c(33):
/tmp/s3j/spack-stage/spack-stage-bison-3.4.2-uzjszv4owvqsymjpxtxvvegfavc6k5my/spack-src/lib/
xalloc.h(51): warning #303: explicit type is missing ("int" assumed)
extern _Noreturn void xalloc_die (void);
```
* Mark compiler/version conflict for icu4c
With `icu4c@60.1%intel@16.0.4` and `icu4c@64.1%intel@16.0.4`:
```
In file included from ucurr.cpp(26):
static_unicode_sets.h(130): error #913: invalid multibyte character sequence
{POUND_SIGN, u'£'},
^
```
* Change conflict comments into messages
* py-matplotlib only needs backports when ^python@:2
This implements @scheibelp's suggestion in #13711.
py-matplotlib should only depends_on py-backports-functools-lru-cache
when it's using a python that actually *needs* it.
See #13711 for details.
* Don't depend_ons py-enum34 unless python@:3.3
* Tighten up enum34 dependency
@adamjstewart cracked open the setup.py files and suggested a tighter
dependency for py-enum34. 1.4 and 1.5 only require it for pythons
before 3.4, 1.3 requires it unconditionally. So...., we'll do the
same.
* Remove conflict on python 3.4 from enum34
at @adamjstewart's request see PR notes
* qscintilla_with_python_bindings_disabled
* pyqt5 with variant +qsci to compile qscintilla python bindings
* fix a dyn linking issue for Qsci python module
* fix a bug
* fix bug: use sip provided by py-pyqt5
* fix typo
* tidy up, make designer
* tidy up
* fix designer build issue, set env for designer plugin
* tidy up
* tidy up
* minor improvements
* improve style
* build Qscintilla python bindings here
* make qsci config option variant dependent
* get rid of commented out code
* improvements: add resource for qscintilla, improve config_args
* flake8: spaces, blank lines etc
* flake8: fix long lines
* Update var/spack/repos/builtin/packages/py-pyqt4/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt4/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt4/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qscintilla/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* dont install source tree under prefix
* remove duplicate line
* use os.path.join instead of +
* separate build and run environment setups
* flake8
* Update var/spack/repos/builtin/packages/py-pyqt5/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt5/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt5/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* fix rsrc path
* use python_include_dir
* use "with working_dir"
* Update var/spack/repos/builtin/packages/py-pyqt4/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt5/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt5/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt5/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/qscintilla/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt4/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyqt4/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* flake8
* package/geopm: Added versions 1.0.0 and 1.1.0
Added changes for 1.1.0 and 1.0.0 in this patch.
Patch for 1.0.0 was previously not merged.
variant for hwloc removed since that is not a dependency since 0.5.1 and
variant('hwloc', when=:0.5.1 is not supported afaik.
made depends_on versions more explicit.
* package/geopm: removed 1.0.0 release candidates 1 and 2.
* Adding final bug-releases for the gromacs-2016 and -2018
* Added newer versions of plumed and libmatheval not a dependency >v2.5
* plumed package: chamge name git branch to master
Libbsd assumes GCC-defined compiler macros:
```
In file included from nlist.c(44):
local-elf.h(238): catastrophic error: #error directive: Unknown ELF machine type
#error Unknown ELF machine type
^
```
The `__amd64__` and `__x86_64__` macros should be equivalent, but the
latter is defined by intel.
when making a package relative, relocate links relative to link directory
rather than the full link path (which includes the file name) because `os.path.relpath` expects a directory.
Binaries with relative RPATHS currently do not relocate strings
hard-coded in binaries
This PR extends the best-effort relocation of strings hard-coded
in binaries to those whose RPATHs have been relativized.
Binaries with relative RPATHS currently do not relocate strings
hard-coded in binaries
This PR extends the best-effort relocation of strings hard-coded
in binaries to those whose RPATHs have been relativized.
* Docs update for deprecated `spack sha256`
* Added macOS shasum
* Update lib/spack/docs/packaging_guide.rst
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update MVAPICH2 package for 2.3.2 release
Update default build from psm to mrail
* Update different provides for older versions based on feedback from Todd Gamblin
* Simplify rule so one rule covers 2.1 and 2.2
* Add support for disabling rpath based on feedback from Dr. Shende
* Add colon based on comment
* Address review comment by Adam Stewart
* Add declaration of the wrapperpath variant.
- Thanks to Massimiliano Culpo for the comment
* superlu_dist: fix build with xl compilers
* fix link error ../SRC/libsuperlu_dist.so.6.1.1: undefined reference to `ztrtri_'
* Fixed the ordering of the spec on the xl-611 patch.
* fix flake8 error
- [x] Use higher contrast terminal output font
- [x] Use higher contrast code block background color than default
- [x] Use a noticeable prompt character
See also https://github.com/spack/spack-tutorial/pull/10.
* Update nlopt package to add Python bindings to PYTHONPATH
* Use extends for nlopt/python fix
* nlopt - change develop to master and add python dep qualifiers
* athena: updated api call to setup build environment
* mvapich2: updated api call to setup build and run environment
* spectrum-mpi: updated api call to setup build and run environment
- depends on spfft starting from 6.4.0
- add magma variant
- avoid setting cuda_arch to none
- add python dependencies
- use release as default build type
* Adding in HMPT package for HPE MPI libraries
* Updating copyright dates
* Renaming HPE MPI package
* Fixing error in package file
* Tidying up defintions and linting
* liniting
* Adding in library setup so packages that want to manually add mpi libraries can do so (i.e. npb)
* Linting
* Linting
* Investigating old API errors
* Investigating api errors
* Investigating api errors
* Investigating api errors
* Investigating api errors
* Investigating api errors: adding back in functions to see when the build fails
* Investigating api errors: adding back in functions to see when the build fails
* Investigating api errors: adding back in functions to see when the build fails
* Investigating api errors: adding back in functions to see when the build fails
* Investigating api errors: adding back in functions to see when the build fails
* Linting
* Linting
* Fixing
* Fixing
* Add new strumpack version (3.2.0), with new
dependency on ButterflyPACK.
* add ButterflyPACK version 1.1.0
* Add strumpack version 3.3.0, add dependency on ButterflyPACK 1.1.0
* Sort ButterflyPACk versions from newest to oldest
* Add a shared variant for STRUMPACK
* Also allow possible newer versions of ButterflyPACK
* New Package: py-pyside2
https://wiki.qt.io/Qt_for_Python
The Qt for Python project aims to provide a complete port of the PySide module
to Qt 5. The development started on GitHub in May 2015. The project managed to
port Pyside to Qt 5.3, 5.4 & 5.5. During April 2016 The Qt Company decided to
properly support the por
* Address review comments:
+ Add a variant for `+doc` and only depned on some packages if this variant is
active.
+ Enable building the tests if requested.
+ Correct registered required verions for python and qt.
* Remove dead code, fix depends_on command args.
* fix one more flake8 issue.
* One more fix to arguments and change version name to .
* Add a variant for tests and parallelize builds
- Fix a bug with boost defaulting to cxxstd=98 when cxxstd=11 is the
minimum for hpx
- Disable tests by default and use a variant to enable them if requested
- Enable parallel builds: each process takes up to 1Gb when tests are
not enabled.
* Remove tests variant
- The variant doesn't change what gets installed. Testing can be
activated using `spack install --test=root hpx`
* Propagate MPI option in VTK to NetCDF
This fixes a conflict message reads as though NetCDF always requires
hdf5+mpi. It allows `visit~mpi` to resolve correctly without an
additional `^netcdf~mpi`.
* Tell VisIt not to look for 'hdf5_mpi' library
- patched versions are located in the same directory as the
original release. For example, 1906_191103 is located in
the 'v1906/' directory, not in 'v1906_191103/'.
- add master branch as a known version
* package py-zc-buildout
* package py-zc-buildout
* Update var/spack/repos/builtin/packages/py-zc-buildout/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-zc-buildout/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* license bits
* Update var/spack/repos/builtin/packages/py-zc-buildout/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* package py-cherrypy
* package py-cherrypy
* autopep8, docutils cleanup
* Update var/spack/repos/builtin/packages/py-cherrypy/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-cherrypy/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-cherrypy/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* portend depends on tempora, not cherrypy directly
* setuptools_scm bits
* dependency one level up
* license bits
* Update var/spack/repos/builtin/packages/py-cherrypy/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* package py-tempora
* package py-tempora
* Update var/spack/repos/builtin/packages/py-tempora/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* missing deps
* Update var/spack/repos/builtin/packages/py-tempora/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tempora/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tempora/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* license bits
`mirror_archive_path` was failing to account for the case where the fetched version isn't known to Spack.
- [x] don't require the fetched version to be in `Package.versions`
- [x] add regression test for mirror paths when package does not have a version
* (py)arrow: new versions
* move py-arrow source to github as not all versions are on pypi
same checksum as pypi, adding build_directory
* move back to pypi sources
* drop 0.15.0 and 0.14.1 as only .whl are available on pypi
* add new dependencies
* Update var/spack/repos/builtin/packages/py-pyarrow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* py-jupyter-core add new version and setuptools dependency
* update checksums to use pypi
* fixup url and python dep type
* Update var/spack/repos/builtin/packages/py-jupyter-core/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-dateparser
* take setup.py dependencies into account
* fixup help->description
* reorder dependencies
* drop version constraints not enforced in setup.py
netcdf-fortran@4.5: will error if netcdf-c has been built with MPI
support:
```
configure: error:
-----------------------------------------------------------------------
The NetCDF C library is built with parallel I/O feature enabled, but
the Fortran compiler '.../lib/spack/env/gcc/gfortran' supplied in this configure command
does not support MPI-IO. Please use one that does. If parallel I/O
feature is not desired, please use a NetCDF C library with parallel
I/O feature disabled. Abort.
-----------------------------------------------------------------------
```
Copy logic from netcdf-c to add an `mpi` variant.
* pybind11: test get_include path
Helper for non-CMake downstream projects to find the pybind11
header location.
* pybind11: return proper get_include()
use our prefix instead of letting pybind11 trying to self-determine
it from given conda/virtualenv/global rules.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-ics
* drop 0.5 and move to tar-ball
* Update var/spack/repos/builtin/packages/py-ics/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ics/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* mpi4py 3.0.3
Adds support for Python 3.8 with new Cython files.
I tried to build 3.0.0 with Python 3.8 and this builds as well,
therefor I added no conflict.
* mpi4py: update dependency version ranges
* update py-jupyter-notebook
* add setuptools dependency for newer version
the whole jupyter collection seems to use setuptools in case of
certain setup.py-arguments from the very beginning. However the latest
ones actually require it, otherwise the build will fail
* add newly introduced dependencies
* dependency constraints
* drop terminal variant and update python dep
* added relion 3.0.8 [latest stable version in 3.0 branch] as well as 3.1_beta
* relion 3.1 beta compiles without Benchmarking build type see https://github.com/3dem/relion/issues/533
* relion 3.0.X - 3.1 now supports latest cmake@3
This fixes a regression introduced in #10792. `spack uninstall` in an
environment would not match concrete query specs properly after the index
hash of enviroments changed.
- [x] Search by DAG hash for specs to remove instead of by build hash
If you do this in a spack environment:
spack add hdf5+hl
hdf5+hl will be the root added to the `spack.yaml` file, and you should
really expect `hdf5+hl` to display as a root in the environment.
- [x] Add decoration to roots so that you can see the details about what
is required to build.
- [x] Add a test.
If you do this in a spack environment:
spack add hdf5+hl
hdf5+hl will be the root added to the `spack.yaml` file, and you should
really expect `hdf5+hl` to display as a root in the environment.
- [x] Add decoration to roots so that you can see the details about what
is required to build.
- [x] Add a test.
This fixes a regression introduced in #10792. `spack uninstall` in an
environment would not match concrete query specs properly after the index
hash of enviroments changed.
- [x] Search by DAG hash for specs to remove instead of by build hash
* Make relative binaries relocate text files properly
* rb strings aren't valid in python 2
* move perl to new interface for setup_environment family methods
* flux: add `url_for_version` to support their C4 repo model
Flux uses a fork of ZeroMQ's Collective Code Construction Contract
(https://github.com/flux-framework/rfc/blob/master/spec_1.adoc).
This model requires a repository fork for every stable release that has
patch releases. For example, 0.8.0 and 0.9.0 are both tags within the
main repository, but 0.8.1 and 0.9.5 would be releases on the v0.8 and
v0.9 forks, respectively.
* flux: add latest versions
* flux: remove master from `when=@0.X:,master` statements
Now that #1983 has been merged, master > 0.X.0.
* flux-core: remove extraneous `99` patch version in `when` range
Replace `when=@:0.11.99` with `when=@:0.11` since the intention is to
include all patch versions of `0.11`.
* flux-core: fix `setup_build_environment` after changes in #13411
In #13411, `setup_environment` was split into `setup_build_environment`
and `setup_run_environment`, with the `spack_env` and `run_env`
arguments being changed to `env`. Somehow the flux package was the only
one to not have its `spack_env` references in the function changed to
`env`.
* flux: add runtime environment variables that Flux checks
with older versions of Flux (i.e, 0.0:0.13), FLUX_CONNECTOR_PATH must be
set by spack to prevent failures in certain
scenarios (https://github.com/flux-framework/flux-core/issues/2456).
the flux binary also sets some other environment variables, which can be
listed by running `flux -v start`. I added a few of those just to be
sure that the Spack-installed paths are used, rather than
system-installed ones.
* flux: add optional testing dependencies to maximize test coverage
Install optional dependencies to ensure that only spack-installed
software is detected and that all tests are run when `spack install
--test` is used.
Flux's test suite will test for the existance of valgrind, jq, and any
MPI installation. If it detects them (even if they are system-installed
and outside the spack environment), it will run optional tests against
them. I noticed on my machine that the valgrind tests were running
against the system-install valgrind.
* flux-sched: switch to new `setup_run_environment` API
- [x] insert at beginning of list so fetch grabs local mirrors before remote resources
- [x] update the S3FetchStrategy so that it throws a SpackError if the fetch fails.
Before, it was throwing URLError, which was not being caught in stage.py.
- [x] move error handling out of S3FetchStrategy and into web_util.read_from_url()
- [x] pass string instead of URLError to SpackWebError
- [x] insert at beginning of list so fetch grabs local mirrors before remote resources
- [x] update the S3FetchStrategy so that it throws a SpackError if the fetch fails.
Before, it was throwing URLError, which was not being caught in stage.py.
- [x] move error handling out of S3FetchStrategy and into web_util.read_from_url()
- [x] pass string instead of URLError to SpackWebError
* r-gstat: new package at 2.0-3
* Update var/spack/repos/builtin/packages/r-gstat/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/r-gstat/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
This changes Spack environments so that the YAML file associated with the environment is *only* written when necessary (i.e., if it is changed *by spack*). The lockfile is still written out as before.
There is a larger question here of which part of Spack should be responsible for setting defaults in config files, and how we can get rid of empty lists and data structures currently cluttering files like `compilers.yaml`. But that probably requires a rework of the default-setting validator in `spack.config`, as well as the code that uses `spack.config`. This will at least help for `spack.yaml`.
This changes Spack environments so that the YAML file associated with the environment is *only* written when necessary (i.e., if it is changed *by spack*). The lockfile is still written out as before.
There is a larger question here of which part of Spack should be responsible for setting defaults in config files, and how we can get rid of empty lists and data structures currently cluttering files like `compilers.yaml`. But that probably requires a rework of the default-setting validator in `spack.config`, as well as the code that uses `spack.config`. This will at least help for `spack.yaml`.
* Improvements of saga-gis package
* Added explicit version ranges for old saga-gis version
* Update var/spack/repos/builtin/packages/saga-gis/package.py
Creative usage of redefinition of getter method
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/saga-gis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/saga-gis/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
Commands like "spack mirror list" were displaying mirrors in a
different order than what was listed in the corresponding mirrors.yaml
file.
This restores commands to iterate over mirrors in the order that
they appear in the config file.
* Travis CI: Test Python 3.8
* Fix use of deprecated cgi.escape method
* Fix version comparison
* Fix flake8 F811 change in Python 3.8
* Make flake8 happy
* Use Python 3.8 for all test categories
Currently, query arguments in the Spack core are documented on the
Database._query method, where the functionality is defined.
For users of the spack python command, this makes the python builtin
method help less than ideally useful, as help(spack.store.db.query)
and help(spack.store.db.query_local) do not show relevant information.
This PR updates the doc attributes for the Database.query and
Database.query_local arguments to mirror everything after the first
line of the Database._query docstring.
* cuda: fix conflict statements for x86-64 targets
fixes#13462
This build system mixin was not updated after the support for specific
targets has been merged.
* Updated the version range of cuda that conflicts with gcc@8:
* Updated the version range of cuda that conflicts with gcc@8: for ppc64le
* Relaxed conflicts for version > 10.1
* Updated versions in conflicts
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
4af4487 added a mirror_id function to most FetchStrategy
implementations that is used to calculate resource locations in
mirrors. It left out BundleFetchStrategy which broke all packages
making use of BundlePackage (e.g. xsdk). This adds a noop
implementation of mirror_id to BundleFetchStrategy so that the
download/installation of BundlePackages can proceed as normal.
* Travis CI: Test Python 3.8
* Fix use of deprecated cgi.escape method
* Fix version comparison
* Fix flake8 F811 change in Python 3.8
* Make flake8 happy
* Use Python 3.8 for all test categories
* Adding flecsph package
* Correcting header
* Boost version update
* Correcting Flake8 errors
* Correcting headers
* Develop preferred in FleCSI
* Removing FleCSPH branch of FleCSI
* MeshToolKit package
* formatting
* Formatting
* Correcting MSTK package
* Format
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Format
* Format
* Correcting package behavior
* Correcting format
* Corrections
* Update var/spack/repos/builtin/packages/mstk/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Multiline
Currently, query arguments in the Spack core are documented on the
Database._query method, where the functionality is defined.
For users of the spack python command, this makes the python builtin
method help less than ideally useful, as help(spack.store.db.query)
and help(spack.store.db.query_local) do not show relevant information.
This PR updates the doc attributes for the Database.query and
Database.query_local arguments to mirror everything after the first
line of the Database._query docstring.
* cuda: fix conflict statements for x86-64 targets
fixes#13462
This build system mixin was not updated after the support for specific
targets has been merged.
* Updated the version range of cuda that conflicts with gcc@8:
* Updated the version range of cuda that conflicts with gcc@8: for ppc64le
* Relaxed conflicts for version > 10.1
* Updated versions in conflicts
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
* r-dada2: new package at 1.14
* r-dada2: add gmake dependency
* Update var/spack/repos/builtin/packages/r-dada2/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-scs
* rename
* flake8
* Update var/spack/repos/builtin/packages/py-scs/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
The `test_changed_files` in `test/cmd/flake8.py` was failing because it calls
`ArgumentParser.parse_args()` without arguments. Normally that would just
parse `sys.argv` but it seems to fail because of something in either `spack test`
or `pytest`. Call it with an empty array so that it doesn't try to touch`sys.argv`
at all.
- [x] allow `-d` spack option for `test_changed_files`
* Update the module files for cbtf-krell and openspeedshop adding man paths and needed papi and libmonitor paths.
* Update module files for new API.
* Use the same python for both develop and release branches.
* add support for static (via ~shared) and use vtk-m 1.2
* updating vtkh package to output cmake configure file and pinning it to vtkm 1.2
* trying a different cmake for vtkh
* removing problematic b
* making conduit respect ~python
* fixing ascent python logic
* update ascent package
* consistant cmake usage
* conditionally add tbb in ascent if vtkh
* applying becker fix
* adding vtkh tag
* fixing vtkh tagged version
* updating ascent and conduit for static builds
* enabling openmp
* reverting files that should not have been changed
* ascent updates
* more robust handling of variants
* fixing ascent package typo
* ascent: add optional support for mfem
* enable mfem conduit support for ascent
* add optional adios dep to conduit
* remove ver req from conduit
* ascent: remove confusing comment
* tweaks to conduit and ascent pkg recipes
* fix typo in conduit package
* pref conduit master
* fixing mfem typo for ascent
* reverting files
* adding mirror for bzip
* changing mfem to depend on conduit@master to get updated relay
* restore use of conduit 0.3.1 or greater for mfem
* set master as prefered conduit version
* allow mfem to use conduit master
* adding rover package and editing ascent
* updating vtkm cmake dep
* updates from axom
* guard ascent python support on +shared
* removing rover to simply ascent package
* add fortran variant to conduit, to allow us to turn off conduit support even when a fortran compiler is specified
* fix fortran compiler check so it can work on cray systems
* working towards cuda fix for vtkm lagrange filter
* update ascent package with more variants, and patch to avoid nvcc issue
* hdf5 flags fix for BGQ
* flake8
* extra guards for cuda patch
* conduit and ascent fortran fix
* fix patch for non cuda case
* add test variant to conduit, tweak ascent pkg
* change min ver of cmake used for ascent
* h5z-zfp package: unset FC when ~fortran
* conform to expected upstream solution
* pinning vtkm
* going back to vtkm master
* add back vtk-m variant for shared libs
* update ascent and vtkh packages
* wire up option to run tests during install
* add post install test
* add testing to ascent
* tweak for blueos xl
* add ctest output on error for run_tests
* enable ctest output on error for run_tests
* add testing of the using-with-make example
* update using-with-make examples
* typo in ascent using-with-cmake test
* fix ascent using test exe names
* more fixes, less sleep
* more fixes, less sleep ...
* remove unwired up version
* improvments suggested on review
* adding new cmake
* Update package.py
* Update package.py
* changes post cori os update
* fix cray hack
* Update package.py
Fixing 'fix'. Inconsistent variable names in conduit package
* type in spack recipes
* add zfp support to conduit
* fix indent error in conduit pkg
* move to use build phases, add sphinx rtd as dep, fix ex names in tests
* add conduit 0.5.0 release
* flake8
* remove old cray hack
* incorp feedback from review
* fix to use proper build env sig
* Fix gcc recipe for RHEL7.
+ It appears that macOS related changes to the gcc recipe broke gcc on RHEL7.
This bug manifests as `libstdc++.so: undefined reference to libiconv` when gcc
is used.
+ Fixes#13452 by moving
`--with-libiconv-prefix={0}'.format(spec['libiconv'].prefix)` into the darwin
OS section of the configuration.
+ Change qualification of `depends_on(libiconv)` to limit dependency to macOS.
* Replace deprecated 'setup_environment' with 'setup_run_environment'.
* Fix cut and paste error.
* Rename 'run_env' to just 'env'.
* Add versinos 1.13.3, 1.13.2, 1.12.12, and 1.12.11
* Replace setup_environment/setup_dependent_environment with
setup_build_environment and setup_dependent_{build, run}_environment
according to 9ddc98e
* Add dependency and patch perl-dbfile
There are two problems for the building of `perl-dbfile`:
1) this package depends on the package `berkeley-db`
2) fix the building using a patch, which locates the position of `berkeley-db` and modify the configuration file for the building
* Update and reformat the script package.py
* Simplify the patch
* Update package.py
* Update package.py
* This filter_file was difficult to maintain and is no longer needed.
* Clarify lack of support for HDF5 in serial QE.
* Update QE and HDF5 conflicts based on user feedback.
* Add Lizard (LZ5)
Add a new package for Lizard, formerly LZ5, a very fast compressor
and decompressor library.
* c-blosc2: use external lizard
Use an external Lizard (LZ5) dependency and add missing
"when="s for other compressor dependents.
* Fixes for libbeagle
This PR fixes a couple of issues with the libbeagle package.
- Use args.append('--with-cuda=%s' % self.spec['cuda'].prefix)
- Disable the default of compiling with -march=native as Spack now
inserts architecture specific flags
- Set BEAST_LIB in the beast1 package not in libbeagle.
* Use new setup_run_environment method
about: Report a bug in the core of Spack (command not working as expected, etc.)
labels: bug
name: "\U0001F41E Bug report"
about: Report a bug in the core of Spack (command not working as expected, etc.)
labels: "bug,triage"
---
*Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran Spack find to list all the installed packages and..."*
<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran `spack find` to list all the installed packages and ..." -->
### Steps to reproduce the issue
@@ -20,30 +17,26 @@ $ spack <command2> <spec>
### Error Message
If Spack reported an error, provide the error message. If it did not report an error
but the output appears incorrect, provide the incorrect output. If there was no error
message and no output but the result is incorrect, describe how it does not match
what you expect. To provide more information you might re-run the commands with
the additional -d/--stacktrace flags:
<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->
```console
$ spack -d --stacktrace <command1> <spec>
$ spack -d --stacktrace <command2> <spec>
...
$ spack --debug --stacktrace <command>
```
that activate the full debug output.
### Information on your system
This includes:
<!-- Please include the output of `spack debug report` -->
1. which platform you are using
2. any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.)
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
-----
### Additional information
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have searched the issues of this repo and believe this is not a duplicate
- [ ] I have run the failing commands in debug mode and reported the output
<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
Other than that, thanks for taking the time to contribute to Spack! -->
about: Some package in Spack didn't build correctly
name: "\U0001F4A5 Build error"
about: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: "build-error"
---
*Thanks for taking the time to report this build failure. To proceed with the
report please:*
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
3. Remove the template instructions before posting the issue.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
---
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install <spec># Fill in the exact spec you are using
... # and the relevant part of the error message
$ spack install <spec>
...
```
### Platform and user environment
### Information on your system
Please report your OS here:
```commandline
$ uname -a
Linux nuvolari 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -d
Description: Ubuntu 18.04.1 LTS
```
and, if relevant, post or attach:
<!-- Please include the output of `spack debug report` -->
- `packages.yaml`
- `compilers.yaml`
to the issue
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
Sometimes the issue benefits from additional details. In these cases there are
a few things we can suggest doing. First of all, you can post the full output of:
```console
$ spack spec --install-status <spec>
...
```
to show people whether Spack installed a faulty software or if it was not able to
build it at all.
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-out.txt]()
* [spack-build-env.txt]()
If your build didn't make it past the configure stage, Spack as also commands to parse
You might want to run this command on the `config.log` or any other similar file
found in the stage directory:
```console
$ spack location -s <spec>
```
In case in `config.log` there are other settings that you think might be the cause
of the build failure, you can consider attaching the file to this issue.
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
Rebuilding the package with the following options:
```console
$ spack -d install -j 1 <spec>
...
```
will provide additional debug information. After the failure you will find two files in the current directory:
### General information
1.`spack-cc-<spec>.in`, which contains details on the command given in input
to Spack's compiler wrapper
1.`spack-cc-<spec>.out`, which contains the command used to compile / link the
failed object after Spack's compiler wrapper did its processing
You can post or attach those files to provide maintainers with more information on what
is causing the failure.
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [ ] I have uploaded the build log and environment files
- [ ] I have searched the issues of this repo and believe this is not a duplicate
about: Suggest adding a feature that is not yet in Spack
labels: feature
---
*Please add a concise summary of your suggestion here.*
<!--*Please add a concise summary of your suggestion here.*-->
### Rationale
*Is your feature request related to a problem? Please describe it!*
<!--*Is your feature request related to a problem? Please describe it!*-->
### Description
*Describe the solution you'd like and the alternatives you have considered.*
<!--*Describe the solution you'd like and the alternatives you have considered.*-->
### Additional information
*Add any other context about the feature request here.*
<!--*Add any other context about the feature request here.*-->
-----
### General information
- [ ] I have run `spack --version` and reported the version of Spack
- [ ] I have searched the issues of this repo and believe this is not a duplicate
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
<!--If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
.. Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -10,29 +10,25 @@ Modules
=======
The use of module systems to manage user environment in a controlled way
is a common practice at HPC centers that is often embraced also by individual
programmers on their development machines. To support this common practice
Spack integrates with `Environment Modules
<http://modules.sourceforge.net/>`_ and `LMod
<http://lmod.readthedocs.io/en/latest/>`_ by
providing post-install hooks that generate module files and commands to manipulate them.
is a common practice at HPC centers that is often embraced also by
individual programmers on their development machines. To support this
common practice Spack integrates with `Environment Modules
<http://modules.sourceforge.net/>`_ and `LMod
<http://lmod.readthedocs.io/en/latest/>`_ by providing post-install hooks
that generate module files and commands to manipulate them.
..note::
If your machine does not already have a module system installed,
we advise you to use either Environment Modules or LMod. See :ref:`InstallEnvironmentModules`
for more details.
.._shell-support:
Modules are one of several ways you can use Spack packages. For other
options that may fit your use case better, you should also look at
:ref:`spack load <spack-load>` and :ref:`environments <environments>`.
----------------------------
Using module files via Spack
----------------------------
If you have installed a supported module system either manually or through
``spack bootstrap``, you should be able to run either ``module avail`` or
``use -l spack`` to see what module files have been installed. Here is
sample output of those programs, showing lots of installed packages:
If you have installed a supported module system you should be able to
run either ``module avail`` or``use -l spack`` to see what module
files have been installed. Here is sample output of those programs,
showing lots of installed packages:
..code-block::console
@@ -66,201 +62,9 @@ to load the ``cmake`` module:
$ module load cmake-3.7.2-gcc-6.3.0-fowuuby
Neither of these is particularly pretty, easy to remember, or
easy to type. Luckily, Spack has its own interface for using modules.
^^^^^^^^^^^^^
Shell support
^^^^^^^^^^^^^
To enable additional Spack commands for loading and unloading module files,
and to add the correct path to ``MODULEPATH``, you need to source the appropriate
setup file in the ``$SPACK_ROOT/share/spack`` directory. This will activate shell
support for the commands that need it. For ``bash``, ``ksh`` or ``zsh`` users:
..code-block::console
$ . ${SPACK_ROOT}/share/spack/setup-env.sh
For ``csh`` and ``tcsh`` instead:
..code-block::console
$ set SPACK_ROOT ...
$ source $SPACK_ROOT/share/spack/setup-env.csh
Note that in the latter case it is necessary to explicitly set ``SPACK_ROOT``
before sourcing the setup file (you will get a meaningful error message
if you don't).
When ``bash`` and ``ksh`` users update their environment with ``setup-env.sh``, it will check for spack-installed environment modules and add the ``module`` command to their environment; This only occurs if the module command is not already available. You can install ``environment-modules`` with ``spack bootstrap`` as described in :ref:`InstallEnvironmentModules`.
Finally, if you want to have Spack's shell support available on the command line at
any login you can put this source line in one of the files that are sourced
at startup (like ``.profile``, ``.bashrc`` or ``.cshrc``). Be aware though
that the startup time may be slightly increased because of that.
.._cmd-spack-load:
^^^^^^^^^^^^^^^^^^^^^^^
``spack load / unload``
^^^^^^^^^^^^^^^^^^^^^^^
Once you have shell support enabled you can use the same spec syntax
you're used to and you can use the same shortened names you use
everywhere else in Spack.
For example this will add the ``mpich`` package built with ``gcc`` to your path:
Some files were not shown because too many files have changed in this diff
Show More
Reference in New Issue
Block a user
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.