* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray currently requires swapping
the platform when looking for binaries, due
to #22800.
* Add SHA256 verification for bootstrapped software
Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450, we may need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
- "sources": lists the methods available for bootstrapping software
- "trusted": records whether a source is trusted or not
A subcommand has been added to "spack bootstrap" to list
the sources currently available.
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Allow trusting/untrusting bootstrapping methods
* Add binary tests for macOS and Ubuntu
* Add documentation
* Add a note on bash
Spack internally uses a patched version of `argparse`, mainly to backport Python 3 functionality
into Python 2. This PR makes Spack use `argparse` from the standard Python library for the
supported Python 3 versions. It has been extracted from #25371, where it was needed
to be able to use recent versions of `pytest`.
* Fixed formatting issues when using a pristine argparse.py
* Fix error message for Python 3.X when missing positional arguments
* Account for the change of API in Python 3.7
* Layout multi-valued args into columns in error messages
* Seamless transition in develop if argparse.pyc is in external
* Be more defensive in case we can't remove the file.
Add link type to spack.yaml format
Add tests to verify link behavior is correct for installed files
for all three view types
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* ampl: Add missing ampl_lic install and improve look of resources
* ampl: Add myself as maintainer
* ampl: Remove unused variable and delete extra lines
Co-authored-by: Rob Groner <rug262@psu.edu>
* rct: new packages (core packages and some dependencies)
* radical-entk: updated dependencies (according to comments)
* radical-gtod: updated version name
* radical-pilot: updated dependencies (according to comments)
* radical-saga: updated dependencies (according to comments)
* radical-utils: updated dependencies and set old versions deprecated
* saga-python: removed due to the absence of packages (on PyPI, GitHub); this project was replaced by `radical-saga`, so the corresponding package `py-radical-saga` should be used instead
* saga-python: rolled back, but with deprecation status
* ntplib: removed maintainer
* pika: removed maintainer
These commands were deprecated in #7098 and have
been failing with an error message since then.
This cleans up the code, since it is unlikely that anybody
is still using them.
* Fix for building shared lib when enabling ROCm, for STRUMPACK 5.1.1.
* Update patch for shared lib with STRUMPACK 5.1.1 and ROCm, also update FindHIP.cmake
* update patch for shared libs with ROCm
Preferred providers had a non-zero weight because in an earlier formulation of the logic program that was needed to prefer external providers over default providers. With the current formulation for externals this is not needed anymore, so we can give a weight of zero to both default choices and providers that are externals. _Using zero ensures that we don't introduce any drift towards having fewer providers, which was happening when minimizing positive weights_.
Modifications:
- [x] Default weight for providers starts at 0 (instead of 10, needed before to prefer externals)
- [x] Rules to compute the `provider_weight` have been refactored. There are multiple possible weights for a given `Virtual`. Only one gets selected by the solver (the one that minimizes the objective function).
- [x] `provider_weight` are now accounting for each different `Virtual`. Before there was a single weight per provider, even if the package was providing multiple virtuals.
* Give preferred providers a weight of zero
Preferred providers had a non-zero weight because in an earlier
formulation of the logic program that was needed to prefer
external providers over default providers.
With the current formulation for externals this is not needed anymore,
so we can give a weight of zero to default choices. Using zero
ensures that we don't introduce any drift towards having
fewer providers, which was happening when minimizing positive weights.
* Simplify how we compute weights for providers
Rewrite rules so that specific events (i.e. being
an external) unlock the possibility to use certain
weights. The weight being considered is then selected
by the minimization process to be the one that gives
the best score.
* Allow providers to have different weights for different virtuals
Before this change we didn't differentiate providers based on
the virtual they provide, which meant that packages providing
more than one virtual nonetheless had a single weight.
With this change there is a weight per virtual.
* elk package updated to handle the 3 latest versions; support for older
versions is dropped
* fixed typos
* openmp dependency handling added
* and for blis too
* Retain support for elk 3, deprecate
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cp2k: fix build with GCC-10+ and MPICH
* cp2k: update SIRIUS and ELPA dependencies
* elpa: add version 2021.05.001, add ROCm support, include SVE flags
This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:
```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```
Additionally, plenty of builds output padded paths in other places -- e.g., not just command
arguments, but in other `tty` messages via `llnl.util.filesystem` and other places. `Executable`
isn't really the right place for this.
This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.
- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
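A minimal sketch of the idea (the helper names here, like `output_filter`, are illustrative assumptions, not Spack's actual API):

```python
import contextlib
import re

# Illustrative module-level registry of output filters.
_output_filters = []


@contextlib.contextmanager
def output_filter(filter_fn):
    """Install a line filter for all output for the duration of the block."""
    _output_filters.append(filter_fn)
    try:
        yield
    finally:
        _output_filters.remove(filter_fn)


def _apply_filters(line):
    for filter_fn in _output_filters:
        line = filter_fn(line)
    return line


# Usage, roughly what build_process() would do when padding is enabled:
padding = re.compile(r"(/__spack_path_placeholder__)+")
with output_filter(lambda line: padding.sub("/[padded-to-512-chars]", line)):
    print(_apply_filters("'--prefix=/opt/__spack_path_placeholder__/__spack_path_placeholder__/zlib'"))
```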
These versions can cause weird concretizations, and it looks like the
old version of xsdk may not even work because of xsdktrilinos being
disabled. The hypre version tagged for xsdk@0.2 no longer exists at the
described location.
With the previous naming scheme, `trilinos@:10` concretizes to
`trilinos@xsdk-0.2.0`. Now, it's clear what the xsdk version is closest
to. Changed from tag to the corresponding commit SHA for safety.
* Do not allow cray build system patch for later version of otf2
* Modify flag_handler logic in the trilinos package
Modify flag_handler logic in the trilinos package to work better with compilers
other than CCE
Run CTest at build time with:
```
spack install --test=root openpmd-api@<version>
```
and run smoke-tests after install and loading of the package via
```
spack load -r /<spec>
spack test run /<spec>
```
This pull request adds a new workflow to build and deploy Spack Docker containers
from GitHub Actions. In comparison with our current system where we use Dockerhub's
CI to build our Docker containers, this workflow will allow us to now build for multiple
architectures and deploy to multiple registries. (At the moment x86_64 and Arm64 because
ppc64le is throwing an error within archspec.)
As currently set up, the PR will build all of the current containers (minus Centos6 because
those yum repositories are no longer available?) as both x86_64 and Arm64 variants. The
workflow is currently set up to build and deploy containers nightly from develop as well as
on tagged releases. The workflow will also build, but NOT deploy containers on a pull request
for the purposes of testing this PR. At the moment it is set up to deploy the built containers to
GitHub's Container Registry although, support for also uploading to Dockerhub/Quay can be
included easily if we decide to keep releasing on Dockerhub/want to begin releasing on Quay.
This is an attempt to fix "Missing base commit" messages in the codecov UI. Because we do not run
full tests on package PRs, package PRs' merge commits on `develop` don't have coverage info. It
appears that codecov will give you an error if the pseudo-base's coverage data doesn't all apply
properly to the real PR base, unless the `allow_coverage_offsets` option is set.
* See here for docs:
https://docs.codecov.com/docs/comparing-commits#pseudo-comparison
* See here for another potential solution:
https://community.codecov.com/t/2480/15
`compare_specs()` had a `colorful` keyword argument, but everything else in
spack uses `color` for this.
- [x] rename the argument
- [x] make the default follow spack's `--color=always/never/auto` setting
* Bump py-boto3, add python constraints, bump deps
* Update var/spack/repos/builtin/packages/py-boto3/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-boto3/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-boto3/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-os-service-types
* Update var/spack/repos/builtin/packages/py-os-service-types/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-oslo-i18n
* Update var/spack/repos/builtin/packages/py-oslo-i18n/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Bump py-botocore and add python constraints
* Update var/spack/repos/builtin/packages/py-botocore/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Add a workflow to test bootstrapping clingo on
different platforms so that we can detect changes
that break it.
Compute `site_packages_dir` in `bootstrap.py` as it was
before #24095, until we figure out a better way to override
that attribute.
These are the versions tested (and successfully patched) against
intel-tbb.nvhpc-remove-flags.2017.patch: @2017, @2017.8, @2018, @2018.6
intel-tbb.nvhpc-remove-flags.2019.patch: @2019
intel-tbb.nvhpc-remove-flags.2019.1.patch: @2019.[1-6]
intel-tbb.nvhpc-remove-flags.2019.7.patch: @2019.[7-8]
intel-tbb.nvhpc-remove-flags.2019.9.patch: @2019.9, 2020.[0-3]
The intel-tbb.nvhpc-version-script-fix.2017.patch was tested and
applied successfully against all of the versions above.
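Expressed as package directives, the tested ranges above map roughly onto conditional patches like the following (a sketch; the exact `when=` constraints in the real intel-tbb recipe may differ):

```python
from spack import *


class IntelTbb(Package):  # excerpt; base class and other contents omitted
    # Illustrative mapping of the tested version ranges to conditional patches.
    patch("intel-tbb.nvhpc-remove-flags.2017.patch", when="@2017:2018.6 %nvhpc")
    patch("intel-tbb.nvhpc-remove-flags.2019.patch", when="@2019 %nvhpc")
    patch("intel-tbb.nvhpc-remove-flags.2019.1.patch", when="@2019.1:2019.6 %nvhpc")
    patch("intel-tbb.nvhpc-remove-flags.2019.7.patch", when="@2019.7:2019.8 %nvhpc")
    patch("intel-tbb.nvhpc-remove-flags.2019.9.patch", when="@2019.9:2020.3 %nvhpc")
    patch("intel-tbb.nvhpc-version-script-fix.2017.patch", when="@2017:2020.3 %nvhpc")
```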
Padded install paths can get very long in the verbose install
output. This has to be filtered out by the Executable class, as it
generates these debug messages.
- [x] add ability to filter paths from Executable output.
- [x] add a context manager that can enable path filtering
- [x] make `build_process` in `installer.py` enable path filtering when padding is in use
This should hopefully allow us to see most of the build output in
Gitlab pipeline builds again.
`build_process` has been around a long time but it's become a very large,
unwieldy method. It's hard to work with because it has a lot of local
variables that need to persist across all of the code.
- [x] To address this, convert it into its own `BuildInfoProcess` class.
- [x] Start breaking the method apart by factoring out the main
installation logic into its own function.
When context managers are used to save and restore values, we need to remember
to use try/finally around the yield in case an exception is thrown. Otherwise,
the cleanup will be skipped.
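For example, a save-and-restore context manager should be structured like this (a generic sketch, not a specific Spack helper):

```python
import contextlib


@contextlib.contextmanager
def preserve_attribute(obj, name, temporary_value):
    """Temporarily set obj.name, restoring the old value even if the body raises."""
    saved = getattr(obj, name)
    setattr(obj, name, temporary_value)
    try:
        yield
    finally:
        # Without try/finally, an exception in the with-block would skip
        # this restore and leak the temporary value.
        setattr(obj, name, saved)
```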
* rnpletal: New package
RNPL is an old package that is still used today by my collaborators, but doesn't see any development any more. I'm creating a Spack package merely to make it easier to install it on various systems. The code is not modern (C without prototypes – yes, that used to be a thing), and a large diff modernizes the code to make it palatable to modern C and Fortran compilers.
RNPL contains several sub-packages. The current Spack package builds only the main one.
* rnpletal: Remove unused import
* Convert into AutotoolsPackage
* Don't check for "shared" variant
* rnpletal: Change "version" to `develop`
* rnpletal: Use existing `configure` function
* adjust for erroneous detection of nvc as gcc
adjust for erroneous detection of nvc as gcc when it is built with gcc
* add missing parenthesis :/
* fix trailing whitespace
* re-work hdf5 patch for nvc to make it more general
* flake8 fixes
* Render as comment
Render intended note as a comment rather than logical constraint
Co-authored-by: Frank Willmore <willmore@anl.gov>
- Change config from the undocumented `use_curl: true/false` to `url_fetch_method: urllib/curl`.
- Documentation of `url_fetch_method` in `defaults/config.yaml`
- Default fetch option explicitly set to `urllib` for users who may not have curl on their system
To upgrade from `use_curl` to `url_fetch_method`, run `spack config update config`
* kadath: New package
* Update var/spack/repos/builtin/packages/kadath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/kadath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/kadath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/kadath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* kadath: Add description to MPI variant
* kadath: Add empty line
* kadath: Add variant "codes=none" to avoid empty default
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add variant with compiler optimization
Update package.py to include a variant with compiler optimization, benchmarked at the A-HUG hackathon to improve major kernel time by roughly 3%.
* fix style
* Update var/spack/repos/builtin/packages/laghos/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The output order for `spack diff` is nondeterministic for larger diffs -- if you
run it several times, it will not put the fields in the spec in the same order on
successive invocations.
This makes a few fixes to `spack diff`:
- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`
- [x] Sort the lists of diff properties so that the output is always in the same order.
- [x] Make the output for different fields the same as what we use in the solver. Previously, we
would use `Type(value)` for non-string values and `value` for strings. Now we just use
the value. So the output looks a little cleaner:
```
== Old ========================== == New ====================
@@ node_target @@ @@ node_target @@
- gdbm Target(x86_64) - gdbm x86_64
+ zlib Target(skylake) + zlib skylake
@@ variant_value @@ @@ variant_value @@
- ncurses symlinks bool(False) - ncurses symlinks False
+ zlib optimize bool(True) + zlib optimize True
@@ version @@ @@ version @@
- gdbm Version(1.18.1) - gdbm 1.18.1
+ zlib Version(1.2.11) + zlib 1.2.11
@@ node_os @@ @@ node_os @@
- gdbm catalina - gdbm catalina
+ zlib catalina + zlib catalina
```
I suppose if we want to use `repr()` in the output we could do that and be
consistent, but we don't do that elsewhere -- the types of things in Specs are
all stringifiable, so the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.
* lorene: Install only executables, not unrelated files in the same directory
* lorene: Don't determine compile dependencies
The current way doesn't work (cpp misses C++ include paths), and we don't need dependencies anyway.
* lorene: Correct BLAS library names
* lorene: Remove comment
Gitlab truncates job trace output (even the complete raw output) at 4MB,
so this change captures it to a file under "user_data" artifacts as well,
to make sure we can debug output from the end of the rebuild job.
When a spec fails to build on `develop`, instead of storing an empty file as the entry in the broken specs list, this change stores the full spec yaml as well as links to the failing pipeline and job.
A `spack diff` will take two specs, and then use the spack.solver.asp.SpackSolverSetup to generate
lists of facts about each (e.g., nodes, variants, etc.) and then take a set difference between the
two to show the user the differences.
Example output:
```
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
- python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+ python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
- openssl Version(1.0.2u)
+ openssl Version(1.1.1k)
- python Version(2.7.8)
+ python Version(3.8.11)
```
Currently this uses diff-like output, but we will attempt to improve on this in the future.
One use case for `spack diff` is when a user needs to disambiguate two similar installs and
cannot remember how they differ. The command can also output `--json` for more analysis-oriented
use cases where we want to save complete data with all diffs and the intersection. However, the
command is really intended for command line use, and we will likely have an analyzer better
suited to saving data.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Catch ConnectionError from CDash reporter
Catch ConnectionError when attempting to upload the results of `spack install`
to CDash. This follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
* Catch HTTP Error 400 (Bad Request) in relate_cdash_builds()
* sst-elements: add optional support for flashdimmsim, dramsim3 and
add new packages for each
* sst-dumpi: add version 7.1.0
* sst-core: autotools dependencies are required for all versions
* new package: dtc
* add error message redirect for +dumpi, otf, and otf2: these are not
currently supported
Modifications:
- Remove the "build tests" workflow from GitHub Actions
- Setup a similar e2e test on Gitlab
In this way we'll reduce the load on GitHub Actions workflows, and the e2e tests will
benefit from the buildcache reuse granted by pipelines.
ENABLE_SPLASH configuration has been removed entirely after 21.06 so
patch is no longer necessary after #24931. (Versions between 0.90.1 and
21.06 will likely still need a patch, and while it's not clear if this
patch is the right one, seems better to leave something in.)
- add version 9.1.2
- set a license file
- set the license environment variable
- remove the download and license information out of the description so
it does not show up in environment modules
- extend python and set python version constraints
- build gurobipy to be used in any compatible python, used for more
extensive computations than the gurobi shell
- remove preexisting PYTHONPATH from gurobi.sh as the shell uses a
built-in python, which will likely be different from "system" python
- add maintainer
`spack style` previously used a Travis CI variable to figure out
what the base branch of a PR was, and this was apparently also set
on `develop`. We switched to `GITHUB_BASE_REF` to support GitHub
Actions, but it looks like this is set to `""` in pushes to develop,
so `spack style` breaks there.
This PR does two things:
- [x] Remove `GITHUB_BASE_REF` knowledge from `spack style` entirely
- [x] Handle `GITHUB_BASE_REF` in style scripts instead, and explicitly
pass the base ref if it is present, but don't otherwise.
This makes `spack style` *not* dependent on the environment and fixes
handling of the base branch in the right place.
This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.
We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) options are used. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.
- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance
* [py-lmfit] fixed py-asteval dependency requirements
* [py-lmfit] added version 1.0.2
* [py-lmfit] flake8
* [py-lmfit] 1.0.2 requires python 3.6
* [py-lmfit] removed newer dependency requirements to be in line with setup.py not requirements.txt
* pbs: new virtual package
Some of our clusters have an older installation of
libtorque and tm.h that are *not* from OpenPBS. Using the current
openpbs dependency for openmpi causes concretization errors due to
restrictions on older python and hwloc requirements that don't apply,
even with an external non-buildable installation.
The new 'torque' bundle package allows users to point to that external
installation without problems.
Detailed description of torque by Sergey Kosukhin <skosukhin@gmail.com>
Intel oneAPI installs maintain a lock file in XDG_RUNTIME_DIR,
which by default exists in /tmp (and is shared by all component
installs). This prevented multiple oneAPI components from being
installed in parallel. This commit sets XDG_RUNTIME_DIR to exist
within Spack's installation Stage, which allows multiple components
to be installed at the same time.
* aws-parallelcluster: update maintainers list
Signed-off-by: Tim Lane <tilne@amazon.com>
* aws-parallelcluster: add v2.11.1
Signed-off-by: Tim Lane <tilne@amazon.com>
This PR fixes the tesseract package
- add missing dependencies
- build documentation
- build and install java component
- build and install training component
This uses our bootstrapping logic to automatically install dependencies for
`spack style`. Users should no longer have to pre-install all of the tools
(`isort`, `mypy`, `black`, `flake8`). The command will do it for them.
- [x] add logic to bootstrap specs with specific version requirements in `spack style`
- [x] remove style tools from CI requirements (to ensure we test bootstrapping)
- [x] rework dependencies for `mypy` and `py-typed-ast`
- `py-typed-ast` needs to be a link dependency
- it needs to be at 1.4.1 or higher to work with python 3.9
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* Adding package for omegaconf
* Update var/spack/repos/builtin/packages/py-omegaconf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Changing py-omegaconf to use github source URL instead of pypi
* Style fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Worked with flecsi developers to tighten, relax, and clarify
constraints and better understand how the flecsi project uses
legion. In the process, discovered that flecsi@1.4 cannot be
built with legion without heavy changes/reverts to the legion
and gasnet spackages.
Also, most importantly, fixed branding as to how flecsi is spelled
We add compilation flags when using %nvhpc to suppress warnings
(which due to global -Werror flag in the build get promoted to
errors) for the following:
Diagnostic 111: statement is unreachable
Diagnostic 177: variable "foo" was declared but never referenced
Diagnostic 188: enumerated type mixed with another type
Diagnostic 550: variable "foo" was set but never used
#24095 introduced a couple of bugs, which are fixed here:
1. The module path is computed incorrectly for bootstrapped clingo
2. We remove too many paths from `sys.path` in case of failures
z3 is a dependency of llvm and llvm-amdgpu, and when z3 python bindings
are enabled it depends on py-setuptools as a run dependency. That's
fine, except that py-setuptools now influences the hash of
llvm/llvm-amdgpu, which can be very annoying when another package
restricts the py-setuptools version -- you'll end up recompiling llvm
for no good reason :(.
* Updated the lbann package to not enable OpenMP in the BLAS package when
working on Darwin systems.
* Add the Sphinx RTD theme as an explicit dependency when building documentation
In some cases FindHDF5.cmake returned wrong values for the HDF5 library names and path. For example, it returned hdf5-shared as the library name without a search path and without checking whether this is really an existing shared library. By adding HDF5_NO_FIND_PACKAGE_CONFIG_FILE=True/ON to the CMake options, the FindHDF5 module does not rely on a properly installed hdf5-config.cmake and thus searches for the library and its paths itself. This results in a usable return value and fenics works afterwards.
* Updates for dependencies in main branch
* Add more depends
* Make CMake available at runtime for fenics-dolfinx
* Add maintainer
Co-authored-by: Garth N. Wells <gnw20@cam.ac.uk>
Although `cpio` is present in many environments, it may not always be
available.
The failure to build this package can be reproduced in a fresh Docker
image `debian:10`.
* trilinos: rename basker variant
The Basker solver is part of amesos2 but is clearer without the extra
scoping.
* trilinos: automatically enable teuchos and remove variant
Basically everything in trilinos needs teuchos
* trilinos: group top-level dependencies
* trilinos: update dependencies, removing unused
- GLM, X11 are unused (x11 lacks dependency specs too)
- Python variant is more like a TPL so rearrange that
- Gtest internal package shouldn't be compiled or exported
- Add MPI4PY requirement for pytrilinos
* trilinos: remove package meta-options
- XSDK settings and "all opt packages" are not used anywhere
- all optional packages are dangerous
* trilinos: Use hwloc iff kokkos
See #19119, also the HWLOC tpl name was misspelled so this was being ignored before.
* Flake
* Fix trilinos +netcdf~mpi
* trilinos: default to disabling external dependencies
* Remove teuchos from downstream dependencies
* fixup! trilinos: Use hwloc iff kokkos
* Add netcdf requirements to packages with ^trilinos+exodus
* trilinos: disable exodus by default
* fixup! Add netcdf requirements to packages with ^trilinos+exodus
* trilinos: only enable hwloc when @13: +kokkos
* xyce: propagate trilinos dependencies more simply
* dtk: fix missing boost dependency
* trilinos: remove explicit metis dependency
* trilinos: require metis/parmetis for zoltan
Disable zoltan by default to minimize default dependencies
* trilinos: mark mesquite disabled and fix kokkos arch
* xsdk: fix trilinos to also list zoltan [with zoltan2]
* ci: remove nonexistent variant from trilinos
* trilinos: add missing boost dependency
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
Third-party Python libraries may be installed in one of several directories:
1. `lib/pythonX.Y/site-packages` for Spack-installed Python
2. `lib64/pythonX.Y/site-packages` for system Python on RHEL/CentOS/Fedora
3. `lib/pythonX/dist-packages` for system Python on Debian/Ubuntu
Previously, Spack packages were hard-coded to use (1). Now, we query the Python installation itself and ask it which to use. Ever since #21446 this is how we've been determining where to install Python libraries anyway.
Note: there are still many packages that are hard-coded to use (1). I can change them in this PR, but I don't have the bandwidth to test all of them.
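The query boils down to asking the interpreter where third-party packages belong (a sketch; the actual change runs an equivalent query through the spec's own Python and also handles interpreters where `distutils` isn't available):

```python
import sysconfig

# Ask the interpreter itself where third-party packages go, rather than
# hard-coding lib/pythonX.Y/site-packages.  Depending on the interpreter this
# may be a site-packages or dist-packages directory, under lib or lib64.
paths = sysconfig.get_paths()
print(paths["purelib"])  # pure-Python packages
print(paths["platlib"])  # compiled extension modules (lib64 on some distros)
```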
* Python: handle dist-packages and site-packages
* Query Python to find site-packages directory
* Add try-except statements for when distutils isn't installed
* Catch more errors
* Fix root directory used in import tests
* Rely on site_packages_dir property
* Change url and checksums for libpng to official sourceforge archives
* Update url scheme from http to https
* switch to .xz archives
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add py-h5py version 3.3.0
The mpi4py dependency was bumped to 3.0.2 in setup.py. I'm not sure if that's actually required or not, but nothing lower is still tested.
* Use environment variable to stop h5py using setuptools setup_requires feature
* Add myself as a maintainer for py-h5py
* [py-transformers] can now use newer versions of tokenizers
* [py-transformers] Added version 4.6.1
* [py-transformers] removing old patch
* [py-transformers] boto3 no longer needed
* first build of py-torchmeta
* updated versions for torchvision and torch
* [py-torchmeta] using pil provider
Co-authored-by: Sid Pendelberry <sid@rit.edu>
The Makefile for the MAGMA smoke tests uses pkg-config to find
the MAGMA compile flags, but the test() routine in the spack
package was not configured to provide the location of the
pkg-config file. This modification sets PKG_CONFIG_PATH correctly
to allow the smoketests to successfully compile. It also removes
the *_dir variables which were unused by the magma
examples/Makefile.
Using the original concretizer, trying to concretize py-jupyterlab fails
with
```
==> Error: Invalid Version range: 6.1.0:6.1
```
because py-tornado does not have a 6.1.0 version but only a 6.1 one.
Makefiles for libtirpc hardcode the -pipe compiler flag;
nvhpc compilers do not recognize that flag.
This PR provides a patch to remove the -pipe flag from the Makefile.
The patch should work with libtirpc@1.2.6 and @1.1.4.
jupyterlab was looking for its application directory inside the python
prefix instead of its own. This was fixed by setting the corresponding
environment variable.
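A sketch of that fix in the package recipe (assuming the variable in question is `JUPYTERLAB_DIR`; the actual package may set it differently):

```python
from spack import *


class PyJupyterlab(PythonPackage):
    # ... versions and dependencies omitted ...

    def setup_run_environment(self, env):
        # Assumption: point JupyterLab's application directory at this
        # package's own prefix rather than the Python prefix.
        env.set("JUPYTERLAB_DIR", self.prefix.share.jupyter.lab)
```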
* Allow enabling/disabling bootstrapping and customizing the store location
This PR adds configuration handles to allow enabling
and disabling bootstrapping, and to customize the store
location.
* Move bootstrap related configuration into its own YAML file
* Add a bootstrap command to manage configuration
Spack allows users to set `padded_length` to pad out the installation path in
build farms so that any binaries created are more easily relocatable. The issue
with this is that the padding dominates installation output and makes it
difficult to see what is going on. The padding also causes logs to easily
exceed size limits for things like GitLab artifacts.
This PR fixes this by adding a filter in the logger daemon. If you use a
setting like this:
```
config:
  install_tree:
    padded_length: 512
```
Then lines like this in the output:
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_pla/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
will be replaced with the much more readable:
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/[padded-to-512-chars]/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
You can see that the padding has been replaced with `[padded-to-512-chars]` to
indicate the total number of characters in the padded prefix. Over a long log
file, this should save a lot of space and allow us to see error messages in
GitHub/GitLab log output.
The *actual* build logs still have full paths in them. Also lines that are
output by Spack and not by a package build are not filtered and will still
display the fully padded path. There aren't that many of these, so the change
should still help reduce file size and readability quite a bit.
015e29efe1 that introduced this section to the
documentation said “two” here instead of the actual count, three.
9f54cea5c5 then added a fourth, BLAS/LAPACK.
Rather than trying to keep this leading count in sync, this change just replaces
the wording with something more generic/stable.
Getting rid of another top-level file.
`coverage.py` has supported `pyproject.toml` since version 5.0, and
all versions of coverage so far work with python 2.7. We just need to
ensure that a version of coverage with the `toml` extra is installed
in the test environment.
I tested this with `coverage run`, `coverage report`, and `coverage html`.
* openPMD-api: rename develop
Rename to match known Spack version comparison schemes:
```
develop>main>master>head>trunk>9999>0>z>a
```
Currently, the hdf5 patch that is pre-0.14.0 is also applied to
`dev`, which naturally fails (already applied).
* fix dev in warpx
* py-markupsafe: add 2.0.1
* Update var/spack/repos/builtin/packages/py-markupsafe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This moves our `mypy` configuration from `.mypy.ini` to `pyproject.toml`
and increases the minimum `mypy` version in the tests.
- [x] move `mypy` configuration to `pyproject.toml`
- [x] remove `.mypy.ini`
- [x] ensure that `mypy` version 0.900 or higher is used in tests
Ideally a test-only dependency wouldn't end up in the build at all, but until then,
constrain the gtest requirement to at most 1.10.
See e4s job failure at https://gitlab.spack.io/spack/spack/-/jobs/349959 .
Looks like 1.11 introduces some breaking incompatibilities, so perhaps
we should transition later.
* fix remaining flake8 errors
* imports: sort imports everywhere in Spack
We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.
This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
* Fix compiler test
Use `self.spec.satisfies` on compiler to determine if a flag should be
applied or not. This approach avoids issues with the strings `gcc`
or `clang` appearing in the full path to the compiler executables, as
happens with spack-installed compilers (e.g. `nvhpc%gcc`).
* Limit compiler name search to last path component
@skosukhin pointed out that the cflag modification should happen for any
clang or gcc compiler, regardless of what compiler spec provides them.
This commit reverts to searching for a compiler name containing "gcc"
or "clang", but limits the search to the last path component, which
avoids matching spack-installed compilers built with gcc (e.g.
`nvhpc%gcc`), which will have "gcc" in the compiler path.
* Use `os.path` rather than `pathlib`
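A sketch of the resulting handler (the flag appended below is purely illustrative, not the package's actual flag):

```python
import os

from spack import *


class Example(Package):  # sketch, not the actual package
    def flag_handler(self, name, flags):
        # Inspect only the executable name, not the full path, so a
        # spack-built compiler living under a gcc-built prefix
        # (e.g. nvhpc%gcc) is not mistaken for gcc itself.
        compiler_name = os.path.basename(self.compiler.cc)
        if name == "cflags" and ("gcc" in compiler_name or "clang" in compiler_name):
            flags.append("-Wno-error")  # illustrative flag only
        return (flags, None, None)
```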
Co-authored-by: Paul Henning <phenning@lanl.gov>
`dateutil.parser` was an optional dependency for CVS tests. It was failing on macOS
because the dateutil types were not being installed, and mypy was failing *even when the
CVS tests were skipped*. This seems like it was an oversight on macOS --
`types-dateutil-parser` was not installed there, though it was on Linux unit tests.
It takes 6 lines of YAML and some weird test-skipping logic to get `python-dateutil` and
`types-python-dateutil` installed in all the tests where we need them, but it only takes
4 lines of code to write the date parser we need for CVS, so I just did that instead.
Note that CVS date format can vary from system to system, but it seems like it's always
pretty similar for the parts we care about.
- [x] Replace dateutil.parser with a simpler date regex
- [x] Lose the dependency on `dateutil.parser`
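The replacement is essentially a few lines of regex-based parsing (a sketch under the assumption of a `YYYY-MM-DD HH:MM:SS` timestamp; the real helper may differ in detail):

```python
import re
import time


def parse_cvs_date(string):
    """Turn a CVS-style 'YYYY-MM-DD HH:MM:SS' timestamp into seconds since the epoch."""
    match = re.search(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", string)
    if not match:
        return 0.0
    return time.mktime(time.strptime(match.group(0), "%Y-%m-%d %H:%M:%S"))
```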
Previous tests of `spack style` didn't really run the tools --
they just ensure that the commands worked enough to get coverage.
This adds several real tests and ensures that we hit the corner
cases in `spack style`. This also tests success as well as failure
cases.
This consolidates code across tools in `spack style` so that each
`run_<tool>` function can be called indirectly through a dictionary
of handlers, and so that checks like finding the executable for the
tool can be shared across commands.
- [x] rework `spack style` to use decorators to register tools
- [x] define tool order in one place in `spack style`
- [x] fix python 2/3 issues to get `isort` checks working
- [x] make isort error regex more robust across versions
- [x] remove unused output option
- [x] change vestigial `TRAVIS_BRANCH` to `GITHUB_BASE_REF`
- [x] update completion
This PR configures the spack docbook packages
- docbook-xsl
- docbook-xml
The public entities are now mapped to the locally installed files of the
respective packages. The example catalogs are left in place and
XML_CATALOG_FILES points to the newly created catalogs.
Perl keeps copies of the bzip2 and zlib source code in its own source
tree and by default uses them in favor of outside libraries. Instead,
put these dependencies under control of spack and tell perl to use the
spack-built versions.
* py-keyring: fix installation on linux
* Update var/spack/repos/builtin/packages/py-keyring/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-keyring/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We should not fail the generate stage simply due to the presence of
a broken-spec somewhere in the DAG. Only fail if the known broken
spec needs to be rebuilt.
This PR adds a context manager that permits grouping the common part of `when=` arguments and adding it to the context:
```python
class Gcc(AutotoolsPackage):
with when('+nvptx'):
depends_on('cuda')
conflicts('@:6', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada')
conflicts('languages=brig')
conflicts('languages=go')
```
The above snippet is equivalent to:
```python
class Gcc(AutotoolsPackage):
depends_on('cuda', when='+nvptx')
conflicts('@:6', when='+nvptx', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada', when='+nvptx')
conflicts('languages=brig', when='+nvptx')
conflicts('languages=go', when='+nvptx')
```
which requires repeating the `when='+nvptx'` argument. The context manager might help improve readability and permits grouping together directives related to the same semantic aspect (e.g. all the directives needed to model the behavior of `gcc` when `+nvptx` is active).
Modifications:
- [x] Added a `when` context manager to be used with package directives
- [x] Add unit tests and documentation for the new feature
- [x] Modified `cp2k` and `gcc` to show the use of the context manager
I installed curl on my mac and it picked up a homebrew (I think?)
installation of gsasl. A later system update broke git because of the
implicitly added dependency. Explicitly disabling libraries that *might*
exist on the system is the safe approach here.
```
dyld: Library not loaded: /usr/local/opt/gsasl/lib/libgsasl.7.dylib
Referenced from: /rnsdhpc/code/spack/opt/spack/apple-clang/curl/gag5v3c/lib/libcurl.4.dylib
Reason: image not found
error: git-remote-https died of signal 6
```
* Added Perl workaround for CUDA <= 8
* Re-wrapped comment
* Proofreading corrections
* Added a reference
* Do not override Perl include path
* Retrieve shell once
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* trilinos: add teko conflict
* trilinos: improve gotype variant
Instead of 'none' and 'long' typically being the same (but not for older
trilinos versions), add an explicit 'all' variant that only works for
older trilinos which supports multiple simultaneous tpetra
instantiations.
* trilinos: add self as maintainer
* trilinos: disable vendored gtest by default
This changes several conflicting variants to a single
multi-value variant, and uses conflicts instead of raising InstallError.
(With clingo, requesting +gui automatically selects features=huge!)
I have also rearranged the dependencies for clarity and simplified the
configure args.
ci: only write to broken-specs list on SpackError
Only write to the broken-specs list when `spack install` raises a SpackError,
instead of writing to this list unnecessarily when infrastructure-related problems
prevent a develop job from completing successfully.
If two Specs have the same hash (and prefix) but are not equal, Spack
originally had logic to detect this and raise an error (since both
cannot be installed in the same place). Recently this has eroded and
the check no longer works; moreover, when defining projections (which
may truncate the hash or other distinguishing properties from the
prefix) Spack was also failing to detect collisions (in both of these
cases, Spack would overwrite the old prefix with the new Spec).
This PR maintains a list of all "taken" prefixes: if a hash is not
registered (i.e. recorded as installed in the database) but the prefix
is occupied, that is a collision. This can detect collisions created
by defining projections (specifically when they omit the hash).
The PR does not detect collisions where specs have the same hash
(and prefix) but are not equal.
Fix the syntax of the conflict between numpy 1.21.0 and gcc 11 so that the clingo
concretizer recognizes it.
In addition, the upstream master branch was renamed to main.
* Switch hdf5 package from autotools to cmake.
* Add variant for building with zlib, default to ON.
* Update for format requirements.
* Format change.
* Fix breakage from last merge from develop.
Switch szip to use libaec (unrestricted encryption).
Remove 'static' variant: static libs will only be installed when
~shared.
* Improve args based on suggestions from pull request.
* Update code URL to github.com
Add/modify 4 depends_on lines to fix running "spack graph --deptype=link hdf5".
* Remove trailing whitespace.
* Remove dependencies added solely to make "spack graph --deptype=link" work.
* Add new version HDF5 1.8.22.
* Remove unnecessary java_check.
* Fix whitespace for style checks.
* Reverted zlib version dependency to 1.1.2:.
zlib variant removed.
api version default renamed "default".
* Remove blank line.
* Whitespace corrections.
* Removed unnecessary 'debug' variant.
* Fix typo in version number in conflict for '+szip'.
* Set default for tools variant to True.
Remove patch functions dependent on 'libtool' file that cmake doesn't
produce.
* Remove line to set ONLY_SHARED_LIBS to true.
Add post_install code to install only one version of tools with shared
linkage and original tool names.
* Remove trailing white space and import of glob package not used.
* Leave BUILD_TESTING set to default which is ON.
* Remove post_install code to install only one version of tools because
some dependent packages running tests in e4s testing are using
h5diff-shared. Keep both tools versions for now.
* No longer need to import os.
Instead of refusing to build +mpi with gcc10, add what I guess is now
the standard workaround, i.e., `-fallow-argument-mismatch`.
Getting this into pfunit's cmake-based but kinda non-standard build is
a bit ugly, but you gotta do what you gotta do...
Version 1.17 of DD4hep was renamed from "01-17-00" to "01-17", in line
with the naming conventions of previous releases. Since release archives
contain a subdirectory with the version string in it, this changes the contents
of the tarball ever so slightly, so the SHA-256 checksum must change as well.
Fix url to find newer versions, add newest version 4.0.2 and add
variants for
- cxxstd: To use a specific c++ standard
- static: Enable or disable build of static libraries
- boost: Boost support
- sqlite: SQLite support
- postgresql: PostgreSQL support
When having a few packages loaded, installing go-bootstrap will fail
because the `PATH` variable is truncated at 4096 bytes. Increase the
limit to 128 KiB to make longer paths fit.
1. "+simplex" conflicts with "dealii@:9.2" [The interface to simplex is supported from version 9.3.0 onwards. Please explicitly disable this variant via ~simplex]
2. "+arborx" conflicts with "dealii@:9.2" [The interface to arborx is supported from version 9.3.0 onwards. Please explicitly disable this variant via ~arborx]
Prior to any Spack build, Spack modifies PATH etc. to help the build
find the dependencies it needs. It also allows any package to define
custom environment modifications (and furthermore a package can
specify environment modifications to apply when it is used as a
dependency). If an external package defines custom environment
modifications that alter PATH, and the external package is in a merged
or system prefix, then that prefix could "override" the Spack-built
packages.
This commit reorders environment modifications so that PrependPath
actions which expose Spack-built packages override PrependPath actions
for custom environment modifications of external packages.
In more detail, the original order of environment modifications is:
* Modules
* Compiler flag variables
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH for dependencies
* Custom package.py modifications in the following order:
* dependencies
* root
This commit changes the order:
* Modules
* Compiler flag variables
* For each external dependency
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
* Custom modifications
* For each Spack-built dependency
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
* Custom modifications
Spack pipelines need to take specific actions internally that depend
on whether the pipeline is being run on a PR to spack or a merge to
the develop branch. Pipelines can also run in other repositories,
which represents other possible use cases than just the two mentioned
above. This PR creates a "SPACK_PIPELINE_TYPE" gitlab variable which
is propagated to rebuild jobs, and is also used internally to determine
which pipeline-specific tasks to run.
One goal of the PR is to fix an issue where rebuild jobs which failed on
develop pipelines did not properly report the broken full hash to the
"broken-specs-url".
* Add Externally Findable section to info command
* Use comma delimited detection attributes in addition to boolean value
* Unit test externally detectable part of spack info
yes I know this name isn't popular but that's the way it is right now.
master and the upcoming v5.0.x release branch use git submodules.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* [py-xxhash] created template
* [py-xxhash] working on dependencies
* [py-xxhash] set version for xxhash
* [py-xxhash] Final cleanup
- added homepage
- added description
- removed fixmes
* Force the Python interpreter with an env variable
This commit forces the Python interpreter with an
environment variable, to ensure that the Python set
by the "setup-python" action is the one being used.
Due to the policy adopted by Spack to prefer python3
over python we may end up picking a Python 3.X
interpreter where Python 2.7 was meant to be used.
* Revert "Update conftest.py (#24473)"
This reverts commit 477c8ce820.
* Make python-dateutil a soft dependency for unit tests
Before #23212 people could clone spack and run
```
spack unit-tests
```
while now this is not possible, since python-dateutil is
a required but not vendored dependency. This change makes
it not a hard requirement, i.e. it will be used if found
in the current interpreter.
* Workaround mypy complaint
This commit fixes a subtle bug that may occur when
a package is a "possible_provider" of a virtual but
no "provides_virtual" can be deduced. In that case
the cardinality constraint on "provides_virtual"
may arbitrarily assign a package the role of provider
even if the constraints for it to be one are not fulfilled.
The fix reworks the logic around three concepts:
- "possible_provider": a package may provide a virtual if some constraints are met
- "provides_virtual": a package meet the constraints to provide a virtual
- "provider": a package selected to provide a virtual
Spack packages can now fetch versions from CVS repositories. Note
this fetch mechanism is unsafe unless using :extssh:. Most public
CVS repositories use an insecure protocol implemented as part of CVS.
Here we are adding an install_times.json into the spack install metadata folder.
We record a total, global time, along with the times for each phase. The type
of phase or install start / end is included (e.g., build or fail)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The original implementation of `flag_handler` searched the
`self.compiler.cc` string for `clang` or `gcc` in order to add a flag
for those compilers. This approach fails when using a spack-installed
compiler that was itself built with gcc or clang, as those strings will
appear in the fully-qualified compiler executable paths. This commit
switches to searching for `%gcc` or `%clang` in `self.spec`.
Co-authored-by: Paul Henning <phenning@lanl.gov>
the 4.1.1 release has fixes for problems that kept 4.1.0 from
being the default open mpi version to build using spack.
related to #24396
Signed-off-by: Howard Pritchard <hppritcha@gmail.com>
* remove blueos check on cuda variant, fix typo
* restore necessary compiler guard
* remove axom+cuda from testing because it only partially works outside ppc systems
This PR does the following:
- adds version corresponding to commit at 08/03/2020
- adds missing get_DE_events.py script
- adds dependencies needed by get_DE_events.py
- removes REDItoolDenovo.py.patch and python2to3.patch in favor of
running 2to3 and reindent pre-build
- add batch_sort.patch to handle differences in string/char handling
between python2 and python3
- adds a variant for the Nature Protocol
- adds dependencies for the nature_protocol variant
- added myself as maintainer
This PR adds a new version of reditools from git.
This PR fixes a couple of issues with the opencv package, mostly in
relation to cuda. This is only focused on cuda, not any of the other
variants.
- Added versions to the contrib_vers list. Added for all that can be
retrieved from github. The one for the latest version was missing.
- Added a cmake patch for v3.2.0.
- Deprecated versions 3.1.0 and 3.2.0 as neither of those could be
built, with or without cuda.
- Adjusted constraints on applying initial cmake patch.
- Added cudnn dependency when +cuda.
- Set constraints for cudnn and cuda for older versions of opencv.
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build.
In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).
Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
Building magma has been failing consistently and is currently
blocking PRs from being merged. Disable that spec while we
investigate the failure and work on a fix.
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:
### Docker
Docker works by way of BUILDKIT and being able to specify --secret. What this means is that you can prefix a line with a mount of type secret as follows:
```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Where the id for one or more secrets corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined I would export them on my host and do:
```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
```
And when we add `env` to the secret definition that tells the build to look for the secret with id "st" in the environment variable `SPACKMON_TOKEN` for example.
If the user is building locally with a local spack monitor, we also need to set the `--network` to be the host, otherwise you can't connect to it (a la isolation of course.)
## Singularity
Singularity doesn't have as nice an ability to clearly specify secrets, so (hoping this eventually gets implemented) what I'm doing now is providing the user instructions to write the credentials to a file, add it to the container to source, and remove when done.
## Tags
Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):
```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update the PR here to not check if the attribute is there (it will be) or open another one in the case this PR is already merged.
Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing so I had to add it here).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Inline codecov annotations make the code hard to read, and they add annotations
in files that seemingly have nothing to do with the PR. Sadly, they add a whole
lot of noise and not a lot of benefit over looking at the PR on codecov. We
should just have people look at the coverage on codecov itself.
* New package: py-pyusb
Change-Id: I606127858b961b5841c60befc5a8353df0f9f38c
* fixup dependencies
Change-Id: I0c9b0ccee693d2c4e847717950d4ce64cb319794
* fixup 2
Change-Id: Ibaccbdafd865e363564f491054e4e4ceb778727b
* Update var/spack/repos/builtin/packages/py-pyusb/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
A patch no longer applies cleanly as it's fixed in v4.0.6 - fix it here
```
==> Installing openmpi-4.0.6-in47f6rxspbnyibkdx6x4ekg6piujobd
==> No binary for openmpi-4.0.6-in47f6rxspbnyibkdx6x4ekg6piujobd found: installing from source
==> Fetching https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.6.tar.bz2
Reversed (or previously applied) patch detected! Assume -R? [n]
Apply anyway? [n]
2 out of 2 hunks ignored -- saving rejects to file opal/include/opal/sys/gcc_builtin/atomic.h.rej
```
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
When running executables from build dependencies, we want to avoid having
`LD_PRELOAD` and `DYLD_INSERT_LIBRARIES` mix any of their Spack-built shared
libraries with system libraries.
The Z3 solver provides a Z3Config.cmake file when built using the CMake build
system. This submission changes the package build system to inherit the
CMakePackage type. In addition to changing the build system, this submission:
- Adds the GMP variant
- Removes v4.4.0 and v4.4.1 as CMake was implemented starting with v4.5.0
This adds a package for `irep`, a tool for reading `lua` input decks from
Fortran, C, and C++.
`irep` can be built with either `lua` or `luajit`. To address this, we also add
a virtual package for lua called `lua-lang`. `luajit` isn't, by default, a drop-in
replacement for `lua`, but we add a `+lualinks` variant to it that adds symlinks
that make it behave like `lua@5.1`. With this variant enabled, it provides the
`lua-lang` virtual. `lua` always provides `lua-lang`.
- [x] add `irep` package
- [x] add `+lualinks` variant to `lua-luajit`
- [x] create `lua-lang` virtual, provided by `lua` and `luajit+lualinks`
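A minimal sketch of how the luajit recipe might express this with Spack's `variant` and `provides` directives; the class body, default, and wording below are illustrative, not the actual `package.py`:
```python
# Hypothetical excerpt of a luajit package.py; illustrative only.
from spack import *


class LuaLuajit(Package):
    """Sketch: LuaJIT with a +lualinks variant so it can act like lua@5.1."""

    variant("lualinks", default=False,
            description="Add symlinks so luajit can stand in for lua@5.1")

    # Only with the compatibility symlinks does luajit satisfy the virtual;
    # the plain lua package provides lua-lang unconditionally.
    provides("lua-lang", when="+lualinks")
```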
Co-authored-by: Kayla Richarda Butler <butler59@quartz1148.llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* libdrm: fix one configure error and require libpciaccess
Failure with `LIBS`: the linker can't find `-lrt` so configure fails on
darwin-bigsur %apple-clang@12.0.5
```
>> 22 configure: error: in `/private/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-s
tage/spack-stage-libdrm-2.4.100-ofhk6m25n2pi427ihnxmvjkfmgyzlrqc/spack-src':
>> 23 configure: error: C compiler cannot create executables
24 See `config.log' for more details
See build log for details:
/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-stage/spack-stage-libdrm-2.4.100-ofhk6m25n2pi427ihnxmvjkfmgyzlrqc/spack-build-out.txt
```
* libpciaccess: Mark conflict with darwin
```
make[2]: *** [common_init.lo] Error 1
make[2]: *** Waiting for unfinished jobs....
common_interface.c:75:10: fatal error: 'sys/endian.h' file not found
^~~~~~~~~~~~~~
```
and
```
common_init.c:73:3: error: "Unsupported OS"
```
and others
* extending example for buildcaches
I was attempting to create a local build cache from a directory, and I found the
docs for both buildcaches and mirrors, but did not connect the two to realize that the
url variable could point to the local filesystem. I am extending the docs for
buildcaches with an example of creating and interacting with one on the filesystem
because I suspect other users will run into this need and possibly not find what
they are looking for.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* adding as follows to spack mirror list
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* update url, add all new versions and fix installation
* add wxparaver package and set the old paraver package as deprecated
* remove update of deprecated package
* remove old version from new wxparaver
* Update url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
It is currently kind of confusing for the reader to distinguish spack buildcache install
from spack install, and it is not clear how to use a build cache once a mirror is added.
Hopefully this little bit of description can help (and I hope I got it right!)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Use the 'version_yearlike' attribute instead of 'version' to
check if the SPACK_COMPILER_EXTRA_RPATHS should be set to include
the built-in 'libfabrics'.
When using the bare 'version', the comparison is wrong when
building with 'intel-parallel-studio', which has the version
format '<edition>.YYYY.Nupdate', due to the leading '<edition>'.
xfsprogs currently fails to install with the error message:
FATAL ERROR: could not find a valid ini.h header.
Adding the libinih package, and including it as
a dependency of xfsprogs, seems to fix the issue. It could be
that we only need to add it for newer versions (if it worked before)
and maybe a maintainer can comment on that.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Pagination on GitHub prevents spack from easily parsing all available
versions. Also, due to the recent migration to GitHub, tarballs for
versions up to 3.12.13 have been regenerated, changing the hash.
The current URL will apparently remain supported, so we keep it, and give
the alternative one as a comment.
This should fix #24278
$INSTALLDIR/lib/python3.7/site-packages/IPython/core/events.py contains an
import from backcall even in @7.3.0, so dependency on py-backcall needs
to start earlier.
Restrict poppler version for texlive to poppler@:0.84
Should fix #19946
See also https://github.com/NixOS/nixpkgs/issues/79170
It looks like poppler@0.84 upgraded its header files to use the C++ cstdio
instead of the C stdio.h. Since TeX is using C, not C++, this causes problems.
* zfp: several package improvements
- add variants for build targets, language bindings, backends
- ensure selected variants are compatible with zfp version
- point to GitHub (not LLNL) tar balls
- add dependencies
- update link to homepage
- add maintainers
* zfp: address suggestions by Spack team
- use conflicts() instead of raising exceptions
- use define() and define_from_variant() where applicable
* Apply suggestions from code review
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Fix ZFP OpenMP build.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* [py-keyboard] created template
* [py-keyboard]
- updated homepage
- added dependency for OSX
- added description
- removed fixmes
* [py-keyboard] Until py-pyobjc can be created, specifying conflict with platform=darwin
* [py-keyboard] is verb
* Update of Flecsi Spackage
Update of flecsi spackage to reconcile differences between flecsi@1:1.9
and flecsi@2: for future support purposes
* Removing Unnecessary Conditional
Removing unused conditional. Initially the plan was to switch based on
version in `cmake_args` but this was not necessary as build system
variable names remained mostly the same and conflicts prevent the rest.
For the most part, if a variant is there it does not need to check
against what version of the code is being built.
* Updated CI To Reconcile Flecsi Changes
Updated CI to target flecsi@1.4.2 which best matches the previous
release version and reconciled change in variant name
The common.inc script in TBB uses the environ var 'OS' to determine
the platform it's on. On Linux, this is normally empty and TBB falls
back to uname. But some systems set this to 'CentOS Linux 8' which is
descriptive, but not exactly what common.inc is looking for.
Instead, take the value from python and explicitly set OS to what TBB
expects to avoid this problem.
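A rough sketch of that workaround, assuming the usual `setup_build_environment` hook; the exact value the real tbb recipe exports may differ:
```python
import platform


# Method of the tbb package class (sketch).
def setup_build_environment(self, env):
    # Some distros export OS="CentOS Linux 8", which common.inc does not
    # recognize; override it with the plain platform name from Python so
    # TBB's makefiles take their normal code path.
    env.set("OS", platform.system())
```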
Since the two packages share a common history, the installation
procedure has been factored into a common base class.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* Tcl: fix TCLLIBPATH
* Fix TCL|TK|TIX_LIBRARY paths
* Fix TCL_LIBRARY, no tcl8.6 subdir
* Don't rely on os.listdir sorting
For tcl and tk, we also install the source directory, so there are
two init.tcl and tk.tcl locations. We want the one in lib/lib64,
which should come before the one in share.
* Add more patches
* Fix dylib on macOS
* Tk: add smoke tests
* Tix: add smoke test
Extracting specs from the result of a solve has been factored
into a method on the asp.Result class. The method accounts for
virtual specs being passed as initial requests.
Minimizing compiler mismatches in the DAG and preferring newer
versions of packages are now higher priority than trying to use as
many default values as possible in multi-valued variants.
According to the docs, r is needed for plotting, but plotting is
untested. In addition, the specific version requirement of java for gatk
could lead to multiple installations of r being triggered in an
environment. That might cause people to have to be deliberate about
java in a deployment. All in all, it seems that r is better as a
variant for gatk.
* Set job_id for SGE in darshan-runtime package
* Use a values-based variant for the scheduler
Only one scheduler can be selected at a time, so give the variant an explicit
set of values and set multi=False.
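For illustration, such a variant declaration might look like the following; the scheduler names and default below are assumptions, not the actual darshan-runtime recipe:
```python
# Inside the DarshanRuntime package class (illustrative values).
variant(
    "scheduler",
    default="NONE",
    values=("NONE", "sge", "slurm", "pbs"),
    multi=False,  # exactly one scheduler may be selected
    description="Batch scheduler used to look up the job ID",
)
```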
* hdf-eos5: Fix issue when linking against hdf5+szip (#23411)
Should fix issue #23411 when linking against hdf5+szip
Also fix bug if hdf5 does not depend on zlib
Reluctantly added payerle as a maintainer
Added version 1.1.13
Fixed versions for dependencies based on README.md for package
In particular:
* versions 1.1.x require python@3, at least 3.4 and for 1.1.13 at least 3.6
* py-osqp had been pinned to version 0.4.1, but README.md shows either
no version restriction, or 0.4.1 and higher
* @1.1.13 requires at least 1.1.6 of py-scs
* I am assuming that since 1.1.x is python@3 only, py-six is no longer required
(it does not explicitly show up in README.md for these versions)
Since the module roots were removed from the config file,
`--print-shell-vars` cannot find the module roots anymore. Fix it by
using the new `root_path` function. Moreover, the roots for lmod and
modules seem to have been flipped by accident.
* add versions 2.2.0.2 and 2.2.1.1
* Add maintainer
Added Ishaan as additional maintainer as he is also maintainer of the Python bindings
* add new major precice version as dependency
The VALID_VERSION regex didn't check that the version string was
completely valid, only that a prefix of it was. This change ensures
the entire string represents a valid version.
This makes a few related changes.
1. Make the SEGMENT_REGEX identify *which* arm it matches by what groups
are populated, including whether it's a string or int component or a
separator all at once.
2. Use the updated regex to parse the input once with a findall rather
than twice, once with findall and once with split, since the version
components and separators can be distinguished by their group status.
3. Rather than "convert to int, on exception stay string," if the int
group is set then convert to int, if not then construct an instance
of the VersionStrComponent class
4. VersionStrComponent now implements all of the special string
comparison logic as part of its __lt__ and __eq__ methods to deal
with infinity versions and also overloads comparison with integers.
5. Version now uses direct tuple comparison since it has no per-element
special logic outside the VersionStrComponent class.
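A toy sketch of points 1-4 above, assuming names similar to (but not identical with) the real code in `lib/spack/spack/version.py`; the regex, class, and ordering rule here are simplified for illustration:
```python
import re

# Each match tells us, via which group matched, whether it is an int
# component, a string component, or carries a trailing separator.
SEGMENT_REGEX = re.compile(r"(?:(?P<num>[0-9]+)|(?P<str>[a-zA-Z]+))(?P<sep>[_.\-]*)")


class VersionStrComponent(object):
    def __init__(self, data):
        self.data = data

    def __eq__(self, other):
        return isinstance(other, VersionStrComponent) and self.data == other.data

    def __lt__(self, other):
        # Illustrative rule only: string components compare lexicographically
        # among themselves and sort before plain integers.
        if isinstance(other, int):
            return True
        return self.data < other.data


def parse_version(version_string):
    """Single findall pass: integer groups become ints, string groups become
    VersionStrComponent instances, separators are discarded."""
    components = []
    for num, string, _sep in SEGMENT_REGEX.findall(version_string):
        components.append(int(num) if num else VersionStrComponent(string))
    return tuple(components)
```
With this shape, a version compares as a plain tuple, and all of the special-case string logic lives in `VersionStrComponent`.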
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* New Package:py-haphpipe@1.0.3
* removed llvm restrict. & changed freebayes
* Style fix
* Removed pip, wheel, added url for deps list
* used proper gsutil naming
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* url src for deps, samtools fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* petsc: add hip variant
* libceed: add 0.8, disable occa by default, and let autodetect AVX
Disabling OCCA because backend updates did not make this release and
there are some known bugs so most users won't have reason to use OCCA.
https://github.com/CEED/libCEED/pull/688
* WIP: ceed: 4.0 release
* MFEM package updates (#19748)
* MFEM package updates
* mfem: flake8
* [mfem] Various fixes and tweaks.
[arpack-ng] Add a patch to fix building with IBM XL Fortran.
[libceed] Fix building with IBM XL C/C++.
[pumi] Add C++11 flag for version 2.2.3.
* [mfem] Fix the shared CUDA build.
Reported by: @MPhysXDev
* [mfem] Fix a TODO item
* [mfem] Tweak the AmgX dependencies
* [suite-sparse] Fix the version of the mpfr dependency
* MFEM: add initial HIP support using the ROCmPackage.
* MFEM: add 'slepc' variant.
* MFEM: update the patch for v4.2 for SLEPc.
* mfem: apply 'mfem-4.2-slepc.patch' just to v4.2.
* ceed: apply 'spack style'
* [mfem] Add a patch for mfem v4.2 to work with petsc v3.15.0.
[laghos] Add laghos version 3.1 based on the latest commit in
the repository; this version works with mfem v4.2.
[ceed] For ceed v4.0 use laghos v3.1.
* [libceed] Explicitly set 'CC_VENDOR=icc' when using 'intel'
compiler.
* [mfem] Allow pumi >= 2.2.3 with mfem >= 4.2.0.
[ceed] Use pumi v2.2.5 with ceed v4.0.0.
* [ceed] Explicitly use occa v1.1.0 with ceed v4.0.0.
Use mfem@4.2.0+rocm with ceed@4.0.0+mfem+hip.
* [ceed] Add NekRS v21 as a dependency for ceed v4.0.0.
* [ceed] Fix NekRS version: 21 --> 21.0
* [ceed] Propagate +cuda variant to petsc for ceed v4.0.
* [mfem] Propagate '+rocm' variant to some other packages.
* [ceed] Use +rocm variant of nekrs instead of +hip.
* [ceed] Do not enable magma with ceed@4.0.0+hip.
* [libceed] Fix hip build with libceed@0.8.
* [laghos] For v3.1, use the release .tar.gz file instead of commit.
* Remove cuda & hip variants as they are inherited
* [ceed] Remove comments and FIXMEs about 'magma+hip'.
* [ceed] [libceed] Remove TODOs about occa + hip.
* libceed: use ROCmPackage and +rocm
* petsc: use ROCmPackage for HIP
* libceed, petsc: use CudaPackage
* ceed: forward cuda_arch and amdgpu_target
* [mfem] Use Spack's CudaPackage as a base class; as a result,
'cuda_arch' values should not include the 'sm_' prefix.
Also, propagate 'cuda_arch' and 'amdgpu_target' variants
to enabled dependencies.
* petsc: variant is +rocm, package name is hip
Co-authored-by: Jed Brown <jed@jedbrown.org>
Co-authored-by: Thilina Rathnayake <thilinarmtb@gmail.com>
Passing absolute paths from pipeline generate job to downstream rebuild jobs
causes problems when the CI_PROJECT_DIR is not the same for the generate and
rebuild jobs. This has happened, for example, when gitlab checks out the
project into a runner-specific directory and different runners are chosen
for the generate and rebuild jobs.
* ensure that the stage root exists for `spack stage -p <PATH>`
* add test to verify `spack stage -p <PATH>` works!
* move out shared tmp staging path setup to a fixture to fix the test
* Simplified the spack.util.gpg implementation
All the classes defined in this Python module,
which were previously used to construct singleton
instances, have been removed in favor of four
global variables. These variables are initialized
lazily, like before.
The API of the module is unchanged for the
most part. A few tests have been modified to use
the new global names.
1. add version 2021.05.15.
2. add patch to build old revs with gcc 11.x; version 2021.05.15
already has the patch integrated, fixes #23667.
3. add variant +debug to build unoptimized, debug version.
4. add variant +viewer to include hpcviewer and add viewer path to
hpctoolkit module.
5. add dependency on memkind to workaround a glibc problem found on
some Cray platforms.
For me the buildcache force overwrite option does not work. It tries to
delete a file, but errors with a key error, apparently because the
leading / has to be removed.
* util.tty.log: read up to 100 lines if ready
Rework to read up to 100 lines from the captured stdin as long as data
is ready to be read immediately. Adds a helper function to poll with
`select` for ready data. This showed a roughly 5-10x perf improvement
for high-rate writes through the logger with relatively short lines.
* util.tty.log: Defer flushes to end of ready reads
Rather than flush per line, flush per set of reads. Since this is a
non-blocking loop, the total perceived wait is short.
* util.tty.log: only scan each line once, usually
Rather than always find all control characters then substitute them all,
use `subn` to count the number of control characters replaced. Only if
control characters exist find out what they are. This could be made
truly single pass with sub with a function, but it's a more intrusive
change and this got 99%ish of the performance improvement (roughly
another 2x in some cases).
* util.tty.log: remove check for `readable`
Python < 3 does not support a readable check on streams, should not be
necessary here since we control the only use and it's explicitly a
stream to be read.
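A minimal sketch of the select-based "read up to 100 lines if ready" loop described in the first item above; the function name and limit handling are illustrative, not the actual util.tty.log code:
```python
import select


def _read_ready_lines(stream, max_lines=100):
    """Read lines only while data is immediately available, never blocking."""
    lines = []
    while len(lines) < max_lines:
        ready, _, _ = select.select([stream], [], [], 0)  # zero-timeout poll
        if not ready:
            break
        line = stream.readline()
        if not line:  # EOF
            break
        lines.append(line)
    return lines
```
The caller can then flush once per batch of lines rather than once per line, which is where most of the perceived speedup comes from.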
* e4s ci: enable full e4s
* add llvm-amdgpu to list of specs needing an xlarge tagged runner
* comment out qt and qwt because of intermittent build failures
* remove +rocm specs because rocblas job consistently fails due to infrastructure
* qt: skip multimedia when ~opengl
On 5.9 on macOS the multimedia option causes build errors; on other
platforms and versions it should probably be assumed inoperative anyway.
* qt: Omit flags when disabling multimedia
```
ERROR: Unknown command line option '-no-pulseaudio'.
```
* Work around another qt@5.9 error
* qt: Fix build error on darwin
This PR allows users to `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated that include a warning that this usually does not
need to be done.
This addresses an issue brought up in slack, and also represented in #14721.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the config section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
* New Package:py-ucsf-pyem
* Dep additions, run env deletion
* extraction step change
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
### Overview
The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes here which aim to improve reproducibility are:
1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts. This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.
To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:
- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build
#### Some highlights
One consequence of this change will be much smaller pipeline yaml files. By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.
Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace. With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows. If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.
There are some changes to be aware of in how pipelines should be set up after this PR:
#### Pipeline generation
Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile. This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts. If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:
```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
+ --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+ - "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags: ["spack", "public", "medium", "x86_64"]
interruptible: true
```
Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`. This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.
#### Rebuild jobs
Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts. When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs. The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`. The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.
When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment. If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate. No other changes to existing custom rebuild scripts should be required as a result of this PR.
As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI. The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`. If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:
```
To reproduce this build locally, run:
spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```
When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you can copy-paste to run a local reproducer of the CI job.
This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.
This PR relies on
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support install a single spec already present in the active, concrete environment
* embree: allow for compiling with gcc 7.3
strip out unsupported -mprefer-vector-width=256
* embree: fix build on AMD CPUs
The ISAs that embree is compiled for have to match the CPU
features enabled by the compiler, as embree derives the ISA
that it compiles for from the latter.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.
CloudFront is 16x faster (or more) than the old bucket.
- [x] change mirror to https://mirror.spack.io
* New package:py-coveralls
* dep fixes
* added python constraint
* pyyaml version constraint
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- [x] add `in_buildcache` field to DB records to indicate what parts of an index,
which includes roots and dependencies, are in the buildcache.
- [x] add `mark()` method to DB for setting values on single nodes of the DAG.
This also fixes the build with %gcc@11:. According to upstream, the
proper solution is to disable -Werror=array-bounds since the stable
branch will not receive a patch for newer compilers.
* Update py-pint and fix runtime dependency on setuptools
Without the runtime dependency on setuptools, importing pint yields:
0.11:
ModuleNotFoundError: No module named 'pkg_resources'
0.17:
ModuleNotFoundError: No module named 'packaging'
* Fix
* Address comments
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a spack blame --json argument,
along with developer docs for it
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
This work will come in two phases. The first here is to allow saving of a local result
with spack monitor, and the second will add a spack monitor command so the user can
do spack monitor upload.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently if one package does `depends_on('pkg default_library=shared')`
and another does `depends_on('pkg default_library=both')`, you'd get a
concretization error.
With this PR one package can do `depends_on('pkg default_library=shared')`
and another `depends_on('pkg default_library=static')`, and they will concretize to
`pkg default_library=shared,static`
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Package update to version 1.0.2
* switched submodule boolean to string
* switched from string to bools
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- Changed to cmake package with backward compatibility with older
makefile
- Removed unused cmake variable 'blas_blas_libs'
- Added new version 5.2.2, which changes to an external blas variable
- Remove unused tcsh dependency
- Change URL to use git repository for current and future versions
- Add older 4.2 version
- Add conflict for older versions with apple-clang
This adds RHEL8's `/usr/libexec/platform-python` to Spack's list of preferred
pythons. It will only be used if no other `python` is available in the `PATH`.
We have been testing with this python for a while now, and it seems to do all
that we need. If Spack one day isn't able to work with it, we'll take it out,
but for now it is useful to allow Spack to be used on RHEL8 without a dedicated
`python` installation.
Spack doesn't require users to manually index their repos; it reindexes the indexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.
When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`). That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations. It's a win now to make `Repo.exists()` check files directly.
**Fix:**
This PR does a number of things to speed up `spack load`, `spack info`, and other commands:
- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
(avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
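For the first checklist item, the direct check might look roughly like this; the helper name is borrowed from Spack's `Repo` class, but treat the snippet as a sketch rather than the actual diff:
```python
import os


# Method on spack.repo.Repo (sketch).
def exists(self, pkg_name):
    # Check for the package file directly rather than consulting (and
    # possibly rebuilding) the full repository index.
    return os.path.exists(self.filename_for_package_name(pkg_name))
```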
The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).
Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, it is unnecessary to use such functions, since
we don't need to bind a custom name to a module, nor do we have to load
it from an unknown location.
This PR thus modifies spack.hook in the following ways:
- Use __import__ instead of spack.util.imp.load_source (this
addresses #20005)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Marked with a leading underscore all the names that are supposed
to stay local
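A rough sketch of the `__import__`-based loading this describes; the directory handling and function name are illustrative:
```python
import os


def _load_hook_modules(hooks_dir):
    """Import every hook module under spack.hooks by its plain file name."""
    modules = []
    for fname in sorted(os.listdir(hooks_dir)):
        if fname.endswith(".py") and not fname.startswith("_"):
            name = fname[:-3]
            # __import__ returns the top-level package unless the submodule
            # is requested explicitly via fromlist.
            modules.append(__import__("spack.hooks." + name, fromlist=[name]))
    return modules
```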
fixes #22786
Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.
We remove system paths from search variables like PATH and
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages
may be installed to prefixes that are not actually system paths
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.
This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.
This makes a similar change to spack cd.
If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).
Co-authored-by: Gregory Becker <becker33@llnl.gov>
fixes #22294
A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
custom install tree not being taken into account if clingo
had to be bootstrapped.
This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.
This led to unexpected results, like preferring to build a
new version of an external if the external version was
older.
* clingo: modify recipe for bootstrapping
Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping
* Remove option that breaks on linux
* Give more hints for the current Python
* Disable CLINGO_BUILD_PY_SHARED for bootstrapping
* bootstrapping: try to detect the current python from std library
This is much faster than calling external executables
* Fix compatibility with Python 2.6
* Give hints on which compiler and OS to use when bootstrapping
This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.
* Use spec_for_current_python to constrain module requirement
(cherry picked from commit d5fa509b07)
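A hedged sketch of detecting the running interpreter from the standard library, as the "detect the current python" item above describes; the real `spec_for_current_python` helper may use a different level of precision:
```python
import sys


def spec_for_current_python():
    # Using the interpreter's own version info avoids shelling out to an
    # external python executable; major.minor is used here for illustration.
    version = ".".join(str(part) for part in sys.version_info[:2])
    return "python@{0}".format(version)
```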
* ASP-based solver: avoid adding values to variants when they're set
fixes #22533, fixes #21911
Added a rule that prevents any value from slipping into a variant when the
variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.
* Ensure disjoint sets have a clear semantics for external packages
fixes #22547
SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations of config files.
In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.
This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.
We add one special case for this: dependencies of externals.
We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.
- [x] change asp.py so that this can't happen -- we now only generate
dependency types for possible dependencies.
This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.
Given some directives in a package like:
```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```
We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:
```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
attr(Name, Arg1) : required_dependency_condition(ID, Name, Arg1);
attr(Name, Arg1, Arg2) : required_dependency_condition(ID, Name, Arg1, Arg2);
attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
dependency_condition(ID, Parent, Dependency);
node(Parent).
```
And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:
```prolog
attr(Name, Arg1, Arg2) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2).
attr(Name, Arg1, Arg2, Arg3) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```
These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.
Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:
```prolog
condition_holds(ID) :-
condition(ID);
attr(Name, A1) : condition_requirement(ID, Name, A1);
attr(Name, A1, A2) : condition_requirement(ID, Name, A1, A2);
attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).
attr(Name, A1) :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```
this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:
```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```
The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:
```prolog
dependency_holds(Package, Dependency, Type) :-
dependency_condition(ID, Package, Dependency),
dependency_type(ID, Type),
condition_holds(ID).
```
Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:
```python
def package_dependencies_rules(self, pkg, tests):
"""Translate 'depends_on' directives into ASP logic."""
for _, conditions in sorted(pkg.dependencies.items()):
for cond, dep in sorted(conditions.items()):
condition_id = self.condition(cond, dep.spec, pkg.name) # create a condition and get its id
self.gen.fact(fn.dependency_condition( # associate specifics about the dependency w/the id
condition_id, pkg.name, dep.spec.name
))
# etc.
```
- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals
This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.
(cherry picked from commit 413c422e53)
* Allow the bootstrapping of clingo from sources
Allow python builds with system python as external
for MacOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users from setting the install tree root to the bootstrap store
* clingo: documented how to bootstrap from sources
Co-authored-by: Gregory Becker <becker33@llnl.gov>
(cherry picked from commit 10e9e142b7)
Bash has a builtin `fc` that will override the compiler if you use "fc",
so it's better to use the full spack-supplied compiler path.
Additionally, the filter regex in the docs was wrong: it replaced the
entire assignment operation with the RHS.
* py-kubernetes: add new package
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-kubernetes: remove alpha/beta versions, fix dependency types
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR updates the abinit package. The underlying build system has
several changes from previous versions, which are reflected in the
package recipe.
- added version 9.4.2
- removed commented out code
- add new libxml2 variant, with dependency and conflicts
- add dependency on atompaw
- depend on fftw-api when ~openmp
This allows other fftw implementations to be used. This PR adds MKL.
- depend on netcdf explicitly
- remove hdf5 variant as hdf5 is required
- only use wannier90 if +mpi as the wannier90 spack package is MPI only
- allow newer versions of libxc for abinit 9
- split configure options for versions before and after abinit 9
- always use MPI compiler wrappers
- add patch to remove march settings for version 9
- Set conflict for fftw~openmp if abinit+openmp
This allows the virtual fftw-api to be used for the dependency. If fftw
is the fftw-api provider then bail if fftw~openmp is set when
abinit+openmp is used.
- Set conflicts for +openmp and mkl
- Be explicit about +mkl for intel-parallel-studio
- Add TODO entry for switching conflicts/depends_on logic
* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo
package builds in Release mode
uses GCC options to link libstdc++ and libgcc statically
* clingo-bootstrap: apple-clang options to bootstrap statically on darwin
* clingo: fix the path of the Python interpreter
In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.
This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.
* clingo: the commit for "spack" version has been updated.
Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.
- [x] make `master` the default so we don't have to keep telling people
to install `clingo@master`. We'll update the preferred version when
there's a new release.
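As a sketch of the `Python_EXECUTABLE` hint mentioned in the interpreter-path fix above, assuming the recipe is a `CMakePackage` with a `+python` variant (both are assumptions here, not quoted from the actual recipe):
```python
# Method of the clingo package class (sketch).
def cmake_args(self):
    args = []
    if "+python" in self.spec:
        # Point CMake at the exact interpreter from the spec, so it cannot
        # pick up a different Python living in the same prefix.
        args.append(
            self.define("Python_EXECUTABLE", self.spec["python"].command.path)
        )
    return args
```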
* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
the specs to be fetched, even when in an environment
* now: if no specs are provided to `spack fetch`, we check
whether an environment is active, and if so we fetch all of its
uninstalled specs.
This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.
This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated
(cherry picked from commit 4558dc06e2)
The context manager can be used to swap the current
configuration temporarily, for any use case that may need it.
(cherry picked from commit 553d37a6d6)
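A minimal sketch of such a context manager, assuming a single module-level configuration object; the real implementation in spack.config manages more state than shown here:
```python
import contextlib

_active_config = None  # stand-in for the module-level configuration


@contextlib.contextmanager
def use_configuration(temporary_config):
    """Swap the active configuration for the duration of the with-block."""
    global _active_config
    saved, _active_config = _active_config, temporary_config
    try:
        yield temporary_config
    finally:
        _active_config = saved
```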
The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.
Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.
Make a few adjustments to MockPackageMultiRepo, since its
docstring stated that it was supposed to mock
spack.repo.Repo while it was instead mocking spack.repo.RepoPath.
(cherry picked from commit 1a8963b0f4)
The clingo-cffi job has two issues to be solved:
1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/
The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".
The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).
For the time being run the tests in a container. Switch back to
PyPI whenever a new official version of clingo is released.
* Support clingo when used with cffi
Clingo recently merged in a new Python module option based on cffi.
Compatibility with this module requires a few changes to spack - it does not automatically convert strings/ints/etc to Symbol, and clingo.Symbol.string throws on failure.
- manually convert str/int to clingo.Symbol types
- catch stringify exceptions
- add job for clingo-cffi to Spack CI
- switch to potassco-vendored wheel for clingo-cffi CI
- on_unsat argument when cffi
(cherry picked from commit 93ed1a410c)
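A hedged sketch of the manual str/int conversion listed above, using the `clingo.Number` and `clingo.String` constructors from the clingo Python API; the helper name and bool handling are illustrative:
```python
import clingo


def to_symbol(value):
    """Convert a plain Python value into a clingo.Symbol."""
    # bool is a subclass of int, so handle it first and stringify it.
    if isinstance(value, bool):
        return clingo.String(str(value))
    if isinstance(value, int):
        return clingo.Number(value)
    return clingo.String(str(value))
```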
* Improve error message for inconsistencies in package.py
Sometimes directives refer to variants that do not exist.
Make it such that:
1. The name of the variant
2. The name of the package which is supposed to have
such variant
3. The name of the package making this assumption
are all printed in the error message for easier debugging.
* Add unit tests
(cherry picked from commit 7226bd64dc)
The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.
Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.
(cherry picked from commit ba42c36f00)
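A minimal sketch of what a single-fact-per-call method can look like; the output stream and symbol handling here are assumptions, not the actual asp.py code:
```python
# Method of the fact emitter (sketch).
def fact(self, head):
    """Emit exactly one ASP fact for the given function symbol."""
    symbol = head.symbol() if hasattr(head, "symbol") else head
    self.out.write("{0}.\n".format(str(symbol)))
```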
about: Report a bug in the core of Spack (command not working as expected, etc.)
labels: "bug,triage"
---
<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran `spack find` to list all the installed packages and ..." -->
### Steps to reproduce the issue
```console
$ spack <command1> <spec>
$ spack <command2> <spec>
...
```
### Error Message
<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->
```console
$ spack --debug --stacktrace <command>
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have searched the issues of this repo and believe this is not a duplicate
- [ ] I have run the failing commands in debug mode and reported the output
<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack! -->
description: Report a bug in the core of Spack (command not working as expected, etc.)
labels: [bug, triage]
body:
  - type: textarea
    id: reproduce
    attributes:
      label: Steps to reproduce
      description: |
        Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
        Example: "I ran `spack find` to list all the installed packages and ..."
      placeholder: |
        ```console
        $ spack <command1> <spec>
        $ spack <command2> <spec>
        ...
        ```
    validations:
      required: true
  - type: textarea
    id: error
    attributes:
      label: Error message
      description: |
        If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect.
      placeholder: |
        ```console
        $ spack --debug --stacktrace <command>
        ```
  - type: textarea
    id: information
    attributes:
      label: Information on your system
      description: Please include the output of `spack debug report`
    validations:
      required: true
  - type: markdown
    attributes:
      value: |
        If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well.
  - type: checkboxes
    id: checks
    attributes:
      label: General information
      options:
        - label: I have run `spack debug report` and reported the version of Spack/Python/Platform
          required: true
        - label: I have searched the issues of this repo and believe this is not a duplicate
          required: true
        - label: I have run the failing commands in debug mode and reported the output
          required: true
  - type: markdown
    attributes:
      value: |
        We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
        If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
        Other than that, thanks for taking the time to contribute to Spack!
about: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: "build-error"
---
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install <spec>
...
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-out.txt]()
* [spack-build-env.txt]()
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [ ] I have uploaded the build log and environment files
- [ ] I have searched the issues of this repo and believe this is not a duplicate
description: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: [build-error]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report this build failure. To proceed with the report please:
        1. Title the issue `Installation issue: <name-of-the-package>`.
        2. Provide the information required below.
        We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
  - type: textarea
    id: reproduce
    attributes:
      label: Steps to reproduce the issue
      description: |
        Fill in the exact spec you are trying to build and the relevant part of the error message
      placeholder: |
        ```console
        $ spack install <spec>
        ...
        ```
    validations:
      required: true
  - type: textarea
    id: information
    attributes:
      label: Information on your system
      description: Please include the output of `spack debug report`
    validations:
      required: true
  - type: markdown
    attributes:
      value: |
        If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well.
  - type: textarea
    id: additional_information
    attributes:
      label: Additional information
      description: |
        Please upload the following files:
        * **`spack-build-out.txt`**
        * **`spack-build-env.txt`**
        They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
  - type: markdown
    attributes:
      value: |
        Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and **@mention** them here if they exist.
  - type: checkboxes
    id: checks
    attributes:
      label: General information
      options:
        - label: I have run `spack debug report` and reported the version of Spack/Python/Platform
          required: true
        - label: I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
          required: true
        - label: I have uploaded the build log and environment files
          required: true
        - label: I have searched the issues of this repo and believe this is not a duplicate
about: Suggest adding a feature that is not yet in Spack
labels: feature
---
<!--*Please add a concise summary of your suggestion here.*-->
### Rationale
<!--*Is your feature request related to a problem? Please describe it!*-->
### Description
<!--*Describe the solution you'd like and the alternatives you have considered.*-->
### Additional information
<!--*Add any other context about the feature request here.*-->
### General information
- [ ] I have run `spack --version` and reported the version of Spack
- [ ] I have searched the issues of this repo and believe this is not a duplicate
<!--If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack! -->
description: Suggest adding a feature that is not yet in Spack
labels: [feature]
body:
  - type: textarea
    id: summary
    attributes:
      label: Summary
      description: Please add a concise summary of your suggestion here.
    validations:
      required: true
  - type: textarea
    id: rationale
    attributes:
      label: Rationale
      description: Is your feature request related to a problem? Please describe it!
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Describe the solution you'd like and the alternatives you have considered.
  - type: textarea
    id: additional_information
    attributes:
      label: Additional information
      description: Add any other context about the feature request here.
  - type: checkboxes
    id: checks
    attributes:
      label: General information
      options:
        - label: I have run `spack --version` and reported the version of Spack
          required: true
        - label: I have searched the issues of this repo and believe this is not a duplicate
          required: true
  - type: markdown
    attributes:
      value: |
        If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
        Other than that, thanks for taking the time to contribute to Spack!
A module folder with a :class:`~spack.analyzers.analyzer_base.AnalyzerBase`
that provides base functions to run, save, and (optionally) upload analysis
results to a `Spack Monitor <https://github.com/spack/spack-monitor>`_ server.
tarball URLs.
:mod:`spack.error`
:class:`~spack.error.SpackError`, the base class for
Spack's exception hierarchy.
:mod:`llnl.util.tty`
-----------------
Writing analyzers
-----------------
To write an analyzer, you should add a new Python file to the
analyzers module directory at ``lib/spack/spack/analyzers``.
Your analyzer should be a subclass of the :class:`AnalyzerBase <spack.analyzers.analyzer_base.AnalyzerBase>`. For example, if you want
to add an analyzer class ``Myanalyzer`` you would write to
``spack/analyzers/myanalyzer.py`` and import and
use the base as follows:
.. code-block:: python
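
   # A minimal sketch, assuming the AnalyzerBase interface described in this
   # guide; the class attributes below are illustrative, not definitive.
   from spack.analyzers.analyzer_base import AnalyzerBase


   class Myanalyzer(AnalyzerBase):

       name = "myanalyzer"
       description = "a hypothetical analyzer used as an example"

       def run(self):
           # Return results keyed by the analyzer name (see below).
           return {self.name: []}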
Note that the class name is your module file name, all lowercase
except for the first capital letter. You can look at other analyzers in
that analyzer directory for examples. The guide here will tell you about the basic functions needed.
^^^^^^^^^^^^^^^^^^^^^^^^^
Analyzer Output Directory
^^^^^^^^^^^^^^^^^^^^^^^^^
By default, when you run ``spack analyze run`` an analyzer output directory will
be created in your spack user directory in your ``$HOME``. The reason we output here
is because the install directory might not always be writable.
.. code-block:: console

   ~/.spack/
      analyzers
Result files will be written here, organized in subfolders in the same structure
as the package, with each analyzer owning its own subfolder. For example:
   │   └── spack-analyzer-install-files.json
   └── libabigail
       └── lib
           └── spack-analyzer-libabigail-libz.so.1.2.11.xml
Notice that for the libabigail analyzer, since results are generated per object,
we honor the object's folder in case there are equivalently named files in
different folders. The result files are typically written as json so they can be easily read and uploaded in a future interaction with a monitor.
and then return the object with a key as the analyzer name. The result data
should be a list of objects, each with a name, ``analyzer_name``, ``install_file``,
and one of ``value`` or ``binary_value``. The install file should be a relative
path, not an absolute path. For example, let's say we extract a metric called
``metric`` for ``bin/wget`` using our analyzer ``thebest-analyzer``.
We might have data that looks like this:
.. code-block:: python
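
   # A hedged sketch of the result payload described above; the values are
   # illustrative, not output from a real analyzer run.
   result = {
       "thebest-analyzer": [
           {
               "name": "metric",
               "analyzer_name": "thebest-analyzer",
               "install_file": "bin/wget",
               "value": "1",
           }
       ]
   }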
^^^^^^^^^^^^^^^^^^^^^^^
Saving Analyzer Results
^^^^^^^^^^^^^^^^^^^^^^^
The analyzer will have ``save_result`` called with the generated result object
to save it to the filesystem and, if the user has added the ``--monitor`` flag,
to upload it to a monitor server. If your result follows an accepted result
format and you don't need to parse it further, you don't need to add this
function to your class. However, if your result data is large or otherwise
needs additional parsing, you can define it. If you define the function, it
is useful to know about the ``output_dir`` property, which you can join
Notice that this function, if you define it, requires a result object (generated by
``run()``), a monitor (if you want to send results), and a boolean ``overwrite`` used
to check whether a result exists first, and not write to it if it exists and
``overwrite`` is False. Also notice that since we already saved these files to the analyzer metadata folder, we return early if a monitor isn't defined, because this function serves to send results to the monitor. If you haven't saved anything to the analyzer metadata folder
yet, you might want to do that here. You should also use ``tty.info`` to give
the user a message of "Writing result to $DIRNAME."
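As a rough sketch, a ``save_result`` for our hypothetical analyzer might look like
the following (the signature, output file name, and JSON serialization are
assumptions based on the description above, not the exact Spack API):

.. code-block:: python

   import json
   import os

   import llnl.util.tty as tty

   from spack.analyzers.analyzer_base import AnalyzerBase


   class Myanalyzer(AnalyzerBase):
       # ... name, description, and run() as sketched earlier ...

       def save_result(self, result, overwrite=False):
           """Save the result to the analyzer output directory."""
           outfile = os.path.join(self.output_dir, "spack-analyzer-myanalyzer.json")
           if os.path.exists(outfile) and not overwrite:
               return
           tty.info("Writing result to %s" % self.output_dir)
           with open(outfile, "w") as fd:
               json.dump(result, fd)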
types of hooks in the ``__init__.py``, and then python files in that folder
can use hook functions. The files are automatically parsed, so if you write
a new file for some integration (e.g., ``lib/spack/spack/hooks/myintegration.py``)
you can then write hook functions in that file that will be automatically detected,
and run whenever your hook is called. This section will cover the basic kinds
of hooks, and how to write them.
^^^^^^^^^^^^^^
Types of Hooks
^^^^^^^^^^^^^^
The following hooks are currently implemented to make it easy for you,
the developer, to add hooks at different stages of a Spack install or similar.
If there is a hook that you would like that is missing, you can propose adding a new one.
"""""""""""""""""""""
@@ -632,9 +623,9 @@ If there is a hook that you would like and is missing, you can propose to add a
"""""""""""""""""""""
A ``pre_install`` hook is run within an install subprocess, directly before
the install starts. It expects a single argument of a spec and is run in
a multiprocessing subprocess. Note that if you see ``pre_install`` functions associated with packages, these are not hooks
as we have defined them here, but rather callback functions associated with
a package install.
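A minimal sketch of such a hook, assuming the single-spec signature described
above (the module name is hypothetical):

.. code-block:: python

   # lib/spack/spack/hooks/myintegration.py

   def pre_install(spec):
       """Runs in the install subprocess right before the install starts."""
       print("about to install {0}".format(spec.name))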
""""""""""""""""""""""""""
``on_install_start(spec)``
""""""""""""""""""""""""""
This hook is run at the beginning of ``lib/spack/spack/installer.py``,
in the install function of a ``PackageInstaller``,
and importantly is not part of a build process, but comes before it. This is when
we have just newly grabbed the task and are preparing to install. If you
write a hook of this type, you should provide the spec to it.
.. code-block:: python

   def on_install_start(spec):
       """On start of an install, we want to...
       """
       print('on_install_start')
""""""""""""""""""""""""""""
``on_install_success(spec)``
to trigger after anything is written to a logger. You would add it as follows:

.. code-block:: python

   post_install = HookRunner('post_install')

   # hooks related to logging
   post_log_write = HookRunner('post_log_write')  # <- here is my new hook!
You then need to decide what arguments your hook would expect. Since this is
related to logging, let's say that you want a message and level. That means
In this example, we use it outside of a logger that is already defined:
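What follows is a minimal sketch rather than the original example: the
``message`` and ``level`` argument names and the call shown in the comment are
assumptions based on the description above.

.. code-block:: python

   def post_log_write(message, level):
       """Do something whenever a line has been written to the log."""
       if level <= 3:
           print("logged: {0}".format(message))

   # With the HookRunner registered as above, the logger would call the hook
   # roughly like this:
   #
   #     spack.hooks.post_log_write(message, level)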
This is not to say that this would be the best way to implement an integration
with the logger (you'd probably want to write a custom logger, or you could
have the hook defined within the logger), but it serves as an example of writing a hook.
----------
Unit tests
----------

------------
Unit testing
------------

---------------------
Developer environment
---------------------
.. warning::

   This is an experimental feature. It is expected to change and you should
   not use it in a production environment.
When installing a package, Spack currently supports exporting environment
variables to add debug flags to the build. By default, a package
install will build without any debug flags. However, if you want to add them,
you can export:
.. code-block:: console

   export SPACK_ADD_DEBUG_FLAGS=true
   spack install zlib
If you want to add custom flags, you should export an additional variable:
.. code-block:: console

   export SPACK_ADD_DEBUG_FLAGS=true
   export SPACK_DEBUG_FLAGS="-g"
   spack install zlib
These environment variables will eventually be integrated into Spack so
that they can be set from the command line.
------------------
Developer commands
------------------
^^^^^^^^^^^^^
``spack doc``
^^^^^^^^^^^^^
.. _cmd-spack-style:
^^^^^^^^^^^^^^^
``spack style``
^^^^^^^^^^^^^^^
``spack style`` exists to help the developer check imports and style with
mypy, flake8, isort, and (soon) black. To run all style checks, simply do:
.. code-block:: console

   $ spack style
To run automatic fixes for isort you can do:
.. code-block:: console

   $ spack style --fix
You do not need any of these Python packages installed on your system for
the checks to work! Spack will bootstrap and install them from packages for
your use.
^^^^^^^^^^^^^^^^^^^
``spack unit-test``
^^^^^^^^^^^^^^^^^^^
.. _cmd-spack-url:
^^^^^^^^^^^^^^^
``spack blame``
^^^^^^^^^^^^^^^
``spack blame`` is a way to quickly see contributors to packages or files
in the Spack repository. You should provide a target package name or
file name to the command. Here is an example asking to see contributions
for the package "python":
.. code-block:: console

   $ spack blame python
   LAST_COMMIT    LINES    %      AUTHOR          EMAIL
   2 weeks ago    3        0.3    Mickey Mouse    <cheddar@gmouse.org>
   a month ago    927      99.7   Minnie Mouse    <swiss@mouse.org>
   2 weeks ago    930      100.0
By default, you will get a table view (shown above) sorted by date of contribution,
with the most recent contribution at the top. If you want to sort instead
by percentage of code contribution, then add ``-p``:
.. code-block:: console

   $ spack blame -p python
And to see the git blame view, add ``-g`` instead:
.. code-block:: console

   $ spack blame -g python
Finally, to get a JSON export of the data, add ``--json``:

.. code-block:: console

   $ spack blame --json python
^^^^^^^^^^^^^
``spack url``
^^^^^^^^^^^^^