Updates to improve Spack-generated modules for Intel oneAPI compilers:
* intel-oneapi-compilers now sets CC etc. (see the sketch after this list)
* Add a new package intel-oneapi-compilers-classic which can be used to
generate a module which sets CC etc. to older compilers (e.g. icc)
* lmod module logic now updated to treat the intel-oneapi-compilers*
packages as compilers
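A minimal sketch of the first item's idea (binary paths are illustrative; the real package derives them from the oneAPI component layout):
```python
from spack.package import *

class IntelOneapiCompilers(Package):
    def setup_run_environment(self, env):
        # Loading the generated module makes the oneAPI compilers the
        # default toolchain for anything built in that session.
        env.set("CC", self.prefix.bin.icx)
        env.set("CXX", self.prefix.bin.icpx)
        env.set("F77", self.prefix.bin.ifx)
        env.set("FC", self.prefix.bin.ifx)
```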
* acts-dd4hep: new package, separated from new acts@19.1.0
* acts-dd4hep: improved versioning
* acts-dd4hep: don't use curl | sha256sum
* acts: new variant `odd` for Open Data Detector
* acts-dd4hep: style changes
Add spack stacks targeted at Spack + AWS + ARM HPC User Group hackathon. Includes
a list of miniapps and full-apps that are ready to run on both x86_64 and aarch64.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Add two new stacks targeted at x86_64 and arm, representing an initial list of packages
used by current and planned AWS Workshops, and built in conjunction with the ISC22
announcement of the spack public binary cache.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Explicitly import package utilities in all packages, and corresponding fallout.
This includes:
* rename `spack.package` to `spack.package_base`
* rename `spack.pkgkit` to `spack.package`
* update all packages in builtin, builtin_mock and tutorials to include `from spack.package import *`
* update spack style
* ensure packages include the import
* automatically add the new import and remove any/all imports of `spack` and `spack.pkgkit`
from packages when using `--fix`
* add support for type-checking packages with mypy when SPACK_MYPY_CHECK_PACKAGES
is set in the environment
* fix all type checking errors in packages in spack upstream
* update spack create to include the new imports
* update spack repo to inject the new import, injection persists to allow for a deprecation period
Original message below:
As requested by @adamjstewart, update all packages to use pkgkit. I ended up using isort to do this,
so repro is easy:
```console
$ isort -a 'from spack.pkgkit import *' --rm 'spack' ./var/spack/repos/builtin/packages/*/package.py
$ spack style --fix
```
There were several line spacing fixups caused either by space manipulation in isort or by packages
that haven't been touched since we added requirements, but there are no functional changes in here.
* [x] add config to isort to make sure this is maintained going forward
Preferred targets are currently the only minimization criterion for which Spack allows
negative values. That means Spack may be incentivized to add nodes to the DAG if they
match the preferred target.
This PR re-norms the minimization criteria so that preferred targets are weighted from 0,
and default target weights are offset by the number of preferred targets per-package to
calculate node_target_weight.
Also fixes a bug in the test for preferred targets that was making the test easier to pass
than it should be.
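A toy model of the re-normalization in plain Python (illustrative only; the actual change is in the ASP encoding):
```python
def node_target_weight(target, preferred, default_weight):
    # Preferred targets weigh 0..n-1; every other target has its default
    # weight offset by n, so no weight is ever negative and adding a node
    # to the DAG can never lower the total cost.
    if target in preferred:
        return preferred.index(target)
    return len(preferred) + default_weight[target]
```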
* Call Numpy package's set_blas_lapack() and setup_build_environment() in Scipy package
* Remove broken link from comment
* Use the `.package` attribute of the spec to avoid an import (see the sketch below)
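A minimal sketch of the result (assuming the two py-numpy methods named above):
```python
from spack.package import *

class PyScipy(PythonPackage):
    def setup_build_environment(self, env):
        # spec["py-numpy"].package reaches the numpy package object
        # without importing its module
        numpy = self.spec["py-numpy"].package
        numpy.setup_build_environment(env)
        numpy.set_blas_lapack()
```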
This PR fixes several issues I noticed while trying to get Spack working on Apple M1.
- [x] `build_environment.py` attempts to add `spec['foo'].libs` and `spec['foo'].headers` to our compiler wrappers for all dependencies using a try-except that ignores `NoLibrariesError` and `NoHeadersError` respectively. However, the `libs` and `headers` attributes of the Python package were erroneously raising `RuntimeError` instead.
- [x] `spack external find python` (used during bootstrapping) currently has no way to determine whether or not an installation is `+shared`, so previously we would only search for static Python libs. However, most distributions including XCode/Conda/Intel ship shared Python libs. I updated `libs` to search for both shared and static (order based on the variant) as a fallback (see the sketch after this list).
- [x] The `headers` attribute was recursively searching in `prefix.include` for `pyconfig.h`, but this could lead to non-deterministic behavior if multiple versions of Python are installed and `pyconfig.h` files exist in multiple `<prefix>/include/pythonX.Y` locations. It's safer to search in `sysconfig.get_path('include')` instead.
- [x] The Python installation that comes with XCode is broken, and `sysconfig.get_paths` is hard-coded to return specific directories. This meant that our logic for `platlib`/`purelib`/`include` where we replace `platbase`/`base`/`installed_base` with `prefix` wasn't working and the `mkdirp` in `setup_dependent_package` was trying to create a directory in root, giving permissions issues. Even if you commented out those `mkdirp` calls, Spack would add the wrong directories to `PYTHONPATH`. Added a fallback hard-coded to `lib/pythonX.Y/site-packages` if sysconfig is broken (this is what distutils always did).
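A condensed sketch of the shared/static fallback from the second item (hedged; the real property also handles version-specific library names):
```python
from llnl.util.filesystem import find_libraries
from spack.error import NoLibrariesError
from spack.package import *

class Python(Package):
    @property
    def libs(self):
        # search order is based on the +shared variant, with the other
        # kind of library as a fallback
        prefer_shared = self.spec.satisfies("+shared")
        for shared in (True, False) if prefer_shared else (False, True):
            libs = find_libraries("libpython*", self.prefix, shared=shared, recursive=True)
            if libs:
                return libs
        # the error type build_environment.py actually catches
        raise NoLibrariesError("Unable to find Python libraries in " + str(self.prefix))
```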
This PR supports the creation of securely signed binaries built from spack
develop as well as release branches and tags. Specifically:
- remove internal PR mirror URL generation logic in favor of a buildcache destination
on the command line
- with a single mirror url specified in the spack.yaml, this makes it clearer where
binaries from various pipelines are pushed
- designate some tags as reserved: ['public', 'protected', 'notary']
- these tags are stripped from all jobs by default and provisioned internally
based on pipeline type
- update gitlab ci yaml to include pipelines on more protected branches than just
develop (so include releases and tags)
- binaries from all protected pipelines are pushed into mirrors including the
branch name so releases, tags, and develop binaries are kept separate
- update rebuild jobs running on protected pipelines to run on special runners
provisioned with an intermediate signing key
- protected rebuild jobs no longer use "SPACK_SIGNING_KEY" env var to
obtain signing key (in fact, final signing key is nowhere available to rebuild jobs)
- these intermediate signatures are verified at the end of each pipeline by a new
signing job to ensure binaries were produced by a protected pipeline
- optionally schedule a signing/notary job at the end of the pipeline to sign all
packages in the mirror
- add signing-job-attributes to gitlab-ci section of spack environment to allow
configuration
- signing job runs on special runner (separate from protected rebuild runners)
provisioned with public intermediate key and secret signing key
Old concrete specs were slipping through in `_assign_hash`, and `package_hash` was
attempting to recompute a package hash when we could not know the package at the time
of concretization.
Part of this was that the logic for `_assign_hash` was hard to understand -- it was
called twice from `_finalize_concretization` and had special cases for both args it
was called with. It's much easier to understand the logic here if we just inline it.
- [x] Get rid of `_assign_hash` and just integrate it with `_finalize_concretization`
- [x] Don't call `_package_hash` at all for already-concrete specs.
- [x] Add regression test.
Use `spack-build` as the build dir to avoid a recursive link error.
```
config.status: linking /var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/s3j/spack-stage/spack-stage-sed-4.8-wraqsot6ofzvr3vrgusx4mj4mya5xfux/spack-src/GNUmakefile to GNUmakefile
config.status: executing depfiles commands
config.status: executing po-directories commands
config.status: creating po/POTFILES
config.status: creating po/Makefile
==> sed: Executing phase: 'build'
==> [2022-05-25-14:15:51.310333] 'make' '-j8' 'V=1'
make: GNUmakefile: Too many levels of symbolic links
make: stat: GNUmakefile: Too many levels of symbolic links
make: *** No rule to make target `GNUmakefile'. Stop.
```
This PR introduces a new build cache layout and package format, with improvements for
both efficiency and security.
## Old Format
Currently a binary package consists of a `spec.json` file at the root and a `.spack` file,
which is a `tar` archive containing a copy of the `spec.json` file, possibly a detached
signature (`.asc`) file, and a tar-gzip compressed archive containing the install tree.
```
build_cache/
# metadata (for indexing)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
<arch>/
<compiler>/
<name>-<ver>/
# tar archive
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spack
# tar archive contents:
# metadata (contains sha256 of internal .tar.gz)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
# signature
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json.asc
# tar.gz-compressed prefix
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.tar.gz
```
After this change, the nesting has been removed so that the `.spack` file is the
compressed archive of the install tree. Now signed binary packages take the
form of a clearsigned `spec.json` file (a `spec.json.sig`) at the root, while unsigned
binary packages will contain a `spec.json` at the root.
## New Format
```
build_cache/
# metadata (for indexing, contains sha256 of .spack file)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
# clearsigned spec.json metadata
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json.sig
<arch>/
<compiler>/
<name>-<ver>/
# tar.gz-compressed prefix (may support more compression formats later)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spack
```
## Benefits
The major benefit of this change is that the signatures on binary packages can be
verified without:
1. Having to download the tarball, or
2. Having to extract an unknown tarball.
(1) is an improvement in efficiency; (2) is a security fix: we now ensure that we trust the
binary before we try to run it through `tar`, which avoids potential attacks.
## Backward compatibility
Also after this change, spack should still be able to handle the previous buildcache
structure and binary mirrors with mixed layouts.
This PR builds on #28392 by adding a convenience command to create a local mirror that can be used to bootstrap Spack. This is to overcome the inconvenience in setting up this mirror manually, which has been reported when trying to setup Spack on air-gapped systems.
Using this PR, the user can create a bootstrapping mirror on a machine with internet access:
```console
% spack bootstrap mirror --binary-packages /opt/bootstrap
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror
To register the mirror on the platform where it's supposed to be used run the following command(s):
% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
```
The mirror has to be moved over to the air-gapped system and registered using the commands shown at the prompt. The command has options to:
1. Add pre-built binaries downloaded from Github (default is not to add them)
2. Add development dependencies for Spack (currently the Python packages needed to use spack style)
* bootstrap: refactor bootstrap.yaml to move sources metadata out
* bootstrap: allow adding/removing custom bootstrapping sources
This operation can be performed from the command line since
new subcommands have been added to `spack bootstrap`
* Add --trust argument to spack bootstrap add
* Add a command to generate a local mirror for bootstrapping
* Add a unit test for mirror creation
* Allow Kokkos with OpenMPTarget backend
* Restrict SYCL and OpenMPTarget to C++17 or higher
* Improve C++ standard check for SYCL and OpenMPTarget
* Fix indentation
Currently, environments can either be concretized fully together or fully separately. This works well for users who create environments for interoperable software and can use `concretizer:unify:true`. It does not allow environments with conflicting software to be concretized for maximal interoperability.
The primary use-case for this is facilities providing system software. Facilities provide multiple MPI implementations, but all software built against a given MPI ought to be interoperable.
This PR adds a concretization option `concretizer:unify:when_possible`. When this option is used, Spack will concretize specs in the environment separately, but will optimize for minimal differences in overlapping packages.
* Add a level of indirection to root specs
This commit introduces the "literal" atom, which comes with
a few different "arities". The unary "literal" contains an
integer that is the ID of a spec literal. Other "literals"
contain information on the requests made by literal ID. For
instance `zlib@1.2.11` generates the following facts:
```
literal(0,"root","zlib").
literal(0,"node","zlib").
literal(0,"node_version_satisfies","zlib","1.2.11").
```
This should help with solving large environments "together
where possible", since later literals can now be solved
together in batches.
* Add a mechanism to relax the number of literals being solved
* Modify spack solve to display the new criteria
Since the new criterion sits above all the build criteria,
we need to modify the way we display the output.
Originally done by Greg in #27964 and cherry-picked
to this branch by the co-author of the commit.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Inject reusable specs into the solve
Instead of coupling the PyclingoDriver() object with
spack.config, inject the concrete specs that can be
reused.
A method level function takes care of reading from
the store and the buildcache.
* spack solve: show output of multi-rounds
* add tests for best-effort coconcretization
* Enforce having at least a literal being solved
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Py-x21 now works, needs dependencies
Conflicts:
var/spack/repos/rit-rc/packages/py-x21/package.py
* Added dependencies to py-x21
* Making flake style check happy
* [py-x21] flake8
* [py-x21]
- added homepage
- added placeholder description
- added comment about checksums
* [py-x21] added darwin support and fixed issue with python 3.7 wheel name
* [py-x21] adding checksum hash
* [py-x21] removed duplicate py-pynacl
* [py-x21]
- updated description
- updated version listing to have a different version for each version
of python. Also, versions dependent on sys.platform
- updated url_for_version to not require post concretized information so
that spack checksum works
* [py-x21] isort
Co-authored-by: vehrc <vehrc@rit.edu>
Previously the regex was only checking for presence of quotes as a beginning
or end character and not a matching set. This erroneously identified the
following *single* argument as being quoted:
source bashenvfile &> /dev/null && python3 -c "import os, json; print(json.dumps(dict(os.environ)))"
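A small illustration of the fix (a sketch, not Spack's exact regex):
```python
import re

# require a *matching* pair of quotes around the whole argument instead of
# accepting a quote character at either end
MATCHED_QUOTES = re.compile(r"^'.*'$|^\".*\"$")

def is_quoted(arg):
    return MATCHED_QUOTES.match(arg) is not None

# ends with a quote but is not a single quoted argument:
assert not is_quoted('source bashenvfile &> /dev/null && python3 -c "print()"')
assert is_quoted("'a single quoted argument'")
```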
rocm-5.1.0 removed `librocrand.so` from the `ROCM_DIR/rocrand/lib` location (but the includes are still at this location):
```
/opt/rocm-5.0.2/lib/librocrand.so
/opt/rocm-5.0.2/rocrand/lib/librocrand.so
/opt/rocm-5.1.0/lib/librocrand.so
drwxr-xr-x 2 root root 617 Mar 8 08:20 /opt/rocm-5.0.2/rocrand/include
drwxr-xr-x 2 root root 617 Mar 31 09:48 /opt/rocm-5.1.0/rocrand/include
```
Add a config option to strip `-Werror*` or `-Werror=*` from compile lines everywhere.
```yaml
config:
keep_werror: false
```
By default, we strip all `-Werror` arguments out of compile lines, to avoid unwanted
failures when upgrading compilers. You can re-enable `-Werror` in your builds if
you really want to, with either:
```yaml
config:
keep_werror: all
```
or to keep *just* specific `-Werror=XXX` args:
```yaml
config:
keep_werror: specific
```
This should make swapping in newer versions of compilers much smoother when
maintainers have decided to enable `-Werror` by default.
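An illustrative filter showing how the three settings behave (a sketch; the real stripping happens in Spack's compiler wrappers):
```python
def filter_werror(flags, keep_werror=None):
    if keep_werror == "all":
        return list(flags)
    if keep_werror == "specific":
        # keep -Werror=XXX, but still drop bare -Werror
        return [f for f in flags if f != "-Werror"]
    # default (keep_werror: false): drop -Werror and -Werror=XXX alike
    return [f for f in flags if not f.startswith("-Werror")]
```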
Parse error information is kept for specs, but it doesn't seem like we propagate it
to the user when we encounter an error. This fixes that.
e.g., for this error in a package:
```python
depends_on("python@:3.8", when="0.900:")
```
Before, with no context and no clue that it's even from a particular spec:
```
==> Error: Unexpected token: ':'
```
With this PR:
```
==> Error: Unexpected token: ':'
Encountered when parsing spec:
0.900:
^
```
* Added autotools configure flags to ensure that hwloc finds the correct
version of CUDA that it was concretized against, rather than the first
one that pkg-config finds.
* Added support for finding the correct version of ROCm libraries. Fixed Flake8.
* Fixed guard on finding ROCm library
* [py-h2] py-wheel is implied by PythonPackage
* [py-h2] python dependencies should be type=('build', 'run')
* [py-h2] fixed dependencies for py-h2@4.0.0
* [py-h2] added version 3.2.0
* [py-h2] added version 4.1.0
* [py-h2] Older version requires py-enum34 for older versions of python
Add two new cloud pipelines for E4S on Amazon Linux, include arm and x86 (v3 + v4) stacks.
Notes:
- Updated mpark-variant to remove conflict that no longer exists in Amazon Linux
- The `which` command on Amazon Linux prefixes all results when padded_length is too high. In this case, padded_length<=503 works as expected, so we chose a conservative length of 384.
* Introduce concretizer:unify option to replace spack:concretization
* Deprecate concretization
* Make spack:concretization overrule concretize:unify for now
* Add environment update logic to move from spack:concretization to spack:concretizer:unify
* Migrate spack:concretization to spack:concretizer:unify in all locations
* For new environments make concretizer:unify explicit, so that defaults can be changed in 0.19
* Update h5bench maintainers and versions
* Include version 1.1 for h5bench
* Correct release hash and set default version
* Update .tar.gz version
* Include new version and update runtime
* Update year
* Update package.py
* Update package.py
fixes #30700
To avoid clingo adding penalties for not using the
default value for a variant, it's better to model
the variant as conditional where possible.
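For example, with a hypothetical package and version range:
```python
from spack.package import *

class Example(Package):
    # the variant only exists on versions that support it, so clingo pays
    # no "non-default value" penalty on specs where it is absent
    variant("shared", default=True, when="@2.0:", description="Build shared libraries")
```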
* This commit replaces `Boost.with_default_variants` with the boost variants that packages precisely depend upon. This is the third batch of 16 packages with modified boost dependencies.
* style fix
* Update var/spack/repos/builtin/packages/sympol/package.py
Co-authored-by: Tim Haines <thaines.astro@gmail.com>
* fix style
* Apply suggestions from code review
Co-authored-by: Tim Haines <thaines.astro@gmail.com>
* Fix Trilinos boost deps
* Fix style
Co-authored-by: Tim Haines <thaines.astro@gmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Add a `build_type` variant, which allows building optimized compilers,
as well as target libraries (libstdc++ and friends).
The default is `build_type=RelWithDebInfo`, which corresponds to GCC's
default of -O2 -g.
When building with `+bootstrap %gcc`, also add Spack's arch specific
flags using the common denominator between host and new GCC.
This is done by creating a `config/spack.mk` file in `def patch` that looks
as follows:
```
BOOT_CFLAGS := $(filter-out -O% -g%, $(BOOT_CFLAGS)) -O2 -g -march=znver2 -mtune=znver2
CFLAGS_FOR_TARGET := $(filter-out -O% -g%, $(CFLAGS_FOR_TARGET)) -O2 -g -march=znver2 -mtune=znver2
CXXFLAGS_FOR_TARGET := $(filter-out -O% -g%, $(CXXFLAGS_FOR_TARGET)) -O2 -g -march=znver2 -mtune=znver2
```
The oneapi and dpcpp compilers are essentially the same except for which
binary is used for CXX. Spack will detect them as "mixed toolchain" and
not inject compiler optimization flags. This will be needed once
archspec has entries for the oneapi and dpcpp compilers. This PR detects
when dpcpp and oneapi are in the toolchains list and explicitly sets
`is_mixed_toolchain` to `False`.
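A toy model of the check (the pair table is hypothetical; the real logic lives on the compiler class):
```python
import os

# (CC, CXX) basename pairs that count as a single toolchain
MATCHED_PAIRS = {
    ("gcc", "g++"),
    ("clang", "clang++"),
    ("icx", "icpx"),   # oneapi
    ("icx", "dpcpp"),  # dpcpp differs from oneapi only in the CXX binary
}

def is_mixed_toolchain(cc_path, cxx_path):
    pair = (os.path.basename(cc_path), os.path.basename(cxx_path))
    return pair not in MATCHED_PAIRS
```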
* [py-openslide-python] added version 1.1.2 and set max py-setuptools version for 1.1.1
* [py-openslide-python]
- setuptools required for all possible newer versions
- python is type build run
* [py-openslide-python] use pil provider
* Add version 3.0 and 3.1 and prelim OpenMP support
* Fix flag handler missing spec variable
* Use self.compiler.openmp_flag instead of -fopenmp
* Fix whitespace
Fixes qt configure errors with external openssl on older systems (rhel7)
See
efc02f9cc3/dist/changes-5.15.0 (L346)
This means that from now on, `qt ^openssl@1.0` gets you `qt@5.15.4 ~ssl`:
clingo chooses the latest qt version **but disables ssl support**.
Error messages for the clingo concretizer have proven challenging. The current messages are incredibly vague and often don't help users at all. Unsat cores in clingo are not guaranteed to be minimal, and lead to cores that are either not useful or need to be post-processed for hours to reach a minimal core.
Following up on an idea from a Slack conversation with kwryankrattiger, this PR takes a new approach. We eliminate most integrity constraints and minima/maxima on choice rules in clingo, and instead force invalid states to imply an error predicate. The error predicate can include context on the cause of the error (package, version, etc.). These error predicates are then heavily optimized against, to ensure that we do not include error facts in the solution when a solution without them could be generated. When post-processing the clingo solution to construct specs, any error facts cause the program to raise an exception. This leads to much more legible error messages.
Each error predicate includes a priority and an error message; the message is formatted with the predicate's remaining arguments. The priority is used to ensure that when clingo has a choice of which rules to violate, it chooses the one that will be most informative to the user.
Performance:
"fresh" concretizations appear to suffer a ~20% performance penalty under this branch, while "reuse" concretizations see a speedup of around 33%.
Possible optimizations if users still see unhelpful messages:
There are currently 3 levels of priority of the error messages. Additional priorities are possible, and can allow us finer granularity to ensure more informative error messages are provided in lieu of less informative ones.
Future work:
Improve tests to ensure that every possible rule implying an error message is exercised
A non-existent upstream should not be fatal: it could only mean it is
not deployed yet. In the meantime, it should not block the user from
rebuilding anything they need.
A warning is still emitted, to let the user decide if this is ok or not.
* Fix for xtensor-xsimd
* Add sha256 for all new releases
* renamed ufcx package
* Update sha for ffcx
* fixed hashes and modified fenics-dolfinx to depend on ufcx
* cleaned and fixed dependency types
* use spec.satisfies in cmake_args
* bumped to ufcx@0.4.1
* address PR comments
* fix hashes
* update parmetis in cmake_args to reflect default setting
* update versions
* Add dependency fix
* bump basix to 0.4.2 and address PR comments
* Versioning fixes
* Use xtensor-0.24: and loosen pybind11
* Add conflicts for partitioners
* Updates on partitioners
* use define_from_variant
* Tidy up some dependencies
* Work on multi-variants for graph partitioners
* Fix KaHIP issue.
KaHIP changed the name of its library from 'interface' to 'kahip'. Pin earlier versions of DOLFINx to earlier versions of KaHIP for proper detection.
Co-authored-by: Chris Richardson <chris@bpi.cam.ac.uk>
Co-authored-by: Garth N. Wells <gnw20@cam.ac.uk>
Fixes missing chgrp on symlinks in package installations, and errors on
symlinks referencing non-existent or non-writable locations.
Note: `os.chown(.., follow_symlinks=False)` is python3 only, but
`os.lchown` exists in both versions.
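A minimal sketch of the portable approach:
```python
import os

def chgrp(path, gid):
    if os.path.islink(path):
        # change the link itself: works even if the target is missing or
        # unwritable, and os.lchown exists on python 2 and 3 alike
        os.lchown(path, -1, gid)  # -1 leaves the owner unchanged
    else:
        os.chown(path, -1, gid)
```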
* Change license dir from hard-coded to a configurable item
* Change config item to be a string not an array
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Trying to compute `dag_hash()` or `package_hash()` on a concrete spec that doesn't have
a `_package_hash` attribute would attempt to recompute the package hash.
This most commonly manifests as a failed lookup of a namespace if you attempt to uninstall
or compute the hashes of packages in external repositories that aren't registered, e.g.:
```console
> spack spec --json c/htno
==> Error: Unknown namespace: myrepo
```
While it wouldn't change the already-assigned `dag_hash` value, this behavior is
incorrect, since the package file for a previously concrete spec:
1. might have changed since concretization,
2. might not exist anymore, or
3. might just not be findable by Spack.
This PR ensures that the package hash can't be computed on older concrete specs. Instead
of calling `package_hash()` from within `to_node_dict()`, we now check for the `_package_hash`
attribute and only add the package_hash to the spec record if it's there.
This PR also handles the tricky semantics of computing `package_hash()` at concretization
time. We have to compute it *before* marking the spec concrete so that `to_node_dict` can
use it. But this means that the logic for `package_hash()` can't rely on `spec.concrete`,
as it is called *during* concretization. Instead of checking for concreteness, `package_hash()`
now checks `_patches_assigned()` to determine whether it should add them to the package
hash.
- [x] Add an assert to `package_hash()` so it can't be called on specs for which it
would be wrong.
- [x] Add an `_assign_hash()` method to handle tricky semantics of `package_hash`
and `dag_hash`.
- [x] Rework concretization to call `_assign_hash()` before and after marking specs
concrete.
- [x] Rework content hash part of package hash to check for `_patches_assigned()`
instead of `spec.concrete`.
- [x] regression test
* [py-tensorflow-hub] applied patch for newer version of zlib
* [py-tensorflow-hub] patch also applies to 0.11.0
* [py-tensorflow-hub] Audit fix
1. patch URL in package py-tensorflow-hub must end with ?full_index=1
Newer versions of gobject-introspection require Meson to build. Convert
the package into a hybrid one that still supports older versions using
Autotools.
* arm-forge: Download via HTTPS
Update download URL to use HTTPS (rather than HTTP)
* arm-forge: Allow +probe to depend on python3
Allow python dependency required for arm-forge+probe to be python3 as
well as 2.7.x
* arm-forge: Add versions up to 22.0.1
By default, libfuse installs helper programs like `fusermount3`, which
are mostly useless if not installed with setuid (that is, `+useroot`).
However, their presence makes it complicated to use globally installed
versions, which can be combined with a Spack-installed FUSE library.
In particular, on systems that have a setuid fusermount3 binary, but no
libfuse-dev installed, it is nice to be able to build libfuse with Spack, and
have it call the system setuid executable.
* Correct include and library paths using a patch file so that RVS builds the
following library files in Spack:
libperf.so.0.0
libpebb.so.0.0
libiet.so.0.0
libgst.so.0.0
libpqt.so.0.0
libmem.so.0.0
libbabel.so.0.0
* Replace ROCM_PATH with RPATH in deviceid.sh before installing in the Spack build.
* Reduce multiple environment variables for the HIP and HSA paths
- Removed gl dependency.
- Specify clang as cmake compiler as gcc was being
improperly picked up. As a result, ffi include
path was needed in C/CXX flags.
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Previously we sorted by hash values for `spack graph`, but changing hashes can make the
test brittle and the node order seem nondeterministic to users.
- [x] Sort nodes in `spack graph` by the default edge order, which takes into account
parent and child names as well as dependency types.
- [x] Update ASCII test output for new order.
The dependency check currently checks whether there are only build
dependencies left for a particular package. However, the database also
contains uninstalled packages, which can cause the check to fail.
For instance, with `bison` and `flex` having already been uninstalled,
`m4` will have the following dependents:
```
bison ('build', 'run')--> m4
flex ('build',)--> m4
libnl ('build',)--> m4
```
`bison` and `flex` should be ignored in this case because they are not
installed anymore.
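A minimal sketch of the corrected check (helper names here are hypothetical):
```python
def safe_to_uninstall(spec, db):
    for parent in spec.dependents():
        if not db.query(parent, installed=True):
            # e.g. bison/flex above: still in the database, but uninstalled
            continue
        if deptypes_between(parent, spec) != ("build",):  # hypothetical helper
            return False
    return True
```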
Fixes #30673
* ceed50: add ceed 5.0.0 and pumi 2.2.7
* libceed-0.10
* ceed50: add omegah
* omega-h: mpi and cuda builds work
* omega-h: fix style
* New package: libfms
* New version: gslib@1.0.7
CEED: add some TODO items for the 5.0 release
* ceed: variant name consistent with package name
* LAGHOS: allow newer versions of MFEM to be used with v3.1
* LIBCEED: add missing 'install' target in 'install_targets'
* CEED: address some TODO items + some tweaks
* MFEM: add new variant for FMS (libfms)
* CEED: v5.0.0 depends on 'libfms' and 'mfem+fms'
* RATEL: add missing 'install' target in 'install_targets'
* CEED: add dependency for v5.0.0 on Ratel v0.1.2
* CEED: add Nek-related dependencies for ceed@5.0.0
* CEED: v5.0.0 depends on MAGMA v2.6.2
* libCEED: set the `CUDA_ARCH` makefile parameter
* libCEED: set the `HIP_ARCH` makefile parameter
Co-authored-by: Jed Brown <jed@jedbrown.org>
Co-authored-by: Veselin Dobrev <dobrev@llnl.gov>
Co-authored-by: Veselin Dobrev <v-dobrev@users.noreply.github.com>
#24556 merged in support for Python's .zip file support via ZipFile.
However as per #30200 ZipFile does not preserve file permissions of
the extracted contents. This PR returns to using the `unzip`
executable on non-Windows systems (as was the case before #24556)
and now uses `tar` on Windows to extract .zip files.
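A sketch of the platform dispatch:
```python
import sys
from spack.util.executable import which

def extract_zip(archive_path):
    if sys.platform == "win32":
        # Windows ships bsdtar, which understands .zip archives
        which("tar")("-xf", archive_path)
    else:
        # unzip preserves the permission bits that ZipFile drops
        which("unzip")("-q", archive_path)
```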
We previously had checks in `directory_layout` to check for build-dependency
conflicts when we weren't storing build dependencies. We don't need
those anymore; we can just rely on the DAG hash now that it includes everything
we know about each spec.
- [x] Remove vestigial code for checking installed spec against concrete spec
in `ensure_installed()`
- [x] Remove `SpecHashCollisionError` -- if specs have the same hash now, they're
the same as far as `DirectoryLayout` should be concerned.
- [x] Convert spec comparison to `dag_hash()` comparison when adding extensions.
The database now stores full hashes, so we need to adjust the criteria we use to
determine if something can be uninstalled. Specifically, it's ok to uninstall things that
have remaining build-only dependents.
With the original DAG hash, we did not store build dependencies in the database, but
with the full DAG hash, we do. Previously, we'd never tell the concretizer about build
dependencies of things used by hash, because we never had them. Now, we have to avoid
telling the concretizer about them, or they'll unnecessarily constrain build
dependencies for new concretizations.
- [x] Make database track all dependencies included in the `dag_hash`
- [x] Modify spec_clauses so that build dependency information is optional
and off by default.
- [x] `spack diff` asks `spec_clauses` for build dependencies for completeness
- [x] Modify `concretize.lp` so that reuse optimization doesn't affect fresh
installations.
- [x] Modify concretizer setup so that it does *not* prioritize installed versions
over package versions. We don't need this with reuse, so they're low priority.
- [x] Fix `test_installed_deps` for full hash and new concretizer (does not work
for old concretizer with full hash -- leave this for later if we need it)
- [x] Move `test_installed_deps` mock packages to `builtin.mock` for easier debugging
with `spack -m`.
- [x] Fix `test_reuse_installed_packages_when_package_def_changes` for full hash
- [x] update test to use `build_hash` instead of `dag_hash`, as we're testing for
graph structure, and specifically NOT testing for package changes.
- [x] make hash descriptors callable on specs to simplify syntax for invoking them
- [x] make `Spec.spec_hash()` public
This removes all but one usage of runtime hash. The runtime hash was being used to write
historical lockfiles for tests, but we don't need it for that; we can just save those
lockfiles.
- [x] add legacy lockfiles for v1, v2, v3
- [x] fix bugs with v1 lockfile tests (the dummy lockfile we were writing was not actually
a v1 lockfile because it used the new spec file format).
- [x] remove all but one runtime_hash usage -- that one needs a small rework of the
concretizer to really fix, as it's about separate concretization of build
dependencies.
- [x] Document the history of the lockfile format in `environment/__init__.py`
Some test cases had to be modified in a kludgy way so that abstract specs made
concrete would have versions on them. We shouldn't *need* to do this, as the
only reason we care is because the content hash has to be able to get an archive
for a version.
This modifies the content hash so that it can be called on abstract specs,
including only relevant content.
This does NOT add a partial content hash to the DAG hash, as we do not really
want that -- we don't need in-memory spec hashes to need to load package files.
It just makes `Package.content_hash()` less prickly and tests easier to
understand.
`spack monitor` expects a field called `spec_full_hash`, so we shouldn't change that.
Instead, we can pass a `dag_hash` (which is now the full hash) but not change the field
name.
`hashes_final` was used to indicate when a spec was concrete but possibly lacked
`full_hash` or `build_hash` fields. This was only necessary because older Spacks
didn't generate them, and we want to avoid recomputing them, as we likely do not
have the same package files as existed at concretization time.
Now, we don't need to do that -- there is only the DAG hash and specs are either
concrete and have a `dag_hash`, or not concrete and have no `dag_hash`. There's
no middle ground.
Without some enforcement of spec ordering, python 2 produced
different results in the affected test than did python 3. This
change makes the arbitrary but reproducible decision to sort
the specs by their lockfile key alphabetically.
The full hash now appears twice in the spec dict, so replacing just
the value would replace it under both "hash" and "full_hash". Only replace
the one that appears after "full_hash".
I'm actually not sure what purpose this test served, so maybe it
could be removed, as it may be testing some distinction between
full and dag hash which no longer exists.
For a long time, Spack has used a coarser hash to identify packages
than it likely should. Packages are identified by `dag_hash()`, which
includes only link and run dependencies. Build dependencies are
stripped before hashing, and we have not included hashes of build
artifacts or the `package.py` files used to build. This means the
DAG hash actually doesn't represent all the things Spack can build,
and it reduces reproducibility.
We did this because, in the early days, users were (rightly) annoyed
when a new version of CMake, autotools, or some other build dependency
would necessitate a rebuild of their entire stack. Coarsening the hash
avoided this issue and enabled a modicum of stability when only reusing
packages by hash match.
Now that we have `--reuse`, we don't need to be so careful. Users can
avoid unnecessary rebuilds much more easily, and we can add more
provenance to the spec without worrying that frequent hash changes
will cause too many rebuilds.
This commit starts the refactor with the following major change:
- [x] Make `Spec.dag_hash()` include build, run, and link
dependencies and the package hash (it is now equivalent to
`full_hash()`).
It also adds a couple of bugfixes for problems discovered during
the switch:
- [x] Don't add a `package_hash()` in `to_node_dict()` unless
the spec is concrete (fixes breaks on abstract specs)
- [x] Don't add source ids to the package hash for packages without
a known fetch strategy (many mock packages are like this)
- [x] Change how `Spec.patches` is memoized. Using
`llnl.util.lang.memoized` on `Spec` objects causes specs to
be stored in a `dict`, which means they need a hash. But,
`dag_hash()` now includes patch `sha256`'s via the package
hash, which can lead to infinite recursion
For tutorial builds, we should continue to allow deprecated builds to be installed. We
can update them as needed when we update the tutorial, but we don't need to correct them
immediately on deprecation in CI.
- [x] add `deprecated:true` to tutorial `spack.yaml` config.
* updating googletest version to 1.11 to avoid GTEST_DISALLOW_ASSIGN_ error
* limiting the version scope
* modified the version limit
Co-authored-by: mohan babu <mohbabul@amd.com>
Upstream neovim builds with luajit-openresty or luajit in almost all
cases. To support the current usage, a user can specify that they want
lua, but this will allow the use of the normal (faster, better tested
and better maintained) setup.
* Add checksum for py-pylint@2.13.5
* Update dependencies
* Add checksum for py-astroid@2.11.4
* Correct py-toml addition and add py-tomli dependency
* Remove py-pytoml dependency for versions @2.13:
* Modify py-astroid version range
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Discontinue py-astroid dependency @2.8.0:2.8 for new versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Discontinue py-mccabe dependency @0.6.0:0.6 for new versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Remove mccabe and setuptools-scm dependencies
* Update astroid dependencies
* Extend py-typed-ast version range to future releases
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-dill only required for version 2.13.5 and above
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add mccabe dependency and correct setuptools run dependency
* Setuptools fix
* Add setuptools as run dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pyarrow: Add version 7.0.0
* Add version constraints on dependencies
* Add version 8.0.0
* arrow: Add version 8.0.0
* py-pyarrow: Allow version 8.0.0 of arrow
* Bump up rocm release version to rocm-5.1.0
* update rocm-opencl for rocm-5.1.0 release
* update the migraphx,miopen(hip,opencl),mivisionx,rocm-tensile
* update the mlirmiopen checksum version
- [x] Add `mkdir -p` and `chmod` to ensure `/home/spack-test` exists and
has correct permissions.
- [x] Remove version comments from dependabot-managed action commits
- [x] Don't duplicate comment describing required fixes for distros with
patched git
`spack pkg list` tests were broken by #29593 for cases when your `builtin.mock` repo
still has stale backup files (or, really, stale directories) sitting around. This
happens if you switch branches a lot. In this case, things like this were causing
erroneous packages in the mock listing:
```
var/spack/repos/builtin.mock/packages/
foo/
package.py~
```
- [x] make `list_packages` consider only directories with one-deep `package.py` files.
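A sketch of the stricter listing:
```python
import os

def list_packages(repo_root):
    # a package is a directory that directly contains package.py, so stray
    # backups like `package.py~` no longer show up
    pkg_dir = os.path.join(repo_root, "packages")
    return sorted(
        entry
        for entry in os.listdir(pkg_dir)
        if os.path.isfile(os.path.join(pkg_dir, entry, "package.py"))
    )
```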
Reworking lua to allow easier substitution of the base lua implementation.
Also adding in a maintained version of luajit and re-factoring the entire stack
to use a custom build-system to centralize functionality like environment
variable management and luarocks installation.
The `lua-lang` virtual is now versioned so that a package that requires
Lua 5.1 semantics can get any lua, but one that requires 5.2 will only
get upstream lua.
The luaposix package requires lua-bit32, but only when built with a
lua conforming to version 5.1. This adds the package, and the
dependencies, but exposed a problem with luarocks dependency
detection. Since we're installing each package in its own "tree" and
there's no environment variable to list extra trees, spack now
generates a luarocks config file that lists all the trees of all the
dependencies, and references it by setting `LUAROCKS_CONFIG`
in the build environment of every LuaPackage. This allows luarocks
to find the spack installed dependencies correctly rather than
trying (and failing) to download them.
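A hedged sketch of the config generation (file name and layout are illustrative):
```python
import os

def generate_luarocks_config(pkg):
    # every dependency prefix becomes a rocks tree, since luarocks has no
    # environment variable for listing extra trees
    trees = ", ".join(
        '"{0}"'.format(dep.prefix)
        for dep in pkg.spec.traverse(deptype=("build", "run"))
    )
    path = os.path.join(pkg.stage.source_path, "spack-luarocks.lua")
    with open(path, "w") as f:
        f.write("rocks_trees = {{ {0} }}\n".format(trees))
    return path  # exported as LUAROCKS_CONFIG in the build environment
```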
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Some of our `git` tests still fail when `init.defaultBranch` is set to something other
than `master`.
- [x] get rid of all hard-coded `master` refs
- [x] Use `'default'` to key tests that use the default branch
When running on Windows, Spack may generate files in the stage/install
prefixes that do not have write permissions, which prevents the
removal of those directories (e.g. when cleaning stages or uninstalling).
There should be a refactoring to avoid this in the first place, but that
is assumed to be longer term, so the temporary fix is to make such files
writable if they are not. This PR:
* Automatically handles these permissions errors when uninstalling
packages from the Spack root (makes then writable)
* Updates similar already-existing logic when removing Spack-managed
stage directories (the error-handling was assuming all errors were
permissions errors and was therefore handling other errors
inappropriately)
Note: these permissions issues only appear on Windows so this logic is
only applied there (permissions are not modified for this purpose on
Linux etc.).
This also adds special handling for a case where calling `isdir`
on an `os.DirEntry` object would fail for improperly-created symlinks
(e.g. on Windows, using `os.symlink` without `target_is_directory=True`).
Note this specific issue only came up when enabling link_tree tests
(specifically `source_merge_visitor_cant_be_cyclical`).
* create function for translating compiler names on specs/compiler entries in manifest
* add tests for translating compiler names on spec/compiler entries
* use higher-level function in test and add comment to prefer testing via higher-level function
* The opensuse clingo check should not fail on account of this PR, but I cannot get it to pass by restarting via the CI UI
* Addition of 1.1.9dev version.
* Small style fix -- extra blank line.
* Update var/spack/repos/builtin/packages/py-maestrowf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-maestrowf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-maestrowf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-maestrowf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Additional dependencies and version constraints.
* Revert to py-poetry.
* Remove run from cryptography (build only).
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Force GCC to always provide a C++14 flag
Updated gnu logic so that the c++14 flag for g++ is always propagated.
This fixes issues with build systems that error out if passed an empty
string for a flag.
Engaging in the best kind of software engineering by updating the unit
test to pass with the value it is now passed. This should better match
the expected flag for g++ compiling with the C++14 standard
* Add py-docutils@0.16
* Add sphinx-tabs package
* Update var/spack/repos/builtin/packages/py-sphinx-tabs/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-sphinx-tabs/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-sphinx-tabs/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This ensures that multiple spack instances called from `make` will respect the maximum number of jobs in the POSIX jobserver across packages.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Problem: GCC 9.4.0 catches a bad integer comparison in
resource/hlapi/bindings/c++/reapi_cli_impl.hpp in flux-sched@0.22.0
and current master.
Add a patch to work around the problem until an upstream fix is
available.
* use the init.defaultBranch name, not master
* make tcl and modules/common independent
Both used to use not just the same directory, but the same *file* for
their outputs. In parallel this can cause problems, but it can also
accidentally allow expected failures to pass if the file is left around
by mistake.
* use a non-global misc_cache in tests
* make pkg tests resilient to gitignore
* make source cache and module directories non-global
`make` solves a lot of headaches that would otherwise have to be implemented in Spack:
1. Parallelism over packages through multiple `spack install` processes
2. Orderly output of parallel package installs thanks to `make --output-sync=recurse` or `make -Orecurse` (works well in GNU Make 4.3; macOS is unfortunately on a 16-year-old 3.x version, but it's one `spack install gmake` away...)
3. Shared jobserver across packages, which means a single `-j` to rule them all, instead of manually finding a balance between `#spack install processes` & `#jobs per package` (See #30302).
This PR adds the `spack env depfile` command that generates a Makefile with DAG hashes as
targets, and DAG hashes of dependencies as prerequisites, and a command
along the lines of `spack install --only=package /hash` to just install
a single package.
It exposes two convenient phony targets: `all`, `fetch-all`. The former installs the environment, the latter just fetches all sources. So one can either use `make all -j16` directly or run `make fetch-all -j16` on a login node and `make all -j16` on a compute node.
Example:
```yaml
spack:
specs: [perl]
view: false
```
running
```
$ spack -e . env depfile --make-target-prefix env | tee Makefile
```
generates
```Makefile
SPACK ?= spack
.PHONY: env/all env/fetch-all env/clean
env/all: env/env
env/fetch-all: env/fetch
env/env: env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww
@touch $@
env/fetch: env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc
@touch $@
env/dirs:
@mkdir -p env/.fetch env/.install
env/.fetch/%: | env/dirs
$(info Fetching $(SPEC))
$(SPACK) -e '/tmp/tmp.7PHPSIRACv' fetch $(SPACK_FETCH_FLAGS) /$(notdir $@) && touch $@
env/.install/%: env/.fetch/%
$(info Installing $(SPEC))
+$(SPACK) -e '/tmp/tmp.7PHPSIRACv' install $(SPACK_INSTALL_FLAGS) --only-concrete --only=package --no-add /$(notdir $@) && touch $@
# Set the human-readable spec for each target
env/%/cdqldivylyxocqymwnfzmzc5sx2zwvww: SPEC = perl@5.34.1%gcc@10.3.0+cpanm+shared+threads arch=linux-ubuntu20.04-zen2
env/%/gv5kin2xnn33uxyfte6k4a3bynhmtxze: SPEC = berkeley-db@18.1.40%gcc@10.3.0+cxx~docs+stl patches=b231fcc arch=linux-ubuntu20.04-zen2
env/%/cuymc7e5gupwyu7vza5d4vrbuslk277p: SPEC = bzip2@1.0.8%gcc@10.3.0~debug~pic+shared arch=linux-ubuntu20.04-zen2
env/%/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: SPEC = diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws: SPEC = libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
env/%/yfz2agazed7ohevqvnrmm7jfkmsgwjao: SPEC = gdbm@1.19%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/73t7ndb5w72hrat5hsax4caox2sgumzu: SPEC = readline@8.1%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/trvdyncxzfozxofpm3cwgq4vecpxixzs: SPEC = ncurses@6.2%gcc@10.3.0~symlinks+termlib abi=none arch=linux-ubuntu20.04-zen2
env/%/sbzszb7v557ohyd6c2ekirx2t3ctxfxp: SPEC = pkgconf@1.8.0%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/c4go4gxlcznh5p5nklpjm644epuh3pzc: SPEC = zlib@1.2.12%gcc@10.3.0+optimize+pic+shared patches=0d38234 arch=linux-ubuntu20.04-zen2
# Install dependencies
env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww: env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p: env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk
env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao: env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu
env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu: env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs
env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs: env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp
env/clean:
rm -f -- env/env env/fetch env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
```
Then with `make -O` you get very nice orderly output when packages are built in parallel:
```console
$ make -Orecurse -j16
spack -e . install --only-concrete --only=package /c4go4gxlcznh5p5nklpjm644epuh3pzc && touch c4go4gxlcznh5p5nklpjm644epuh3pzc
==> Installing zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
...
Fetch: 0.00s. Build: 0.88s. Total: 0.88s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
spack -e . install --only-concrete --only=package /sbzszb7v557ohyd6c2ekirx2t3ctxfxp && touch sbzszb7v557ohyd6c2ekirx2t3ctxfxp
==> Installing pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
...
Fetch: 0.00s. Build: 3.96s. Total: 3.96s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
```
For Perl, at least for me, using `make -j16` versus `spack -e . install -j16` speeds up the builds from 3m32.623s to 2m22.775s, as some configure scripts run in parallel.
Another nice feature is you can do Makefile "metaprogramming" and depend on packages built by Spack. This example fetches all sources (in parallel) first, prints a message, and only then builds packages (in parallel).
```Makefile
SPACK ?= spack
.PHONY: env
all: env
spack.lock: spack.yaml
$(SPACK) -e . concretize -f
env.mk: spack.lock
$(SPACK) -e . env depfile -o $@ --make-target-prefix spack
fetch: spack/fetch
@echo Fetched all packages && touch $@
env: fetch spack/env
@echo This executes after the environment has been installed
clean:
rm -rf spack/ env.mk spack.lock
ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
```
* Use patches from IBM's Open CE project to enable PyTorch to build on
Power systems.
Cherry-pick a patch to allow earlier versions of PyTorch to build with
CUDA 11.4.
* Update var/spack/repos/builtin/packages/py-torch/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* octopus: adding versions up to 11.4
* octopus: add smoke tests
* octopus: add necessary flags for gcc@10
* octopus: update to compilation and dependencies
* octopus: adding new variants
* octopus: remove 'poke' (as this poke is not in spack [yet])
* octopus: allow compilation from git repo develop branch
* octopus: adapt to spack style requirements
* octopus: add maintainer
* octopus: make tests after install optional
Thank you @tldahlgren
* octopus: follow recommended practice for test input data
Move the two configuration files we use for smoke tests into `test`
subdirectory. Thanks @tldahlgren.
* Adding maintainer
with their agreement by email
* octopus: reduce duplication of flags
- part of code review
* octopus: https is preferred over http
* octopus: remove .99 from versioning information
Thanks to https://github.com/spack/spack/pull/26402, we can drop the
"2:3.99" notation when we mean all versions 2.x and 3.x
Examples: b9e72557e8 (diff-b8373d30b3a141c495c2281273ee6184fc513413142afaf2adac1f406cd6b0d7)
(from review)
* octopus: args.extend([x]) -> args.append(x)
(hint from review)
libassimp has been a dependency for all of 5.x but expressing that has
varied significantly throughout the 5.x lifecycle:
v5.0: qt3d uses internal-only libassimp
v5.5: external-only libassimp
v5.6: either internal or external libassimp via autodetection
v5.9: user-selectable internal-vs-external via -assimp
v5.14: additional qtquick3d module uses -assimp
v5.15: qtquick3d switches to the -quick3d-assimp option
* Fix current bug where the incorrect target is set up
* Add mimalloc package
* Add mimalloc as allocator option to pika
* Add mimalloc as allocator option to hpx
* Set git property globally instead of per-version in pika, hpx, and mimalloc packages
Co-authored-by: Mikael Simberg <mikael.simberg@iki.if>
Strictly, `sed` is a `build` and `run` dependency in all gpi-2
versions, whereas `gawk` is a `run` dependency for gpi-2 versions 1.4.0 and
newer, and a `build` and `run` dependency for older versions (see the sketch below).
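In package terms, that amounts to something like:
```python
from spack.package import *

class Gpi2(AutotoolsPackage):
    depends_on("sed", type=("build", "run"))
    depends_on("gawk", type="run", when="@1.4.0:")
    depends_on("gawk", type=("build", "run"), when="@:1.3")
```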
Gitlab pipelines run for spack already have other S3 storage locations
configured for storage of binaries, so this PR removes the redundant
per-pipeline mirror. As a result, the "cleanup" jobs will no longer be
generated at the end of each pipeline, removing one possible point of
pipeline failure.
The go-bootstrap package doesn't work on aarch64 platforms, so the only way
to build Go is to use gccgo.
Also, some versions of gccgo have a bug that prevents them from compiling
go (see golang/go#47771), so this patch limits gcc to versions newer than
10.4.0 or 11.3.0.
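A sketch of the resulting constraint (the exact bounds are an assumption based on the description above):
```python
from spack.package import *

class Go(Package):
    # gccgo before 10.4.0, and 11.x before 11.3.0, miscompiles Go
    conflicts("%gcc@:10.3", msg="gccgo before 10.4 cannot build Go (golang/go#47771)")
    conflicts("%gcc@11.0:11.2", msg="gccgo 11.0-11.2 cannot build Go (golang/go#47771)")
```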
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* new package: pytaridx
* fixed copyright year
* Update git link
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added type in python depends
* added pypi link
* Update package.py
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Metall package: add dependency to GCC for build test
* Package Metall: add v0.17
* Package Metall: update the package file
* Update var/spack/repos/builtin/packages/metall/package.py
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
* Metall package: add v0.18 and v0.19
* Metall Package: add v0.20
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
Starting with MPICH 3.4, we offer different datatype engine options
(dataloop or yaksa). The default is 'auto', which will choose based on
the device configuration. Starting with MPICH 4.0, building against an
external yaksa library is supported.
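A sketch of how this could surface in the package recipe (variant name and values are assumptions):
```python
from spack.package import *

class Mpich(AutotoolsPackage):
    variant(
        "datatype-engine",
        default="auto",
        values=("auto", "dataloop", "yaksa"),
        description="controls the datatype engine to use",
        when="@3.4:",
    )
    # building against an external yaksa library is supported from 4.0 on
    depends_on("yaksa", when="@4.0: datatype-engine=yaksa")
```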
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Added support for finding the OpenCV package via the find external
command. Included support for identifying variants based on available
shared libraries.
Added support to finding the OpenBLAS package via the find external
command.
Enabled packages to show that they can be discovered via the find
external command in the info message.
Updated the OpenCV and OpenBLAS packages to use the extensible search
mechanism for library extensions on multiple OS platforms.
Corrected how find externals works on Darwin for OpenCV and OpenBLAS
to accommodate that the version numbers are placed before the file
extension instead of after it, as on Linux.
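A condensed sketch of the detection hooks involved (hedged; the real packages also map discovered libraries to variants):
```python
import re

from spack.package import *

class Openblas(MakefilePackage):
    # library names `spack external find` should search for
    libraries = ["libopenblas"]

    @classmethod
    def determine_version(cls, lib):
        # Linux: libopenblas.so.0.3.20 -- Darwin: libopenblas.0.3.20.dylib,
        # i.e. on Darwin the version sits before the extension
        match = re.search(r"libopenblas.*?(\d+\.\d+\.\d+)", lib)
        return match.group(1) if match else None
```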
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* librsb: added v1.2.0.10 (#26043)
* librsb: add v1.2.0.11/v1.3.0.0 (#28636)
* librsb: add v1.3.0.1 (#30424)
* unconflict clang
* address apparent style issues
given
https://github.com/spack/spack/runs/6248126997?check_suite_focus=true
and its excerpt
```
var/spack/repos/builtin/packages/librsb/package.py:27: [E265] block comment should start with '# '
var/spack/repos/builtin/packages/librsb/package.py:52: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:53: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:53: [E501] line too long (89 > 88 characters)
var/spack/repos/builtin/packages/librsb/package.py:54: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:55: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:56: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:57: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:59: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:60: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:62: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:63: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:64: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:66: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:68: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:70: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:71: [E211] whitespace before '('
```
let these changes flow in.
* +asan+native: mark as conflict; thanks @tldahlgren
* +asan conflict grouped with other conflicts
As suggested as good Spack style by @tldahlgren.
- Keep long lists in alphabetical order for easier reading
- Add a placeholder for Exa.TrkX plugin since we're missing a dep on the
Spack side
- Add support for the ONNX plugin since Spack now has an ONNX runtime
package
- Use spack's pybind11 package now that we're given the option to do so
This adds the newest stable version (and removes old development
versions), a few missing dependencies and workarounds for build
failures. Without the environment variables, sysstat will try creating
directories in `/var/log`, and without `--disable-file-attr`, sysstat
will try to change file ownership.
* gdal: changing behavior of configure for +xml2 with 3.0+
* Update var/spack/repos/builtin/packages/gdal/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add checksum for py-more-itertools@8.12.0 and fix python dependency
* Add checksum for py-prettytable@3.2.0
* Package version 8.11.0 is the only version that requires python 3.6+
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add reference to python@3.6 support when 8.11
* Revert "Add reference to python@3.6 support when 8.11"
This reverts commit 0ba0002193.
* Add python@3.7: requirement
* Revert range for python 3.6
* Revert py-more-itertools modifications
Co-authored-by: aandvalenzuela <andrea.valenzuela.ramirez@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This is an amended version of https://github.com/spack/spack/pull/24894 (reverted in https://github.com/spack/spack/pull/29603). https://github.com/spack/spack/pull/24894
broke all instances of `spack external find` (namely when it is invoked without arguments/options)
because it was mandating the presence of a file which most systems would not have.
This allows `spack external find` to proceed if that file is not present and adds tests for this.
- [x] Add a test which confirms that `spack external find` successfully reads a manifest file
if present in the default manifest path
--- Original commit message ---
Adds `spack external read-cray-manifest`, which reads a json file that describes a
set of package DAGs. The parsed results are stored directly in the database. A user
can see these installed specs with `spack find` (like any installed spec). The easiest
way to use them right now as dependencies is to run
`spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described
in the file to Spack's installation DB and will also install described compilers to the
compilers configuration (the expected format of the file, including examples, is described in this PR)
* Database records now may include an "origin" (the command added in this PR
registers the origin as "external-db"). In the future, it is assumed users may want
to be able to treat installs registered with this command differently (e.g. they may
want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec
is concrete
* I don't think the hashes of installed-and-concrete specs should change and this
was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied
(external specs may mention compilers that are not registered, and without this
change they would fail in `normalize` when calling `validate_or_raise`)
* it might be this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do
something like "uninstall all packages added with `spack read-external-db`)
* This is now possible with `spack uninstall --all --origin=external-db` (this will
remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
* ASP-based solver: discard unknown packages from reuse
This is an add-on to #28259 that covers the case of
a single package.py being removed from a repository,
rather than an entire custom repository being removed.
* Add unit test
CTest determines whether to enable tests using the BUILD_TESTING variable.
This should be used by projects to conditionally enable the compilation of tests.
Spack knows which packages have to run tests and can thus automatically define this variable.
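For instance, a CMake-based package could forward Spack's test setting roughly like this (a sketch; the actual change lives in Spack's common CMake support, not in individual recipes):
```python
def cmake_args(self):
    # self.run_tests is True when Spack was asked to run this package's tests
    return [self.define("BUILD_TESTING", self.run_tests)]
```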
I tried to use --overwrite on nvhpc, but nvhpc's install size is 16GB. It seems
better to do os.rename within the same directory than to move the directory to
`/tmp`.
- [x] install --overwrite: use rename instead of tmpdir
- [x] use tempfile
By default `openmpi` needs `rsh` from `openssh`, which is a somewhat
redundant dependency for clusters using slurm. This PR adds a toggle to
allow users to disable the ssh/rsh plm altogether.
This package was not setting FFTW when +mklfft was used with +cuda.
Since both were set to 'True', the default build was not linked to
any FFTW, leading to a runtime error. It seems MKL support was
conflated with alternative CPU acceleration support. This PR does the
following:
- adds the altcpu variant to specify non-GPU/CPU acceleration
- sets a conflict between +altcpu and +cuda
- sets an FFTW implementation
- sets fltk+xft when +gui to get a decent looking GUI interface
- sets tbb dependency only when +altcpu
- adds dependency on ctffind
- adds variant and dependency on motioncor2
- sets defaults for
- qsub template location
- ctffind location
- motioncor2 location
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
fixes #28259
This commit discards specs from unknown namespaces from the
ones that can be "reused" during concretization. Previously
Spack would just error out when encountering them.
1. Add version 2022.04.17 (new numbering scheme) and update mbuild resource.
2. Branch 'master' is now 'main'.
3. Old rev 10.2019.03 needs a patch for python vs python3.
The parent thread in the process stdout redirection logic on Windows
was closing a file that was being read in a child thread, which led to
error-based termination of the reader thread. This updates the
interaction to avoid the error.
* Add checksum for py-ipywidgets@7.7.0
* Correct py-widgetsnbextension and py-ipython dependencies
* Update widgetsnbextension dependency to 3.6
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Allow the requirement to extend to newer versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Revert ipython dependencies
* Add widgetsnbextension@3.6.0 checksum
Co-authored-by: aandvalenzuela <andrea.valenzuela.ramirez@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* ASP-based solver: allow configuring target selection
This commit adds a new "concretizer:targets" configuration
section, and two options under it.
- "concretizer:targets:granularity" allows switching from
considering only generic targets to consider all possible
microarchitectures.
- "concretizer:targets:host_compatible" instead controls
whether we can concretize for microarchitectures that
are incompatible with the current host.
* Add documentation
* Add unit-tests
* MAINT: Add a debug flag
* MAINT: Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* openmpi: always require pmix for 4:
`~pmix` is not applicable to version 4+, since it always builds a vendored
copy of pmix (currently 3.2.3).
* pmix: relax version requirements
When the version range was specified, newer versions didn't exist.
Also use normalized spack versions rather than artificial .9.9 / .0.0 suffixes.
* openmpi: restrict pmix versions
pmix option isn't available for OpenMPI@1, and according to
https://github.com/open-mpi/ompi/issues/7988 , OpenMPI 4.0.1 will not
build with pmix@3.1.5.
* pmix: add newer versions
* OpenMPI: re-express conflicts/configure logic as conditional variants
This relies partly on `self.enable_or_disable` and its ilk to emit an
empty list when the variant isn't applicable.
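A sketch of that pattern, with assumed version ranges: the variant only exists where it applies, and the generator helpers contribute nothing when the spec does not carry the variant.
```python
# conditional variant: only defined for the versions where ~pmix is meaningful
variant("pmix", default=True, when="@2:3", description="Build PMIx support")

def configure_args(self):
    args = []
    # returns an empty list when "pmix" is not a variant of this spec
    args.extend(self.with_or_without("pmix"))
    return args
```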
* ASP-based solver: always consider version of installed packages
fixes #29201
Explicitly add facts for versions of installed software when
using the --reuse option, so that we can consider versions
that are not declared in package.py
The parser is already committing a crime of querying the database for
specs when it encounters a `/hash`. It's helpful, but unfortunately not
helpful when trying to install a specific spec in an environment by
hash. Therefore, consider the environment first, then the database.
This allows the following:
```console
$ spack -e . concretize
==> Starting concretization
==> Environment concretized in 0.27 seconds.
==> Concretized diffutils
- 7vangk4 diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
- hyb7ehx ^libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
$ spack -e . install /hyb7ehx
==> Installing libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
...
==> libiconv: Successfully installed libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
Fetch: 0.01s. Build: 17.54s. Total: 17.55s.
[+] /tmp/tmp.VpvYApofVm/store/linux-ubuntu20.04-zen2/gcc-10.3.0/libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
```
1. Update for ROCm 4.5 and drop support for earlier ROCm.
2. No longer use mbedtls or gotcha; they are only for old revs.
3. Update version requirements for dyninst and libmonitor.
4. Begin to deprecate old versions.
Fix bug introduced in #30191. `Spec.installed` and `Spec.installed_upstream` should just return
`False` for abstract specs, as they can be called in that context.
- [x] `Spec.installed` returns `False` now instead of asserting that the `Spec`
is concrete.
- [x] `Spec.installed_upstream` returns `False` now instead of asserting that the `Spec`
is concrete.
- [x] `Spec.installed_upstream` no longer caches its result, as install status seems
like a bad thing to cache -- it can easily be invalidated. Calling code should
use transactions if there are performance issues, as in other places in Spack.
- [x] add tests for `Spec.installed` and `Spec.installed_upstream`
This PR moves the `installed` and `installed_upstream` properties from `PackageBase` to `Spec` and is a step towards being able to reuse specs for which we don't have a `package.py` available. It _should_ be sufficient to complete the concretization step and see the spec in the concretized DAG.
To fully reuse a spec without a package.py though we need a way to serialize enough data to reconstruct the results of calls to:
- `Spec.libs`, `Spec.headers` and `Spec.command`
- `Package.setup_dependent_*_environment` and `Package.setup_run_environment`
- [x] Add stub methods to packages with warnings
- [x] Add a missing "root=False" in cmd/fetch.py
- [x] Assert that a spec is concrete before checking installation status
* Add checksum for jupyter-console@6.4.3
* Update py-jupyter-console dependency
* Extend jupyter-client@7.0.0 dependency to newer versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: aandvalenzuela <andrea.valenzuela.ramirez@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pystan: Add new package
* Fix dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add run dependency to py-setuptools
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-httpstan@4.7.2 and py-pysimdjson@3.2.0
* Dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR updates the list of images we build nightly, deprecating
Ubuntu 16.04 and CentOS 8 and adding Ubuntu 20.04, Ubuntu 22.04
and CentOS Stream. It also removes a lot of duplication by generating
the Dockerfiles during the CI workflow and uploading them as artifacts
for later inspection or reuse.
* ipopt: add goxberry as maintainer
This commit adds 'goxberry' (me, Geoff Oxberry) as a maintainer of the
Ipopt Spack package.
* ipopt: use github url instead of coin-or.org url
This commit changes the package URL for Ipopt from one containing
`coin-or.org` to one containing `github.com`. The rationale for
using `github.com` is as follows:
- The COIN-OR webpage now directs users interested in Ipopt source to
GitHub.
- Ipopt used to have a COIN-OR project homepage actually hosted on
coin-or.org using an SVN-Trac web page. A link to this project
homepage no longer appears within the "Projects" section of
COIN-OR's website.
- COIN-OR issued a 2021-12-15 post on the News section of its web site
(see https://www.coin-or.org/news/) that discusses the impact that
lack of financial support has on COIN-OR software maintenance. It
seems reasonable to suspect that the GitHub project is likely to
outlast the COIN-OR web site.
The sha256 hashes for ipopt@:3.12 downloaded from GitHub differ from
the corresponding COIN-OR versions, so these hashes are also updated.
* ipopt 3.14.5: add new version
This commit adds the latest version of Ipopt, 3.14.5, to the Ipopt
Spack package.
* git: add 2.35.2, explicit version(...)
git 2.35.2 fixes CVE-2022-24765 which seems to only affect Windows. But
nonetheless we should maybe set deprecated=True on older versions... The
restructure allows for that.
* deprecate over CVE-2022-24765
In WarpX 22.04, we introduced the openPMD `thetaMode` for fields in
RZ geometry. That means we need to name the fields differently than
the reconstructed Cartesian slice that we default to in plotfiles.
* ncurses: add wide, nowide headers, libs query parameter options
* readline: only link with libncursesw
Needed for python to detect the proper ncurses library (#27369)
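A hedged sketch of how a dependent might use the new query parameters (the `spec["ncurses:wide"]` syntax is assumed from the commit title):
```python
class Readline(AutotoolsPackage):
    depends_on("ncurses")

    @property
    def build_targets(self):
        # ask ncurses for only its wide-character libraries
        ncurses_libs = self.spec["ncurses:wide"].libs
        return ["SHLIB_LIBS={0}".format(ncurses_libs.link_flags)]
```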
Alter the `install_components/install` script to pass the `-gcc $SPACK_CC`,
`-gpp $SPACK_CXX`, and `-g77 $SPACK_F77` flags to `makelocalrc`. This
ensures that nvhpc is configured to use the spack gcc spec, rather than
whatever gcc is found on the path.
Co-authored-by: Mikael Simberg <simberg@cscs.ch>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Fix test_ci_generate_prune_untouched(), which would fail if run when
the latest commit changed the .gitlab-ci.yml. This change mocks the
get_stack_changed() method in that test to disregard the state of
the current spack repo in favor of a mock repo under test control.
* The configure script on Windows requires that CC/CXX be enclosed
in quotes if the paths to those compiler executables contain
spaces (so unlike most instances of Executable, the arguments
need to contain the quotes)
* OpenSSL requires the nasm package on Windows
* Restore parallel build from 075e942 (accidentally reverted in
#27021)
* py-ipympl: Add new package
* Update var/spack/repos/builtin/packages/py-ipympl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipympl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipympl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipympl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Remove trailing whitespaces
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-webargs: Add new package
* Fix python requirement
* Add run dependency to py-packaging
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
gitlab ci: Set resource requests explicitly
This PR sets resource requests for the Kubernetes executor, which should aid in
better workload scheduling in the cluster. The specific values were derived from
profile data taken from several full "from scratch" rebuilds in a separate worker pool.
Co-authored-by: Zack Galbreath <zack.galbreath@kitware.com>
* serialbox: setup the run and dependent build environments
* Update var/spack/repos/builtin/packages/serialbox/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* rocmlibs: relax rocm-cmake version requirements
The rocm-cmake modules tend to be backwards-compatible, to the extent
that most ROCm math libraries were built using rocm-cmake@master
for a long while without anybody noticing. (That was fixed in
97f0c3ccd9f0a40896998a7580150a514ec3bc37.)
Some packages, like comgr, barely use rocm-cmake for anything, and
we can easily set a very minimal version requirement. For most
packages, however, it would be a lot of effort to determine the
minimum rocm-cmake version required for each release. For those
packages, I just turned the exact version requirement into a
minimum version requirement.
Since I was looking through the CMakeLists.txt for a large number of
libraries, I also took note of the cmake_minimum_required and adjusted
the cmake minimum requirements to match.
* Add rocblas build dependency to hipblas
The rocblas library is required both for both building and linking
hipblas.
* Remove rocm-cmake from vtk-m dependency list
The rocm-cmake package provides CMake scripts that facilitate common
build configuration tasks in the ROCm libraries. It is never needed at
link-time. Also, there are no calls to find_package(ROCM) or
include(ROCM.*) in vtk-m, so this dependency will never be used.
- older versions are no longer available for download so mark them
deprecated
- set manual_download
- set url_for_version
- only install the binary that matches the cuda version
In #26630, I assumed "glu" was needed by glew because it included glu.h, but
actually, glew can be used without glu when GLEW_NO_GLU is defined and this
is documented in the announcement of glew-1.6.0:
> https://www.geeks3d.com/20110430/opengl-glew-1-6-0-available/
> * Define GLEW_NO_GLU for no glu dependency
It is therefore the duty of users of glew to decide if they use glu,
and then they need to have a depends_on("glu").
Thus, move the depends_on("glu") which I changed from "gl" in #26630
to vapor, which itself uses glu as well.
For about a decade GCC has had an option `-f[no]-canonical-system-headers`
which basically runs `realpath` on all "system headers", to possibly
reduce the length of paths in diagnostics. [1]
Spack usually installs the "system headers" of GCC in very deeply nested
directories. Calling `realpath` there results in stat calls on every
level, for every header file. On some slow filesystem I have,
`-fno-canonical-system-headers` gives about 5x speedup to compile hello
world in C, meaning that ./configure scripts would be much faster when
using this flag by default.
[1] https://codereview.appspot.com/6495088
Add option to allow using OpenSSL (by default this uses the SSL
implementation that comes with Windows, since that is more likely
to have needed certificates).
* py-awkward: Add new versions
* py-awkward: Update dependencies
* Make setuptools a runtime dependency as well
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Don't rely on NASM's nmake to export install target. Spack
now handles NASM installation; the install tree structure
mimics NASM Windows installer behavior.
* Add dependency on perl
We switched to an optional sphinx-based way of
generating docs, so remove pandoc, which can cause
issues with latex conflicts.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
Bug fixes for package netcdf-cxx4 so that it builds on macOS semi
case-sensitive filesystems; this includes additional changes to build
netcdf-cxx4 consistently with netcdf-fortran.
* netcdf-fortran: remove unused config_flags
* netcdf-fortran: avoid building without the optimization flags
* netcdf-cxx4: do not enforce autoreconf. This was a leftover from the
time when the package was fetched with git, which broke the timestamp
order of the automatically generated Autoconf files.
* netcdf-cxx4: inject PIC flags for C++ when '+pic'
* netcdf-cxx4: inject C/CXXFLAGS via the wrapper
* netcdf-cxx4: fix the underlinking problem for platforms other than darwin
(add netcdf-c libs netcdf-cxx4 ldlibs flags)
* netcdf-cxx4: remove redundant extension of CPPFLAGS
* netcdf-cxx4: only need to use MPI compiler wrapper when building C
(vs both C and C++)
* netcdf-cxx4: remove variant 'static'
This makes it consistent with other packages from the NetCDF
constellation: always build the static libraries and additionally
build the shared ones when '+shared'.
* netcdf-cxx4: do not configure --with/--without-pic.
This makes it consistent with other packages from the NetCDF
constellation: build the shared libraries with the PIC flag and
the static ones without it (the default for Autotools) when
'~pic', and build the static libraries with PIC when '+pic' (to
make them injectable into other shared libraries).
* netcdf-cxx4: run the tests serially
* netcdf-cxx4: build the plugins only when the tests are run
Co-authored-by: Sergey Kosukhin <sergey.kosukhin@mpimet.mpg.de>
gitlab ci: Remove code for relating CDash builds
Relating CDash builds to their dependencies was a seldom used feature. Removing
it will make it easier for us to reorganize our CDash projects & build groups in the
future by eliminating the need to keep track of CDash build ids in our binary mirrors.
* Allow packages to add a 'submodules' property that determines when ad-hoc Git-commit-based versions should initialize submodules
* add support for ad-hoc git-commit-based versions to instantiate submodules if the associated package has a 'submodules' property and it indicates this should happen for the associated spec
* allow Package-level submodule request to influence all explicitly-defined version() in the Package
* skip test on windows which fails because of long paths
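A minimal sketch of a package opting in (the class name and URL are hypothetical):
```python
class MyPkg(CMakePackage):
    git = "https://example.com/mypkg.git"  # hypothetical repository

    # ad-hoc versions such as mypkg@<git-commit-sha> now initialize
    # submodules when this property is set
    submodules = True
```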
* Set CUDA architectures in ArrayFire based on cuda_arch
The cuda_arch flag was not recognized by the ArrayFire package and
therefore any setting was not respected. This commit adds the appropriate
cmake flags if cuda_arch is specified. If no cuda_arch is specified,
then the flag is set to "Auto" which checks the installed compute
architectures on the build system.
* ArrayFire only requires boost headers to build. Update version to 1.75
ArrayFire only requires boost headers at build time. This commit also
updates the version to 1.75 to avoid some errors in Boost Compute
* Disable tests in ArrayFire by default
* Add support for ArrayFire v3.8.1
* Add maintainer for ArrayFire package
* Remove test variant from ArrayFire. Use comprehensions
* Reduce boost requirement in ArrayFire
* Address cuda_arch suggestions
* Add commit hashes to Release versions of ArrayFire
* Fix style issues in ArrayFire package
Ubuntu patched git v2.25.1 with a security fix that also
introduced a breaking change, so v2.25.1 behaves like
v2.35.2 with respect to the use cases in CVE-2022-24765
* llvm7_intel.patch required for intel@19.1.3 too
* apply llvm7_intel.patch for all intel@19.0 and intel@19.1
Co-authored-by: Daryl W. Grunau <dwg@lanl.gov>
Spack added support in #24639 for ad-hoc Git-commit-hash-based
versions: a user can install a package X@hash, where X is a package
that stores its source code in a Git repository, and the hash refers
to a commit in that repository which is not recorded as an explicit
version in the package.py file for X.
A couple issues were found relating to this:
* If an environment defines an alternative package repo (i.e. with
repos.yaml), and spack.yaml contains user Specs with ad-hoc
Git-commit-hash-based versions for packages in that repo,
then as part of retrieving the data needed for version comparisons
it will attempt to retrieve the package before the environment's
configuration is instantiated.
* The bookkeeping information added to compare ad-hoc git versions was
being stripped from Specs during concretization (such that user
Specs which succeeded before concretizing would then fail after)
This addresses the issues:
* The first issue is resolved by deferring access to the associated
Package until the versions are actually compared to one another.
* The second issue is resolved by ensuring that the Git bookkeeping
information is explicitly applied to Specs after they are concretized.
This also:
* Resolves an ambiguity in the mock_git_version_info fixture used to
create a tree of Git commits and provide a list where each index
maps to a known commit.
* Isolates the cache used for Git repositories in tests using the
mock_git_version_info fixture
* Adds a TODO which points out that if the remote Git repository
overwrites tags, that Spack will then fail when using
ad-hoc Git-commit-hash-based versions
This commit updates the `gpg publish` command to work with the mirror
arguments, when trying to push keys to a mirror.
- [x] update the `gpg publish` command
- [x] add test for publishing GPG keys and rebuilding the key index within a mirror
* zstd: bring back libs=shared,static and compression=zlib,lz4,lzma variants
Should make building `gcc+binutils ^zstd libs=static` a bit easier (this
is the case where we don't control the compiler wrappers of gcc because
of bootstrapping, nor of ld because of how gcc invokes the linker).
In a typical call to spack, the OperatingSystem gets instantiated
multiple times. For macOS, each one requires a call to `sw_vers`, which
is done through the Executable helper class. Memoizing
reduces the call count for `spack spec` from three to one.
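The idea, sketched with the stdlib cache rather than Spack's own memoize decorator:
```python
import subprocess
from functools import lru_cache

@lru_cache(maxsize=None)
def macos_version():
    # repeated OperatingSystem instantiations now trigger a single sw_vers call
    out = subprocess.check_output(["sw_vers", "-productVersion"])
    return out.decode().strip()
```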
Currently environments are indexed by build hashes. When looking into this bug I noticed there is a disconnect between environments that are concretized in memory for the first time and environments that are read from a `spack.lock`. The issue is that specs read from a `spack.lock` don't have a full hash, since they are indexed by a build hash which is strictly coarser. They are also marked "final" as they are read from a file, so we can't compute additional hashes.
This bugfix PR makes "first concretization" equivalent to re-reading the specs from a corresponding `spack.lock`, and doing so unveiled a few tests where we were making wrong assumptions and relying on the fact that a `spack.lock` file was not there already.
* Add unit test
* Modify mpich to trigger jobs in pipelines
* Fix two failing unit tests
* Fix another full_hash vs. build_hash mismatch in tests
* Ignore top-level module config; add auto-update
In Spack 0.17 we got module sets (modules:[name]:[prop]), and for
backwards compat modules:[prop] was short for modules:default:[prop].
But this makes it awkward to define default config for the "default"
module set.
Since 0.17 is branched off, we can now deprecate top-level module config
(that is, just ignore it with a warning).
This PR does that, and it implements `spack config update modules` to
make upgrading easy (we should have added that to 0.17 already...)
It also removes references to `dotkit` stuff which was already
deprecated in 0.13 and could have been removed in 0.14.
Prefix inspections are the only exception, since the top-level prefix inspections
are used for `spack load` and `spack env activate`.
Spack currently allows dependencies to be concretized for an
architecture incompatible with the root. This commit adds rules
to make this situation impossible by design.
* Extract the MetaPathFinder and Loaders for packages in their own classes
https://peps.python.org/pep-0451/
Currently, RepoPath and Repo implement the (deprecated) interface of
MetaPathFinder (find_module) and of Loader (load_module). This commit
extracts both of them and places the code in their own classes.
The MetaPathFinder interface is updated to contain both the deprecated
"find_module" (for Python 2.7 support) and the recommended "find_spec".
Update of the Loader interface is deferred at a subsequent commit.
* Move the lines to be prepended inside "RepoLoader"
Also adjust the naming of a few variables
* Remove spack.util.imp, since code is only used in spack.repo
* Remove support for loading Python modules with Python > 3 but < 3.5
* Remove `Repo._create_namespace`
This function was interacting badly with the MetaPathFinder
and causing issues with "normal" imports. Removing the
function makes it possible to do things like:
```python
import spack.pkg.builtin.mpich
cls = spack.pkg.builtin.mpich.Mpich
```
* Remove code needed to trigger the Singleton evaluation
The finder is coded in a way to trigger the Singleton,
so we don't need external code now that we register it
at module level into `sys.meta_path`.
* Add unit tests
OpenMPI includes cuda_runtime.h, which errors with `#error --
unsupported GNU version! gcc versions later than 9 are not supported!`
By inheriting CudaPackage, the proper conflicts between `cuda` and
`gcc`/`clang` are added.
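Sketch of the change: mixing in CudaPackage is enough to inherit the `+cuda` variant, the `cuda_arch` values, and the compiler conflicts.
```python
# CudaPackage contributes the conflicts between CUDA releases and
# too-new host compilers, so such builds are rejected at concretization
class Openmpi(AutotoolsPackage, CudaPackage):
    pass  # existing recipe body unchanged
```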
* mesa, mesa18: Implement the swr variant consistently between mesa and mesa18
* mesa: Bump to 21.3.7
* mesa: Build release by default; tie swr to release builds
* mesa, mesa18: re-enable the llvm variant by default
This reverts the change made in #29360
Some servers require `User-Agent` to be set, and otherwise error with
access denied. One such example is mpich.
To fix this, set `User-Agent: Spackbot/[version]` as a header.
Apparently by convention, it should include the word `bot`.
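The shape of the fix, sketched with plain urllib (the URL and version string are placeholders):
```python
import urllib.request

SPACK_VERSION = "0.18.0.dev0"  # placeholder; Spack substitutes its real version

request = urllib.request.Request(
    "https://example.com/mpich-4.0.1.tar.gz",  # hypothetical download URL
    headers={"User-Agent": "Spackbot/{0}".format(SPACK_VERSION)},
)
response = urllib.request.urlopen(request)
```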
#27021 broke fetching for CVS-based packages because:
- The mirror logic was using URL parsing to extract a path from the
CVS repository location
- #27021 added sanity checks to enforce that strings passed to the
URL parser were actually URLs
This replaces the call to "url_util.parse" with logic that is
customized for CVS. This implies that VCSFetchStrategy should
rename the "url_attr" attribute to something more generic, but
that should be handled separately.
* mpich: add 3.4.3, 4.0, 4.0.1
* mpich: add url_for_version function
For versions 4.0 and up, get tarballs from GitHub. This will help with
CI builds, since the MPICH website denies the urllib user-agent from
downloading release tarballs (see the sketch below).
* mpich: disable cuda support
MPICH is failing to build in CI due to a configuration script bug in
detecting CUDA support. Disable CUDA support by default until we add a
proper variant.
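A hedged sketch of the url_for_version switch from the second bullet (URL patterns assumed, not copied from the recipe):
```python
def url_for_version(self, version):
    # from 4.0 on, fetch from GitHub so CI's urllib user-agent is not denied
    if version >= Version("4.0"):
        fmt = "https://github.com/pmodels/mpich/releases/download/v{0}/mpich-{0}.tar.gz"
    else:
        fmt = "https://www.mpich.org/static/downloads/{0}/mpich-{0}.tar.gz"
    return fmt.format(version)
```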
Allow declaring possible values for variants with an associated condition. If the variant takes one of those values, the condition is imposed as a further constraint.
The idea of this PR is to implement part of the mechanisms needed for modeling [packages with multiple build-systems](https://github.com/spack/seps/pull/3). After this PR the build-system directive can be implemented as:
```python
variant(
'build-system',
default='cmake',
values=(
'autotools',
conditional('cmake', when='@X.Y:')
),
description='...',
)
```
Modifications:
- [x] Allow conditional possible values in variants
- [x] Add a unit-test for the feature
- [x] Add documentation
* tests for rewiring pure specs to spliced specs
* relocate text, binaries, and links
* using llnl.util.symlink for windows compat.
Note: This does not include CLI hooks for relocation.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
From the tempfile module docs:
The default directory is chosen from a platform-dependent list, but the
user of the application can control the directory location by setting
the TMPDIR, TEMP or TMP environment variables
Missing dependencies:
- boost
- lzo
Also, turn off libuv. This does not build properly with libuv so it is
not a dependency. However, configure will look for libuv on the system
and try to use it if found, thus breaking the build.
- Add variants for various common build flags, including support for both versions of the Racket VM environment.
- Prevent `-j` flags to `make`, which has been known to cause problems with Racket builds.
- Prefer the minimal release to improve install times. Bells and whistles carry their own runtime dependencies and should be installed via `raco`. An enterprising user may even create a `RacketPackage` class to make spack aware of `raco` installed packages.
- Match the official version numbering scheme.
- Update to version 1.2.12.
- Mark older versions as deprecated because they have security bugs.
- mfem: Update list of system library directories
- zlib patch: cc patch
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Update "spack external find --all" to also find library-only packages.
A Package can add a ".libraries" attribute, which is a list of regular
expressions to use to find libraries associated with the Package.
"spack external find --all" will search LD_LIBRARY_PATH for potential
libraries.
This PR adds examples for NCCL, RCCL, and hipblas packages. These
examples specify the suffix ".so" for the regular expressions used
to find libraries, so generally are only useful for detecting library
packages on Linux.
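For example, the NCCL recipe might advertise itself roughly like this (the regex is assumed from the description above):
```python
class Nccl(Package):
    # searched against names found in LD_LIBRARY_PATH by
    # "spack external find --all"; the ".so" suffix makes this Linux-only
    libraries = [r"libnccl\.so"]
```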
Do not prompt user with checksum warning when using git commit hashes
as versions. Spack was incorrectly reporting this as a potential
problem: it would display a prompt asking the user whether they
want to proceed if Spack was running in a terminal, or it would
terminate the running instance of Spack if running as part of a
script.
* rocm-cmake: remove ldconfig variant
The packages built for `rocm-cmake~ldconfig` and `rocm-cmake+ldconfig`
are identical, so the variant is unnecessary.
The `ROCM_DISABLE_LDCONFIG` option changes how `rocm_create_package`
generates DEB and RPM packages with CPack. rocm-cmake itself uses
`rocm_create_package`, however, this option has no effect because
Spack does not build the CPack packages. It is also unnecessary on
rocm-cmake, because rocm-cmake does not contain any shared libraries
for ldconfig to configure. The rocm-cmake package is purely composed
of CMake scripts.
* Tighten CMake version dependency
* Improve package description
* Add pl2bat to PATH: Perl on Windows requires the script pl2bat.bat
and Perl to be available to the installer via the PATH. The build
and dependent environments of Perl on Windows have the install
prefix bin added to the PATH.
* symlink with win32file module instead of using Executable to
call mklink (mklink is a shell function and so is not accessible
in this manner).
* py-marshmallow: Add new package
* Modify py-packaging dependency type
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add run dependency to py-packaging
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We've previously generated CI pipelines for PRs, and they rebuild any packages that don't have
a binary in an existing build cache. The assumption we were making was that ALL prior merged
builds would be in cache, but due to the way we do security in the pipeline, they aren't. `develop`
pipelines can take a while to catch up with the latest PRs, and while it does that, there may be a
bunch of redundant builds on PRs that duplicate things being rebuilt on `develop`. Until we can
do better caching of PR builds, we'll have this problem.
We can do better in PRs, though, by *only* rebuilding things in the CI environment that are actually
touched by the PR. This change computes exactly what packages are changed by a PR branch and
*only* includes those packages' dependents and dependencies in the generated pipeline. Other
as-yet unbuilt packages are pruned from CI for the PR.
For `develop` pipelines, we still want to build everything to ensure that the stack works, and to ensure
that `develop` catches up with PRs. This is especially true since we do not do rebuilds for *every* commit
on `develop` -- just the most recent one after each `develop` pipeline finishes. Since we skip around,
we may end up missing builds unless we ensure that we rebuild everything.
We differentiate between `develop` and PR pipelines in `.gitlab-ci.yml` by setting
`SPACK_PRUNE_UNTOUCHED` for PRs. `develop` will still have the old behavior.
- [x] Add `SPACK_PRUNE_UNTOUCHED` variable to `spack ci`
- [x] Refactor `spack pkg` command by moving historical package checking logic to `spack.repo`
- [x] Implement pruning logic in `spack ci` to remove untouched packages
- [x] add tests
* py-pysimdjson: Add new package
* Cleanup
* Fix python requirement
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* libtiff: add missing dependencies
- gl
- glu
- freeglut
* Make X/GL only for Darwin/Mac
* Catch the force_autoreconf property
* add platform=darwin to the autotools deps as well
* Update var/spack/repos/builtin/packages/libtiff/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Fixes the following error on %clang@13.0.1
>> 2413 bison: error while loading shared libraries: libtextstyle.so.0: cannot open shared object file: No such file or directory
>> 2414 make[2]: *** [<builtin>: getdate.c] Error 127
VecCore's new home is on github (hashes have changed even though commit
IDs and presumably contents are the same), and it does not need any configuration
options. See discussion at https://gitlab.cern.ch/VecGeom/VecCore/-/merge_requests/1 .
Updated the flecsi spackage to better support changes in control variables
in post-2.1.0 releases, while also making it clearer which legacy versions
are tagged releases and which are rolling-ish development branches.
* py-reportlab: add missing dependency on freetype
* Add missing dependencies
* Update var/spack/repos/builtin/packages/py-reportlab/package.py
Use pil virtual.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* ExaGO: Handling of cuda architectures and amdgpu targets changed
to effectively handle multiple targets. See #28441.
* Add ROCm support to ExaGO and update ROCm support in HiOp
* ExaGO+rocm requires HiOp+rocm
* Newer versions of CMake may set HIP_CLANG_INCLUDE_PATH incorrectly:
add comments to the ExaGO/HiOp packages explaining how to address
this problem if it occurs.
* cmake: use CMAKE_INSTALL_RPATH_USE_LINK_PATH
Spack has a heuristic to add rpaths for packages it knows are required,
but it's really a heuristic, and it does not work when the dependencies
put their libraries in a different folder than `<prefix>/lib{64,}`.
CMake patches binaries after install with the "install rpaths", which by
default are provided by Spack and its heuristic through
`CMAKE_INSTALL_RPATH`.
CMake however knows better what libraries are effectively being linked
to, and has an option to include those in the install rpath too, through
`CMAKE_INSTALL_RPATH_USE_LINK_PATH`.
These two CMake options are complementary, repeated rpaths seem to be
filtered, and the "use link path" paths are appended to Spack's
heuristic "install rpath".
So, it seems like a good idea to enable "use link path" by default, so
that:
- `dlopen` by library name uses Spack's heuristic search paths
- linked libraries in non-standard locations within a prefix get an
rpath thanks to CMake.
* docs
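Sketched as the pair of flags Spack would pass (the `rpath_dirs` helper is hypothetical):
```python
def std_cmake_args(pkg):
    return [
        # Spack's heuristic rpaths, as before (rpath_dirs is hypothetical)
        "-DCMAKE_INSTALL_RPATH={0}".format(";".join(pkg.rpath_dirs)),
        # let CMake append the directories of libraries actually linked
        "-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON",
    ]
```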
- Use define/define_from_variant
- Remove unused "fortran_flags"
- Fix CUDA architectures when using multiple (needs semicolon not comma
separators)
- Add `when=` variant restrictions to simplify logic
Add output of build- and install-time tests to info command
Enable dependencies, variants, and versions by default (i.e., provide --no*
options); add gcc to test_info_fields to increase coverage for c_names->v_names.
* New package: spiner
* Update dependencies for spiner package
* Update var/spack/repos/builtin/packages/spiner/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* Update var/spack/repos/builtin/packages/spiner/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* Remove versions that can't be installed and use ports-of-call@1.1.0
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* py-torch: fix build with fujitsu-ssl2
* fix to use fujitsu-ssl2 in py-torch v1.5.0 to v1.11.0
* fix to use fujitsu-ssl2 in py-torch v1.2.0 to v1.11.0
* Delete fj-ssl2.patch
* renamed the patches
* Rename fj-ssl2.1.5.patch to fj-ssl2_1.5.patch
* Delete fj-ssl2_1.5.patch
We shouldn't be using "remove_linked_tree" to remove the lock file,
since that function expects to receive a directory path as an
argument.
Also, as a further measure to avoid regression, this commit restores
the "ignore_errors=True" argument on linux and adds a unit test
checking that "remove_linked_tree" doesn't change file permissions
as a side effect of a failure to remove.
* Fix py-onnx-runtime recipe
* Add missing dependencies
* Update var/spack/repos/builtin/packages/py-cerberus/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Better fix for py-onnx-runtime
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* omegah: v10.1.0
this version is from the SCOREC fork of Omega_h
* prefix version with scorec
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Reduces the number of stat calls to a bare minimum:
- Single pass over src prefixes
- Handle projection clashes in memory
Symlinked directories in the src prefixes are now conditionally
transformed into directories with symlinks in the dst dir. Notably
`intel-mkl`, `cuda` and `qt` have top-level symlinked directories that
previously resulted in empty directories in the view. We now avoid
cycles and possible exponential blowup by only expanding symlinks that:
- point to dirs deeper in the folder structure;
- are a fixed depth of 2.
* py-cffi: add compiler flags to fix build with clang
For %clang@13.0.1, this avoids the
```
clang-13: warning: optimization flag '-ffat-lto-objects' is not supported [-Wignored-optimization-argument]
```
warning being turned into an error, and fixes this link error:
```
build/temp.linux-x86_64-3.10/c/_cffi_backend.o: file not recognized: file format not recognized
```
* style
Currently `old_root` is computed by reading the symlink at `self.root`.
We should be more defensive in removing it by checking that it is in the
same directory as the new root. Otherwise, in the worst case, when
someone runs `spack env create --with-view=./view -d .` and `view`
already exists and is a symlink to `/`, Spack effectively runs `rm -rf /`.
`file` was used to detect Python scripts with shebangs, so that the interpreter could be changed from <python prefix> to <view path>. With this change, we detect shebangs using Python instead, so that `file` is no longer required.
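The detection reduces to reading the first line ourselves, roughly:
```python
def has_python_shebang(path):
    # replaces the call to `file`: inspect the shebang line directly
    with open(path, "rb") as f:
        if f.read(2) != b"#!":
            return False
        return b"python" in f.readline()
```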
The number of commit characters in patch files fetched from GitHub can change,
so we should use `full_index=1` to enforce full commit hashes (and a stable
patch `sha256`).
Similarly, URLs for branches like `master` don't give us stable patch files,
because branches are moving targets. Use specific tags or commits for those.
- [x] update all github patch URLs to use `full_index=1`
- [x] don't use `master` or other branches for patches
- [x] add an audit check and a test for `?full_index=1`
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
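A hypothetical patch directive following both rules (the URL, commit hash, and checksum are placeholders):
```python
patch(
    # pin to a full commit hash, never a branch, and request the full index
    "https://github.com/example/project/commit/"
    "1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b.patch?full_index=1",
    sha256="0" * 64,  # placeholder checksum
)
```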
Known issues reports only 2 issues, among the bugs reported on GitHub.
One of the two is also outdated, since the issue has been solved
with the new concretizer. Thus, this commit removes the section.
* This commit replaces Boost.with_default_variants with the precise
variants that packages depend upon. This is the first batch
of 20 packages with modified boost dependencies.
* Style fixes
* Tested bridger: works for gcc-4.9.3 and gcc-8.3.1
Commit 26ff443 made the Gitlab pipeline fail on develop
(while it was not failing in the original PR) due to errors in the
fetcher. This change preserves the new versions, but will give
us some time to sync our tarball mirror for better reliability.
* vecgeom: fix cuda arch
* vecgeom: change 'options' to 'args'
* vecgeom: add spec to locals
* vecgeom: suppress architecture specializations when cuda
- constrain samtools to version 1.13
- replace lzma dependency with xz
- add missing dependencies for libdeflate and openssl
- explicitly set LD_FLAGS for dependencies in makefile
From the release announcement: "This is a special bugfix release ahead of
schedule to address a memory leak that was happening on certain function calls
when using Cython. The memory leak consisted of a small constant amount of bytes
in certain function calls from Cython code. Although in most cases this was not
very noticeable, it was very impactful for long-running applications and certain
usage patterns. Check bpo-46347 for more information."
When you install Spack from a tarball, it will always show an exact
version for Spack itself, even when you don't download a tagged commit:
```
$ wget -q https://github.com/spack/spack/archive/refs/heads/develop.tar.gz
$ tar -xf develop.tar.gz
$ ./spack-develop/bin/spack --version
0.16.2
```
This PR sets the Spack version to `0.18.0.dev0` on develop, following [PEP440](https://github.com/spack/spack/pull/25267#issuecomment-896340234) as
suggested by Adam Stewart.
```
spack (fix/set-dev-version)$ spack --version
0.18.0.dev0 (git 0.17.1-1526-e270464ae0)
spack (fix/set-dev-version)$ mv .git .git_
spack $ spack --version
0.18.0.dev0
```
- [x] Update the release guide
- [x] Add __version__ to spack's __init__.py
- [x] Use PEP 440 canonical version strings
- [x] Make spack --version output [actual version] (git version)
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* rivet: fix dependency build types
If it isn't a python package, there is no good reason to change the default build type to remove `link`.
* rivet: turn swig into build dependency
* Add tests to ensure google cloud storage urls work as mirrors
This commit adds two tests to track that GCS buckets can work as
mirrors, and can be parsed as valid URLs.
Currently, gs:// format URLs are not correctly parsed.
* Fix URL parsing for GCS buckets
This commit adds GCS bucket URLs as valid URLs.
* lower priority of package-provided urls
This change favors urls found in a scraped page over those provided by
the package from `url_for_version`. In most cases this doesn't matter,
but R specifically returns known bad URLs in some cases, and the
fallback path for a failed fetch uses `fetch_remote_versions` to find a
substitute. This fixes that problem.
fixes #29204
* consider what links actually exist in all cases
Checksum was only actually scraping when called with no versions. It
now always scrapes and then selects URLs from the set of URLs known to
exist whenever possible.
fixes #25831
* bow to the wrath of flake8
* test-fetch urls from package, prefer if successful
* Update lib/spack/spack/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* reword as suggested
* re-enable mypy specific ignore and ignore pyflakes
* remove flake8 ignore from .flake8
* address review comments
* address comments
* add sneaky missing substitute
I missed this one because we call substitute on a URL that doesn't
contain a version component. I'm not sure how that's supposed to work,
but apparently it's required by at least one mock package, so back in it
goes.
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Adds `spack external read-cray-manifest`, which reads a json file that describes a set of package DAGs. The parsed results are stored directly in the database. A user can see these installed specs with `spack find` (like any installed spec). The easiest way to use them right now as dependencies is to run `spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described in the file to Spack's installation DB and will also install described compilers to the compilers configuration (the expected format of the file, including examples, is described in this PR)
* Database records now may include an "origin" (the command added in this PR registers the origin as "external-db"). In the future, it is assumed users may want to be able to treat installs registered with this command differently (e.g. they may want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec is concrete
* I don't think the hashes of installed-and-concrete specs should change and this was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied (external specs may mention compilers that are not registered, and without this change they would fail in `normalize` when calling `validate_or_raise`)
* it might be this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do something like "uninstall all packages added with `spack read-external-db`")
* This is now possible with `spack uninstall --all --origin=external-db` (this will remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* Use same cxx value as root
* Remove pointer syntax from non-pointer type in source
* Run patch function before build
* Use raw string in filter_file and merge edit function with patch
* Escape parentheses
* Use gDirectory from ROOT instead of CurrentDirectory function
This PR removes a few outdated sections from the "Basics" part of the
documentation. It also makes a few topics under the environment section
more prominent by removing an unneeded spack.yaml subsection and
promoting everything under it.
* Make boost composable
Currently Boost enables a few components through variants by default,
which means that if you want to use only what you need and no more, you
have to explicitly disable these variants, leading to concretization
errors whenever a second package explicitly needs those components.
For instance if package A only needs `+component_a` it might depend on
`boost +component_a ~component_b`. And if package B only needs
`+component_b` it might depend on `boost ~component_a +component_b`. If
package C now depends on both A and B, this leads to unsatisfiable
variants and hence a concretization error.
However, if we default to disabling all components, package A can simply
depend on `boost +component_a` and package B on `boost +component_b` and
package C will concretize to depending on `boost +component_a
+component_b`, and whatever you install, you get the bare minimum.
* Fix style
* Added composable boost dependencies for folly
* fixing akantu merge issue
* hpctoolkit boost dependencies already defined
* Fix Styles
* Fixup style once more
* Adding isort fix
* isort one more time
* Fix for package audit issue
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Ryan O'Malley <rd.omalley@comcast.net>
Consolidate Spack's internal filepath logic to a select
few places and refactor to consistent internal usage of
os.path utilities. Creates a prefix, and a series of utilities
in the path utility module that facilitate handling paths
in a platform agnostic manner.
Convert Windows paths to posix paths internally
Prefer posixpath.join instead of os.path.join
Updated util/ directory to account for Windows integration
Co-authored-by: Stephen Crowell <stephen.crowell@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Module template format for windows (#23041)
* Incorporate new search location
* Add external user option
* proper doc string
* Explicit commands in getting started
* raise during chgrp on Win
recover installer changes
Notate admin privileges
Windows phase install hooks
Find external python and install ninja (#23496)
Allow external find python to find windows python and spack install ninja
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Fixup common tests
* Remove requirement for Python 2.6
* Skip new failing test
Windows: Update url util to handle Windows paths (#27959)
* update url util to handle windows paths
* Update tests to handle fixed url handling
* canonicalize path only when the path type matches the host platform
* Skip some url tests on Windows
Co-authored-by: Omar Padron <omar.padron@kitware.com>
Use threading.TIMEOUT_MAX when available (#24246)
This value was introduced in Python 3.2. Specifying a timeout greater than
this value will raise an OverflowError.
Co-authored-by: Lou Lawrence <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
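The guard reduces to clamping, roughly:
```python
import threading

def safe_timeout(requested):
    # threading.TIMEOUT_MAX exists on Python >= 3.2; larger values passed to
    # Lock.acquire raise OverflowError, so clamp when the constant is available
    maximum = getattr(threading, "TIMEOUT_MAX", float(2 ** 31 - 1))
    return min(requested, maximum)
```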
Add compiler hint to the root spec for Windows
Reporters on Windows (#26038)
Reporters use Jinja2 as the templating engine, and Jinja2 indexes
templates by Unix separators, even on Windows, so search using Unix paths
on all systems.
Support patching on win via git (#25871)
Handle GRP on windows
CMake - Windows Bootstrap (#25825)
Remove hardcoded cmake compiler (#26410)
Revert breaking cmake changes
Ensure no autotools on Windows
Perl on Windows (#26612)
Python source build windows (#26313)
Reconfigure sysconf for Windows
Python2.6 compatibility
Fixup new sbang tests for windows
Ruby support (#28287)
Add NASM support (#28319)
Add mock Ninja package for testing
* Style fixes
* Use Python's zipfile, if available
The compression libs are optional in Python. Rely on Python as a
first attempt, then fall back to `unzip`.
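A sketch of the fallback order described above:
```python
import subprocess
import zipfile

def extract_zip(archive, dest):
    # compression codecs are optional in some Python builds; fall back to the
    # unzip executable if the stdlib cannot handle the archive
    try:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)
    except (ImportError, NotImplementedError, RuntimeError):
        subprocess.check_call(["unzip", "-q", archive, "-d", dest])
```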
MSVC's internal CMake and Ninja are now detected by spack external find and added to packages.yaml
Saving progress on packaging zlib for Windows
Fixing the shared CMake flag
* Loading Intel's ifx Fortran compiler into MSVC; if there are multiple
versions of MSVC installed and detected, ifx will only be placed into
the first block written in compilers.yaml. The version number of ifx can
be detected using MSVC's version flag (instead of /QV) by using
ignore_version_errors. This commit also provides support for detection
of Intel compilers in their own compiler block by adding ifx.exe to the
fc/f77_name blocks inside intel.py
* Giving CMake a Fortran compiler argument
* Adding patch file for removing duplicated mangling header for versions 3.9.1 and older; static and shared now successfully building on Windows
* Have netlib-lapack depend on ninja@1.10
Co-authored-by: John R. Cary <cary@txcorp.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Making a default config.yaml for Windows
Small path length for build_stage
Provide more prerequisite details, mention default config.yaml
Killing an unnecessary setvars call
Replacing some lost changes, proofreading, updating windows-supported package list
Co-authored-by: John Parent <john.parent@kitware.com>
* Add 'make-installer' command for Windows
* Add '--bat' arg to env activate, env deactivate and unload commands
* An equivalent script to setup-env on linux: spack_cmd.bat. This script
has a wrapper to evaluate cd, load/unload, env activate/deactivate. (#21734)
* Add spacktivate and config editor (#22049)
* spack_cmd: will find python and spack on its own. It preferentially
tries to use python on your PATH (#22414)
* Ignore Windows python installer if found (#23134)
* Bundle git in windows installer (#23597)
* Add Windows section to Getting Started document
(#23131), (#23295), (#24240)
Co-authored-by: Stephen Crowell <stephen.crowell@kitware.com>
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Co-authored-by: Ben Cowan <benc@txcorp.com>
Update Installer CI
Co-authored-by: John Parent <john.parent@kitware.com>
Made the vcvars batch script location a member variable of the msvc compiler subclass, initialized from the compiler executable path. Added a setup_custom_environment() method to the msvc subclass that sources the vcvars script, dumps the environment, and copies the relevant environment variables to the Spack environment. Added class variables to the Windows OS and MSVC compiler subclasses to enable finding the compiler executables and determining their versions.
* Fixed path and uid issues.
* Added needed import statement; kluged .exe extension.
* Got package to build. Some manual intervention necessary, including sourcing the MSVC setup script and having certain configuration parameters.
* Removed CMake executable suffix hack.
To provide Windows-compatible functionality, spack code should use
llnl.util.symlink instead of os.symlink. On non-Windows platforms
and on Windows where supported, os.symlink will still be used.
Use junctions when symlinks aren't supported on Windows (#22583)
Support islink for junctions (#24182)
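As a rough sketch of the fallback idea (the real llnl.util.symlink implementation differs in details; the mklink fallback here is illustrative):
```python
import os
import subprocess
import sys


def symlink(real_path, link_path):
    """Minimal sketch: prefer os.symlink, fall back to an NTFS junction."""
    if sys.platform != "win32":
        os.symlink(real_path, link_path)
        return
    try:
        # Works on Windows with developer mode or sufficient privileges.
        os.symlink(real_path, link_path)
    except OSError:
        # Fall back to an NTFS junction (directories only); mklink is the
        # stock way to create one without extra dependencies.
        subprocess.check_call(["cmd", "/C", "mklink", "/J", link_path, real_path])
```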
Windows: Update llnl/util/filesystem
* Use '/' as path separator on Windows.
* Recognizing that Windows paths start with '<Letter>:/' instead of '/'
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
os.rename() fails on Windows if the file already exists.
Create getuid utility function (#21736)
On Windows, replace os.getuid with ctypes.windll.shell32.IsUserAnAdmin().
Tests: Use getuid util function
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
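A minimal sketch of the getuid utility described above (the exact return-value convention is an assumption):
```python
import os
import sys


def getuid():
    """On Windows, map admin status to a root/non-root analogue."""
    if sys.platform == "win32":
        import ctypes

        if ctypes.windll.shell32.IsUserAnAdmin() == 0:
            return 1  # not an administrator
        return 0  # administrator, analogous to uid 0
    return os.getuid()
```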
1. Forwarding sys.stdin (e.g. using input_multiprocess_fd)
gives an error on Windows. Skipping for now.
3. subprocess_context needs to serialize for Windows, like it does
for Mac.
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Snapshot of some MSVC infrastructure added during experiments a while ago. Rebasing from spack/develop.
* Added platform and OS definitions for Windows.
* Updated Windows platform file to conform to new archspec use.
* Added Windows as a platform; introduced some debugging code.
* Added type annotations.
* Fixed copyright.
* Removed print statements.
* Ensure `spack arch` returns correctly on Windows (#21428)
* Correctly identify windows as 'windows-Windows10-AMD64'
* python: allow versions with garbage suffix
Ubuntu 22.04 preview python prints its version as 3.10.2+; the trailing +
causes version parsing to fail and breaks detection.
* Add version comment
* match VALID_VERSION regex
* libiconv: compile with pic even when static build
* lmod: require shared lua
It seems to be unable to detect lua-posix when using a static lua:
```
Error: The follow lua module(s) are missing: posix
```
Re-work the checks and comparisons around commit versions: when no
commit version is involved the overhead is now lost in the noise, and when one
is involved the overhead is now constant rather than linear.
Handle 'develop' in the version string. The versions from the HDF5 code were not
matching because 'develop-' is not part of the HDF5 version. Also, the
develop-x.x versions in Spack omit the third (release) number
because the branch spans all of the release versions.
* Update: py-cmake
Add additional dependencies as declared by the `py-cmake` repository.
Note: for either from-source or from-binary builds, this downloads
additional software via the network. We might want to propose upstream
patches to make this work on nodes without an internet connection.
* Add Review Comments + Newest Version
* Add: Ninja
Preferred generator according to outputs and upstream repo logic
* Attempt to use resource() for CMake source
* [py-watchdog] switched to pypi and audited dependencies
* [py-watchdog] added version 2.1.6
* [py-watchdog] updated dependencies for old versions
* [py-watchdog] added when for variant
* [py-watchdog] added some newlines to make flake8 happy
* hsa-rocr-dev, llvm-amdgpu: change dependency libelf to elf
Change the libelf dependency to the virtual elf for two rocm packages.
This allows other packages (hpctoolkit) to combine rocm and dyninst
(with elfutils) while still being able to build rocm with libelf when
needed, eg darwin.
* add comment describing include path for libelf vs elfutils
fixes #29446
The new setup_*_environment functions have been falling back
to calling the old functions and warn the user since #11115.
This commit removes the fallback behavior and any use of:
- setup_environment
- setup_dependent_environment
in the codebase
Change the internal representation of `Spec` to allow for multiple dependencies or
dependents stemming from the same package. This change permits to represent cases
which are frequent in cross compiled environments or to bootstrap compilers.
Modifications:
- [x] Substitute `DependencyMap` with `_EdgeMap`. The main differences are that the
latter does not support direct item assignment and can be modified only through its
API. It also provides a `select_by` method to query items.
- [x] Reworked a few public APIs of `Spec` to get list of dependencies or related edges.
- [x] Added unit tests to prevent regression on #11983 and prove the synthetic construction
of specs with multiple deps from the same package.
Since #22845 went in first, this PR reuses that format and thus it should not change hashes.
The same package may be present multiple times in the list of dependencies with different
associated specs (each with its own hash).
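For illustration, a query in the style the new edge API enables (`edges_to_dependencies` comes from this change; the attribute names here are assumptions):
```python
def build_edges(spec):
    # Sketch: query dependency edges instead of a name-keyed DependencyMap,
    # so the same package can appear on several distinct edges.
    return [e for e in spec.edges_to_dependencies() if "build" in e.deptypes]
```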
* The new version of Wonton requires the new version of Jali
* Wonton: versions after 1.2.10 don't require boost at all
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* environment.py: allow link:run
Some users want minimal views, excluding run-type dependencies, since
those type of dependencies are covered by rpaths and the symlinked
libraries in the view aren't used anyways.
With this change, an environment like this:
```
spack:
specs: ['py-flake8']
view:
default:
root: view
link: run
```
includes python packages and python, but no link type deps of python.
* ECP-SDK/VTK-m: Update ROCm variant
VTK-m: set constraint for when rocm/kokkos are available.
SDK Make ROCmPackage and propagate amdgpu_arch and rocm variant to
VTK-m.
Note: SDK has to check vtk-m@1.7: and :1.6 explicitly in order to have 1.7
be selected by default if +rocm in the SDK.
* ECP-SDK: Enable ROCm + VTK-m constraints
* Adding Panzer as Default
* Set Panzer as non-default
* Updated the conflict for Panzer.
* Updated the conflict for Panzer.
* Resolve the issue with Stratimikos and Thyra
* Fixing stk build issues.
* Fixing stk build issues.
* Adding another conflict for Thyra
* cray-libsci: only be a provider for scalapack with +mpi
If a package explicitly links the scalapack provider we might otherwise end up with different variants of libsci being linked: the explicitly linked one and the one added by the Cray compiler wrappers.
* cp2k: require cray-libsci+openmp with +openmp for consistency
otherwise we might get 2 different libsci linked: one explicitly, the other one via the Cray compiler wrappers, leading at least to segfaults during cleanup
* cp2k: depend on cray-fftw+openmp with +openmp
* hdf5: mark +fortran+shared conflict for older version
This version was only activated unintentionally by silo's conflict
statement, but `@1.8.15+shared+fortran+cxx` errors out in configure:
```
CMake Error at CMakeLists.txt:814 (message):
**** Shared FORTRAN libraries are unsupported ****
```
* silo: refine hdf5 conflicts to avoid building old version
Before this, `silo+hdf5` concretized to 1.10.7 or sometimes 1.8.15. Now
I've verified it works for the following configurations:
```
silo@4.10.2 patches=7b5a1dc,952d3c9
^ hdf5@1.10.7 api=default
silo@4.10.2 patches=7b5a1dc,952d3c9,eb2a3a0
^ hdf5@1.10.8 api=v18
silo@4.10.2 patches=7b5a1dc,952d3c9,eb2a3a0
^ hdf5@1.12.1 api=v110
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.12.1 api=v110
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.10.8 api=default
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.12.1 api=default
```
and verified that the following fail:
```
silo@4.10.2 ^hdf5@1.12.1 api=default
silo@4.11 ^hdf5 api=v18
silo@4.11-bsd ^hdf5@1.13.0 api=v12
silo@4.11-bsd ^hdf5@1.13.0 api=default
```
and have updated the constraints to match. Hdf5 no longer has to be
downgraded to work with Silo.
* silo: fix dependency conflicts
* py-h5py: shorten and add comments to py-h5py hdf5 dependencies
* e4s: remove slightly outdated hdf5 requirement
* e4s: remove excessive hdf5 variant constraints
These I think are holdovers from the old concretizer.
- `hdf5_compat` can be expressed as `+hdf5 ^hdf5@1.8`
- The extra variants on hdf5 shouldn't break conduit
- axom unnecessarily restricts hdf5 version
* conduit: restore hdf5_compat flag
New versions don't try to configure docs targets at all when the
BUILD_DOCS option is turned off. This avoids CMake warnings
when docs dependencies are not found.
Speeds up comparison on `Version` by ~2.5x, e.g.
```python
In [1]: v = spack.version.Version('1.0.0'); w = spack.version.Version('1.0.2')
In [2]: %timeit v < w
1.47 µs ± 5.59 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
535 ns ± 1.75 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
* Bugfix in var/spack/repos/builtin/packages/esmf/package.py
* Bug fixes in var/spack/repos/builtin/packages/esmf/package.py to build ESMF on macOS with clang+gfortran and on cray
* Add maintainer to var/spack/repos/builtin/packages/esmf/package.py
* Fix style errors
* Fix more style errors
* py-jupytext: add version 0.13.6
From da3fcc305d:
markdown-it-py v2.0 implements some internal changes, but won't affect jupytext
* py-jupytext: keep mdit-py version restricted to 1
* py-jupytext: update dependencies
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add HiOp v0.5.4, update magma constraint
* Add v2.6.2rc1 to magma, make hiop depend on it
* Update var/spack/repos/builtin/packages/hiop/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The 'multicore' backend always uses SMP, so reverse
the logic of the `conflict` clause. This resolves an issue
where the '+smp' default caused the 'backend' to switch
away from 'multicore' unintentionally (#29234).
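In directive terms, the reversed clause might look like this (a sketch; the real Charm++ package has more surrounding logic):
```python
from spack.package import *


class Charmpp(Package):
    # The 'multicore' backend always uses SMP, so only ~smp conflicts with
    # it; previously '+smp' conflicted, flipping the backend default.
    conflicts("~smp", when="backend=multicore")
```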
fixes #29203
This PR fixes a subtle bug we have when importing
Spack packages as Python modules that can lead to
multiple module objects being created for the same
package.
It also fixes all the places in unit-tests where
"relying" on the old bug was crucial to have a new
"clean" state of the package class.
This commit reverts the GCS fetch strategy to before commit:
d759612523
The previous commit added some s3 syntax to handle connections, but
added them into the GCS fetch strategy in a way that prevents GCS from
working anymore.
* rocmcc compiler: initial commit based on aocc and clang
Co-authored-by: luker <luke.roskop@hpe.com>
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Recipes that are not actually required for LBANN or DiHydrogen to
build. These should be concretized within the same environment or
installed via PIP using the same Python that installed LBANN.
Removing these will help eliminate build time failures that are
actually associated with Python tools, not LBANN.
The status displayed in the terminal title could be wrong when doing
distributed builds. For instance, doing `spack install glib` in two
different terminals could lead to the current package being reported as
`40/29` due to the way Spack handles retrying locks.
Work around this by keeping track of the package IDs that were already
encountered to avoid counting packages twice.
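A sketch of the de-duplication idea (class and method names are assumptions, not the actual implementation):
```python
class TitleCounter:
    """Count each package id once, even when lock retries revisit it."""

    def __init__(self, pkg_count):
        self.pkg_count = pkg_count
        self.seen_ids = set()

    def next_pkg(self, pkg_id):
        self.seen_ids.add(pkg_id)  # a set ignores repeats from retried locks
        return "{0}/{1}".format(len(self.seen_ids), self.pkg_count)
```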
* HIP: Change mesa18 dep to gl
* Mesa: Conflict with llvm-amdgpu when +llvm and swr
* Add def for suffix
* Disable llvm suffix patch.
* LLVM: Remove version suffix patches
* ECP-SDK: ParaView 5.11: required for CUDA
* Add conflict with ParaView@master
Because of the additional constraints for cuda, ParaView@master may be
selected unintentionally. Prefer older versions of ParaView without cuda
to master with cuda.
* hypre: Add releases 2.21.0 and 2.22.0
* Revert "hypre: Add releases 2.21.0 and 2.22.0"
This reverts commit 8921cdb3ac.
* Address external linkage failures in elfutils 0.185:
https://bugs.gentoo.org/794601
https://sourceware.org/pipermail/elfutils-devel/2021q2/003862.html
Encountered while building within a Spack environment.
* Revert "Address external linkage failures in elfutils 0.185:"
This reverts commit 76b93e4504.
* paraview: The ninja generator has problems with XL and CCE
See https://gitlab.kitware.com/paraview/paraview/-/issues/21223
* paraview: Add variant to allow choice of cmake generator.
This will be necessary until problems with cmake+ninja on XL and
CCE builds can be resolved.
See https://gitlab.kitware.com/paraview/paraview/-/issues/21223
* paraview: ninja generator problems with XL/CCE
By popular preference, abandon the idea of a special variant
and select the generator based on compiler.
* Greg Becker suggested using the dedicated "generator" method to
pass the choice of makefile generator to cmake.
* paraview: The build errors I saw before with paraview%cce + ninja
have not reappeared in subsequent testing, so I'm dropping it from this
PR. If they re-occur I'll report the issue separately to Kitware.
* py-nbclassic: add 0.3.5
* Update var/spack/repos/builtin/packages/py-nbclassic/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix style
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add a new test to catch exit code failure
fixes #29226
This introduces a new unit test that checks the return
code of `spack unit-test` when it is supposed to fail.
This is to prevent bugs like the one introduced in #25601
in which CI didn't catch a missing return statement.
In retrospect, it seems that the shell tests we have right
now all go through `tty.die` or similar code paths which
call `sys.exit(a)` explicitly. This new test instead checks
`spack unit-test`, which relies on the return code from
command invocation in case of errors.
* Add 'develop' version for dmtcp
* Update var/spack/repos/builtin/packages/dmtcp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The `spack external find binutils` command was failing to find my system
binutils because the regex was not matching. The name of the executable
follows the string 'GNU'; I tested with three different installations, so I
changed the regex to look for that. On my CentOS-7 system, the version had
the RPM details, so I set the version regex to capture the first three parts
of the version.
The system compiler on RHEL7 fails to build the latest linux-uuid.
```
util-linux-uuid@2.37.4%gcc@4.8.5 arch=linux-rhel7-haswell
```
results in:
```
libuuid/src/unparse.c:42:73: error: expected ';', ',' or ')' before 'fmt'
static void uuid_fmt(const uuid_t uuid, char *buf, char const *restrict fmt)
```
It looks like it's assuming C99 by default, so there may be a better way
to handle this... but this at least works.
See https://github.com/spack/spack/pull/28468/files#r809156986
If we exit before generating the:
error("Dependencies must have compatible OS's with their dependents").
...
facts, we'll output a problem that is effectively
different from the one solved by clingo.
* cmd/checksum: prefer url matching url_from_version
This is a minimal change toward getting the right archive from places
like github. The heuristic is:
* if an archive url exists, take its version
* generate a url from the package with pkg.url_from_version
* if they match
* stop considering other URLs for this version
* otherwise, continue replacing the url for the version
I doubt this will always work, but it should address a variety of
versions of this bug. A good test right now is `spack checksum gh`,
which checksums macos binaries without this, and the correct source
packages with it.
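Roughly, the heuristic reduces to something like this sketch (`url_from_version` is the helper named above; the exact name and signature are assumptions):
```python
def choose_url(pkg, version, existing_url):
    """Sketch of the heuristic: keep a matching archive url, else replace."""
    generated = pkg.url_from_version(version)
    if existing_url is not None and existing_url == generated:
        return existing_url  # the urls match: stop considering other URLs
    return generated  # otherwise, continue replacing the url for this version
```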
fixes #15985
related to #14129
related to #13940
* add heuristics to help create as well
Since create can't rely on an existing package, this commit adds another
pair of heuristics:
1. if the current version is a specifically listed archive, don't
replace it
2. if the current url matches the result of applying
`spack.url.substitute_version(a, ver)` for any a in archive_urls,
prefer it and don't replace it
fixes #13940
* clean up style and a lingering debug import
* ok flake8, you got me
* document reference_package argument
* Update lib/spack/spack/util/web.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* try to appease sphinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We can see what is in the bootstrap store with `spack find -b` and clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.
We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.
This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python.
- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
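For reference, a `nullcontext` of this shape (a sketch; the consolidated version in `llnl.util.lang` may differ in details):
```python
from contextlib import contextmanager


@contextmanager
def nullcontext(*args, **kwargs):
    # No-op context manager; deletable once a new enough Python provides
    # contextlib.nullcontext (3.7+).
    yield
```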
* py-imageio: add 2.16.0
* Update var/spack/repos/builtin/packages/py-imageio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Some "concrete" versions on the command line, e.g. `qt@5` are really
meant to satisfy some actual concrete version from a package. We should
only assume the user is introducing a new, unknown version on the CLI
if we, well, don't know of any version that satisfies the user's
request. So, if we know about `5.11.1` and `5.11.3` and they ask for
`5.11.2`, we'd ask the solver to consider `5.11.2` as a solution. If
they just ask for `5`, though, `5.11.1` or `5.11.3` are fine solutions,
as they satisfy `@5`, so use them.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
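The rule reduces to a check like this sketch (helper name is an assumption):
```python
def is_new_version(requested, known_versions):
    # Only treat the CLI version as new/unknown if no known version
    # satisfies it, e.g. known 5.11.1 already satisfies a request for @5.
    return not any(v.satisfies(requested) for v in known_versions)
```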
* geant4-data: use build+run-only depends
* geant4: point to dependent datadir
This is "used" in the configure step to set up the Geant4Config.cmake
file's persistent pointers to the data directory, but the dependency
is still listed as "run" -- though I'm not sure this is the right behavior
since the geant4 installation really does change as a function of the
data directory, and the installation is incomplete/erroneous
without using one.
* Style
* trilinos: disable dl on macOS
* py-sphinx-argparse: add explicit poetry dependency
* libzmq: fix libbsd dependency
libbsd is *always* required when +libbsd (introduced in #28503). #20893
had previously removed the macOS dependency because libbsd wasn't always
enabled. Libbsd support is only available after 4.3.2, so change it to a
conflict rather than bumping the dependency.
* hdf5: work around GCC11.2 monterey fortran bug
* go-bootstrap: mark conflict for monterey
* py-tensorflow: add versions 2.5.0 and 2.6.0
- add version 2.5.0
- add version 2.6.0
- add patches for newer protobuf
- set constraints
* Remove import os. left over from testing
* Remove unused patch file
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-clang dependency
* Adjust py-clang constraint
* Build tensorflow with tensorboard
- tensorflow
- added 2.6.1 and 2.6.2 versions
- tensorboard
- have bazel use number of jobs set by spack
- add versions and constraints
- new package: py-tensorboard-data-server
- use wheel for py-tensorboard-plugin-wit
This package can not build with newer versions of bazel that are
needed for newer versions of py-tensorboard.
* Update var/spack/repos/builtin/packages/py-clang/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Remove empty line at end of file
* Fix import sorting
* Adjust python dependencies on py-clang
* Add version 2.7.0 of py-tensorflow and py-tensorboard
* Adjust bazel constraints
* bazel-4 support begins with py-tensorflow-2.7.0
* Adjust dependencies
* Loosen cuda constraint on versions > 2.5
Tensorflow-2.5 and above can use cuda up to version 11.4.
* Add constraints to patch
The 0008-Fix-protobuf-errors-when-using-system-protobuf.patch patch
should only apply to versions 2.5 and above.
* Adjust constraints
- versions 2.4 and below need protobuf-3.12 and below
- versions 2.4 and above can use up to cuda-11.4
- versions 2.2 and below can not use cudnn-8
- the null_linker_bin patch should only be applied to versions 2.5 and
above.
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix py-grpcio dependency for version 2.7
Also, make sure py-h5py mpi specs are consistent.
* Add llvm as run dependency.
* Fix python spec for py-tensorboard
* Fix py-google-auth spec for py-tensorboard
* Do not override the pip spec for tensorboard-plugin-wit
* Converted py-tensorboard-plugin-wit to wheel only package
* Fix bazel dependency spec in tensorflow
* Adjust pip masks
- allow tensorboard to be specified in pip constraints
- mask tensorflow-estimator
* Remove blank line at end of file
* Adjust pip constraints in setup.py
Also, adjust constraint on a patch that is fixed in 2.7
* Fix flake8 error
Adjust formatting for consistency.
* Get bazel dep right
* Fix old cudnn dependency, caught in audit test
* Adjust the regex to ensure proper line is changed
* Add py-libclang package
- Stripped the py-clang package down to just version 5
- added comments to indicate the purpose of py-clang and that
py-libclang should be preferred
- set dependencies accordingly in py-tensorflow
* Remove cap on py-h5py dependency for v2.7
* Add TODO entries for tensorflow-io-gcs-filesystem
* Edit some comments
* Add phases and select python in PATH for tensorboard-data-server
* py-libclang
- remove py-wheel dependency
- remove raw string notation in filter_file
* py-tensorboard-data-server
- remove py-wheel dep
- remove py-pip dep
- use python from package class
* py-tensorboard-plugin-wit
- switch to PythonPackage
- add version 1.8.1
- remove unneeded code
* Add comment as to why a wheel is needed for tensorboard-plugin-wit
* remove which pip from tensorboard-data-server
* Fix dependency specs in tensorboard
* tweak dependencies for tensorflow
* fix python constraint
* Use llvm libs property
* py-tensorboard-data-server
- merge build into install
- use std_pip_args
* remove py-clang dependency
* remove my edits to py-tensorboard-plugin-wit
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
See https://github.com/spack/spack/issues/25353#issuecomment-1041868116
This commit changes the default behavior of
```
$ spack external find
```
from searching all the possible packages Spack knows about to
search only for the ones tagged as being a "build-tool".
It also introduces a `--all` option to restore the old behavior.
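A sketch of the new default filtering (the "build-tools" tag name and helper are assumptions):
```python
def packages_to_search(all_packages, search_all=False):
    # By default only packages tagged as build tools are searched;
    # --all restores the old search-everything behavior.
    if search_all:
        return list(all_packages)
    return [p for p in all_packages if "build-tools" in getattr(p, "tags", [])]
```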
Prefer `sw_vers` to `platform.mac_ver`. In an anaconda3 installation, for example, the latter reports 10.16 on Monterey -- I think this is affected by how and where the python instance was built.
Use MACOSX_DEPLOYMENT_TARGET if present to override the operating system choice.
It will be useful for metrics gathering and possibly debugging to
have this environment variable available in the runner pods that
do the actual rebuilds.
Since Spack does not install external packages, this commit skips them by
default when running stand-alone tests. The assumption is that such packages
have likely undergone an acceptance test process.
However, the tests can be run against installed externals using
```
% spack test run --externals ...
```
fixes #28260
Since we iterate over different variants from many packages, the variant
values may have types which are not comparable, which causes errors
at runtime. This is not a real issue though, since we don't need the facts
to be ordered. Thus, to avoid needless sorting, the sorted function has
been removed, and a comment has been added to tip off any developer who
might need to inspect these clauses for debugging to add back sorting
on the first two items only.
It's kind of difficult to add a test for this, since the error depends on
whether Python's sorting algorithm ever needs to compare the third
value of a tuple being ordered.
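A small demonstration of the failure mode and the suggested debugging workaround:
```python
# Tuples are compared item by item, and the third items may have
# incomparable types, so sorting the full clauses can raise.
try:
    sorted([("node", "pkg", True), ("node", "pkg", "mpich")])
except TypeError as err:
    print(err)  # '<' not supported between instances of 'str' and 'bool'

# Debugging tip from the comment described above: sort on the first two
# items only, which are always comparable strings.
sorted([("node", "pkg", True), ("node", "pkg", "mpich")], key=lambda t: t[:2])
```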
* extensions: allow multiple "extends" directives
This will allow multiple extends directives in a package as long as only one of
them is selected as a dependency in the concrete spec.
* document the option to have multiple extends
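A hypothetical package using the new capability (names are illustrative):
```python
from spack.package import *


class Foo(Package):
    """Two extends directives; only one may be selected as a dependency
    in the concrete spec."""

    variant("python", default=True, description="Build Python bindings")
    variant("lua", default=False, description="Build Lua bindings")

    extends("python", when="+python")
    extends("lua", when="+lua")
```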
Reuse previously was a very invasive change that required parameters to be added to all
the methods that called `concretize()` on a `Spec` object. With the addition of
concretizer configuration, we can use the config system to simplify this argument
passing and keep the code cleaner.
We decided that concretizer config options should be read at `Solver` instantiation
time, and if config changes between instantiation of a particular solver and
`solve()` invocation, the `Solver` should use the settings from `__init__()`.
- [x] remove `reuse` keyword argument from most concretize functions
- [x] refactor usages to use `spack.config.override("concretizer:reuse", True)`
- [x] rework argument passing in `Solver` so that parameters are set from config
at instantiation time
`--reuse` was previously handled individually by each command that
needed it. We are growing more concretization options, and they'll
need their own section for commands that support them.
Now there are two concretization options:
* `--reuse`: Attempt to reuse packages from installs and buildcaches.
* `--fresh`: Opposite of reuse -- traditional spack install.
To handle these, this PR adds a `ConfigSetAction` for `argparse`, so
that you can write argparse code like this:
```
subgroup.add_argument(
'--reuse', action=ConfigSetAction, dest="concretizer:reuse",
const=True, default=None,
help='reuse installed dependencies/buildcaches when possible'
)
```
With this, you don't need to add logic to pull the argument out and
handle it; the `ConfigSetAction` just does it for you. This can probably
be used to clean up some other commands later, as well.
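A sketch of what such an action can look like (the real implementation in `spack.cmd.common.arguments` differs in details):
```python
import argparse

import spack.config


class ConfigSetAction(argparse.Action):
    """Set a Spack config value when the flag is passed on the CLI."""

    def __init__(self, *args, **kwargs):
        kwargs.setdefault("nargs", 0)  # the option is a flag, takes no value
        super(ConfigSetAction, self).__init__(*args, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        # dest holds a config path like "concretizer:reuse"; set it directly.
        spack.config.set(self.dest, self.const, scope="command_line")
```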
Code that was previously passing `reuse=True` around everywhere has
been refactored to use config, and config is set from the CLI using
a new `add_concretizer_args()` function in `spack.cmd.common.arguments`.
- [x] Add `ConfigSetAction` to simplify concretizer config on the CLI
- [x] Refactor code so that it does not pass `reuse=True` to every function.
- [x] Refactor commands to use `add_concretizer_args()` and to pass
concretizer config using the config system.
Config scopes were different for `config` and `mutable_config`,
and `mutable_config` did not have a command line scope.
- [x] Fix by consolidating the creation logic for the two fixtures.
The concretizer is going to grow to have many more configuration options,
and we really need some structured config for that.
* We have the `config:concretizer` option that chooses the solver,
but extending that is awkward (we'd need to replace a string with
a `dict`) and the solver choice will be deprecated eventually.
* We have the `concretization` option in environments, but it's
not a top-level config section -- it's just for environments,
and it also only admits a string right now.
To avoid overlapping with either of these and to allow the most
extensibility in the future, this adds a new `concretizer` config
section that can be used in and outside of environments. There
is only one option right now: `reuse`. This can expand to include
other options later.
Likely, we will soon deprecate `config:concretizer` and warn when
the user doesn't use `clingo`, and we will eventually (sometime later)
move the `together` / `separately` options from `concretization` into
the top-level `concretizer` section.
This commit just adds the new section and schema. Fully wiring it
up is TBD.
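Once wired up, the section would be read through the config system like any other entry, e.g. (a sketch):
```python
import spack.config

# The new top-level section works in and outside of environments.
reuse = spack.config.get("concretizer:reuse", default=False)
```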
The solver has a lot of configuration associated with it. Rather
than adding arguments to everything, we should encapsulate that
in a class. This is the start of that work; it replaces `solve()`
and its kwargs with a class and properties.
* Add 'stable' to the list of infinity version names.
Rename libunwind 1.5-head to 1.5-stable.
* Add stable to the infinite version list in packaging_guide.rst.
* py-etelemetry: add 0.3.0
* Update var/spack/repos/builtin/packages/py-etelemetry/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
No version of py-nbconvert@5: can be concretized due to conflicting versions
of flit-core that are required. This issue could be solved by separate
concretization of build deps.
* archspec: remove pyproject.toml to workaround PEP517
If pyproject.toml is in the folder, that is preferred to the
setup.py packaged by poetry itself. Adding a dependency on
poetry for deploying a pure Python package seems wasteful,
since the package to be deployed just needs to be copied in
place, so we don't want to build rust for that.
* archspec: patch pyproject.toml to comply to PEP517
See https://python-poetry.org/docs/pyproject/#poetry-and-pep-517
* Fix style issues
The new HDF5 version 1.12 API causes compiler errors due to modified function prototypes. Note that version 1.11 is the development version of HDF5 1.12.
* py-numba: add 0.55.1
* Remove comment
* Pin down py-llvmlite version for older py-numba releases
* Remove py-llvmlite deps for releases not in spack
* Set upper bounds for python and py-numpy
* Add stricter upper bound to py-numpy for releases <=0.47
Setting Spack's `$prefix` to `$DESTDIR` and not to `$PREFIX` installs the
package in `$prefix/usr/local` and not in `$prefix`; thus, when it is
loaded, the executable `direnv` is not "seen" by the environment.
* Added support to LBANN, Hydrogen, DiHydrogen, and Aluminum to capture
a gcc-toolchain cxxflags argument and pass it to a CMAKE_CUDA_FLAG
argument when set. This helps deal with compiling with clang on
systems with old base gcc installations.
* Added a dependency on py-scipy when enabling tests on LBANN.
* Updated the C++ standard for Hydrogen to C++17.
* Added a new variant +apps to enable (or disable) python packages that
are used by applications in the LBANN repo, but are not strictly
required for building and using LBANN.
* Added a run time dependency for both py-pytest and py-scipy so that
they are activated in any environment.
* Added support for building LBANN, Hydrogen, and DiHydrogen with the
IBM ESSL BLAS library. This requires explicit identification of
additional LAPACK libraries, since ESSL does not implement LAPACK, but
is found by CMake.
* Fixed a bug in the LBANN dependency on OpenCV for Power architectures.
The +powerpc variant is only required for GCC toolchains and causes
Clang to break. Switched to only enabling when using %gcc on power.
- Installation often hangs building the documentation. This happens when
doxygen and latex are found. To avoid the issue, comment-out that part
of the code until an explicit cmake variable to disable documentation
generation is available.
* sundials: fix smoke tests
* sundials: add new version
* use cmake+make instead of make for tests, fix style
* use cmake_bin workaround from https://github.com/spack/spack/pull/28622
Note that the SDK is not the same as the system version: using
apple-clang@13 is a better match than `os=monterey` since this actually
fails on bigsur as well, as long as xcode 13 is being used.
* core: Make platform environment an instance not class method
In preparation for accessing data constructed in __init__.
* macos: set consistent macosx deployment target
This should silence numerous warnings from mixed gcc/macos toolchains.
* perl: prevent too-new deployment target version
```
*** Unexpected MACOSX_DEPLOYMENT_TARGET=11
***
*** Please either set it to a valid macOS version number (e.g., 10.15) or to empty.
```
* Stylin'
* Add deployment target overrides to failing autoconf packages
* Move configure workaround to base autoconf package
This reverts commit 3c119eaf8b4fb37c943d503beacf5ad2aa513d4c.
* Stylin'
* macos: add utility functions for SDK
These aren't yet used but should probably be added to spack debug
report.
* Remove node_target_satisfies/3 in favor of target_satisfies/2
When emitting input facts we don't need to couple target with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Remove compiler_version_satisfies/4 in favor of compiler_version_satisfies/3
When emitting input facts we don't need to couple compilers with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Introduce heuristic in the ASP-program
With heuristic we can drive clingo to make better
initial guesses, which lead to fewer choices and
conflicts in the overall solve
This improves the stand-alone tests for slate by providing most
of the dependencies to the test framework and enabling stand-alone
tests on all versions except the oldest.
* AMReX: +tiny_profile
The tiny profiler options in AMReX are off by default but needed
by WarpX. Adds a new variant to control them.
* Add Erik Palmer as Co-Maintainer
... so he receives pings on updates of the package for review.
The version of the ONNX submodule was updated between the PyTorch
1.9 and 1.10 releases, which fixed builds with newer protobuf but
broke builds with older protobuf.
This also adds minimum version requirements for numpy/typing-extensions
(which were not present before).
* gcc: revise patch range on darwin
* gcc: add conflict to work around bootstrap failure
closes #23296. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100340.
```
Comparing stages 2 and 3
Bootstrap comparison failure!
gcc/tree-ssa-operands.o differs
gcc/tree-ssanames.o differs
gcc/ipa-inline.o differs
gcc/tree-ssa-pre.o differs
gcc/gimple-loop-interchange.o differs
...
```
639 total differences.
* gcc: bump conflict up to correct later version
* Fix reindex with uninstalled deps
When a prefix of a dep is removed, and the db is reindexed, it is added
through the dependent, but until now it incorrectly listed the spec as
'installed'.
There was also some questionable behavior in the db when the same spec
was added multiple times, it would always be marked installed.
* Always reserve path
* Only add installed spec's prefixes to install prefixes set
* Improve warning, and ensure ensure only ensures
* test: reindex with every file system remnant removed except for the old index; it should give a database with nothing installed, including records with installed==False, external==False, ref_count==0, explicit=True, and these should be removable from the database
* stacks: add regression tests for matrix expansion
* Use constrain semantics to construct spec lists for stacks
* Fix semantics for constraining an anonymous spec. Add tests
Since in Spack we pull binaries out of the `warpx` package, we don't
need `py-cmake` to build `py-warpx`.
Generally, `py-cmake` in `pyproject.toml` is just a means for us to
tell `pip` to make a `cmake` CLI tool available.
* added package gptune with all its dependencies: adding py-autotune, pygmo, py-pyaml, py-autotune, py-gpy, py-lhsmdu, py-hpbandster, pagmo2, py-opentuner; modifying superlu-dist, py-scikit-optimize
* adding gptune package
* minor fix for macos spack test
* update patch for py-scikit-optimize; update test files for gptune
* fixing gptune package style error
* fixing unit tests
* a few changes reviewed in the PR
* improved gptune package.py with a few newly added/improved dependencies
* fixed a few style errors
* minor fix on package name py-pyro4
* fixing more style errors
* Update var/spack/repos/builtin/packages/py-scikit-optimize/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* resolved a few issues in the PR
* fixing file permissions
* a few minor changes
* style correction
* minor correction to jq package file
* Update var/spack/repos/builtin/packages/py-pyro4/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixing a few issues in the PR
* adding py-selectors34 required by py-pyro4
* improved the superlu-dist package
* improved the superlu-dist package
* more changes to gptune and py-selectors34 based on the PR
* Update var/spack/repos/builtin/packages/py-selectors34/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* improved gptune package: 1. addressing comments of tldahlgren in PR 26936; 2. adding variant openmpi
* fixing style issue of gptune
* changing file mode
* improved gptune package: add variant mpispawn which depends on openmpi; add variant superlu and hypre for installing the drivers; modified hypre package file to add a gptune variant
* fixing style error
* corrected pddrive_spawn path in gptune test; enforcing gcc>7
* fixing style error
* setting environment variables when loading gptune
* removing debug print in hypre/package.py
* adding superlu-dist v7.2.0; fixing an issue with CMAKE_INSTALL_LIBDIR
* changing site_packages_dir to python_platlib
* not using python3.9 for py-gpy, which causes failures due to dropped support of tp_print
* more replacement of site_packages_dir
* fixing a few dependencies in gptune; added a gptune version
* adding url for gptune
* minor correction of gptune
* updating versions in butterflypack
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-xonsh] added py-xonsh package
* [py-xonsh] change dependency to python 3.6
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add sticky variants
* Add unit tests for sticky variants
* Add documentation for sticky variants
* Revert "Revert 19736 because conflicts are avoided by clingo by default (#26721)"
This reverts commit 33ef7d57c1.
* Add stickiness to "allow-unsupported-compiler"
- To retrieve the correct spack version we need to get it from the git
repo.
- Recommend installing the package with root for production
- Add Tomas as maintainer to sarus' spack package
- Add the option to disable unit tests in latest versions
Fixes the following build failure when building with gcc 11:
```
478 ../../../../CPP/7zip/Archive/Wim/WimHandler.cpp: In member function 'virtual LONG NArchive::NWim::CHandler::GetArchiveProperty(PROPID, PROPVARIANT*)':
>> 479 ../../../../CPP/7zip/Archive/Wim/WimHandler.cpp:308:11: error: use of an operand of type 'bool' in 'operator++' is forbidden in C++17
480 308 | numMethods++;
481 | ^~~~~~~~~~
>> 482 ../../../../CPP/7zip/Archive/Wim/WimHandler.cpp:318:9: error: use of an operand of type 'bool' in 'operator++' is forbidden in C++17
483 318 | numMethods++;
484 | ^~~~~~~~~~
```
* opencv: add new version, variant, and patch
- added version 4.5.4
- added tesseract variant
- added patch to not add system paths
* Add leptonica depends and contrib conflicts
* Add dependencies for 1394 support
- new package: libraw1394
- add sdl dependency to libdc1394
- add conflict for openjpeg and jasper
* Adjust dependencies and conflicts for opencv modules
* rewrite of opencv
- all prebuilt apps are now variants and can be installed
- core is no longer a variant. It was always built anyway so it was not
really a variant.
- contrib is no longer a variant. All of the contrib modules are now
available as variants.
- components that can not be built with Spack are no longer variants.
They are set to 'off' to prevent pulling from system.
- handle the case where a module and a component have the same name
- use `with when` framework
- adjust dependencies and conflicts
- new package: libraw1394
- have libdc1394 depend on libraw1394
- patch to find clp
- patch to find onnx
- patch for cvv to find Qt
- format with black
* Incorporate recommended changes
- fix variants and dependencies on packages that depend on opencv
- remove opencv-3.2 and patches
- add some new patches to handle different versions
- cntk needs further work
- the openvslam package was marked deprecated as it is no longer an
active project and the repository has no code
* Remove gmake dependency.
* Remove sdl support
SDL is only used in an example case, but the examples are not built.
* remove openvslam
* Remove opencv+flann variant from 3dtk
* Back out cfitsio constraint from py-astropy
* remove opencv+flann variant from dlib
* remove boost constraint from 3dtk
* Remove non-opencv related bohrium changes
* Adjustments for cntk
- protobuf constraint at version 3.10
- need specific variants for opencv
- improve patch
* Deprecate CNTK package
* variant tweaks
- added appropriate conflicts for cublas
- made cuda/cudev relationship explicit
- moved openx to pending components as it needs an openvx package
* fix isort style error
* Use date version from kaldi rather than commit
* Revert changes from a bad rebase
* Add +flann to 3dtk and dlib
* Use compression support with libtiff
* remove `+datasets` from opencv dependency
The py-torchgeo package does not need opencv+datasets.
* fix typo
zip --> zlib
* espnet first build with depends
* fixed flake8
* updated to lastest version and removed python dependency
* changed to pypi and version 2.17.2
* [py-kaldiio] depends on py-pytest-runner
* [py-kaldiio] updated copyright
Co-authored-by: Sid Pendelberry <sid@rit.edu>
* prmon: make sure integration tests do not run in parallel
Some integration tests fail if not run on an otherwise idle machine.
* prmon: run unittests based on googletest
* prmon: fix checksums
* superlu-dist: use CMakePackage helper functions
* Fix #28609
It's OK to have CUDA in the dependency tree as long as it's not being
used for superlu-cuda.
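For illustration, the CMakePackage helper style looks like this (option and variant names here are illustrative, not superlu-dist's actual ones):
```python
from spack.package import *


class SuperluDist(CMakePackage):
    def cmake_args(self):
        # define/define_from_variant replace hand-built -D... strings.
        return [
            self.define("XSDK_ENABLE_Fortran", True),
            self.define_from_variant("TPL_ENABLE_CUDALIB", "cuda"),
        ]
```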
* Update prmon package to latest versions
Add recent versions of the prmon package
Add the spdlog dependency for versions >=3
Add package developers as additional maintainers
Update name of development branch to 'main'
* Correct checksum for v3.0.1
at-spi2-core automatically selects dbus-broker and enables systemd if it finds dbus-broker-launch, which some systems might have even without systemd being part of the actual spack environment. This is not ideal for a spack package.
ucx has the configure option --[enable|disable]-backtrace-detail.
This option is not explicitly set by spack, causing problems on my system because
./configure does not find the bfd.h header file / libbfd.so library.
Added a variant and dependencies (binutils). Disabled by default.
* trilinos: version 12 requires cxxstd=11
* trilinos: use cmake version 3.21 or older when trilinos version 12
* conflict cxxstd=17 and cmake@3.2.[01]
* trilinos: version 12 requires cxxstd=11.
* Trilinos_CXX11_FLAGS is set to ' ' to avoid injecting the C++11 flag.
* set Trilinos_CXX11_FLAGS only for version 12 or older.
* trilinos: update dependencies
Use the tribits deps to clarify some dependencies, and group some together
using `with` statements, eliminating some transitive conflict duplication.
* trilinos: Restrict cuda incompatibility
* e4s: vastly reduce number of packages in trilinos-cuda build
Not clear who the customers of cuda-enabled trilinos are, or what options
they need, or which sets of options conflict...
* e4s: remove ~wrapper from trilinos+cuda
* VTK-m: Make vtk-m consistent with ROCmPackage
* VTKm: Add kokkos variant
Specifying +kokkos will enable kokkos backend.
Specifying +kokkos with +rocm will require a kokkos with a ROCm backend.
Specifying +cuda enables VTK-m native CUDA backend. VTK-m native cuda backend
is not compatible with the kokkos +cuda backend.
* VTK-m: Add cuda_native variant
Required to allow specifying a vtk-m spec that selects a
cuda_arch and predictably propagate that to the underlying kokkos
dependency.
This also makes explicit selecting kokkos with a cuda backend or using
the VTK-m cuda backend.
* Mesa(18): Use libllvm virtual package
* Mesa patch configuration
Patch Mesa to define LLVM_VERSION_SUFFIX if llvm is pre-release
* Patch llvm-config to define LLVM_VERSION_SUFFIX
* Add a new version to track development
The released versions do not properly install via cmake which leads to
errors when linking against the library. These upstream problems have
been addressed on the glm development branch.
* Move git to class level and remove redundant depends
* vecgeom: require exact version of veccore
Fixes configure error from downstream package:
```
CMake Error at /rnsdhpc/code/spack/opt/spack/apple-clang/cmake/7zgbrwt/share/cmake-3.22/Modules/CMakeFindDependencyMacro.cmake:47 (find_package):
Could not find a configuration file for package "VecCore" that is
compatible with requested version "0.8.0".
The following configuration files were considered but not accepted:
/rnsdhpc/code/spack/var/spack/environments/celeritas/.spack-env/view/lib/cmake/VecCore/VecCoreConfig.cmake, version: 0.6.0
```
* veccore: add new versions
* Add flags to cabana to enable hypre and heffte when they are part of spec. Also add googletest to build dependencies
* Fixed mixed spaces and tabs
* Update package.py
* Update package.py
* Update package.py
* Modified to specifically request heFFTe version 2.0.0 due to
limitations in heFFTe's CMake files.
* Update var/spack/repos/builtin/packages/cabana/package.py
Co-authored-by: Christoph Junghans <christoph.junghans@gmail.com>
* Integrated more heffte and hypre versions into cabana requests
Co-authored-by: Christoph Junghans <christoph.junghans@gmail.com>
* ParaView/VTK: Constrain version for ADIOS2 patch.
Older available versions of ParaView/VTK predate
ADIOS2 support.
The ParaView lower bound is 5.8 and the VTK lower bound is 8.2.0.
* ParaView: Gate the ADIOS2 by version
It seems that spack reads the output of `setup_run_environment` to build the actual spack modules and lmod modules. So, any output here will be used verbatim on the shell.
This patch fixes https://github.com/spack/spack/issues/26733
1. adding latest release 3.5.0
2. updating cmake requirement to match that of Kokkos
3. adding logic to depend on the right version of Kokkos by default
* Kokkos: updating package list, maintainers and minimum cmake version
* Kokkos: updating maintainers list
Updating maintainers list to have the correct GitHub handle for Jan.
`spack license update-copyright-year` was updating license headers but not the MIT
license file. Make it do that and add a test.
Also simplify the way we bump the latest copyright year so that we only need to
update it in one place.
* [kaldi] Added version 2021-11-16
* [kaldi] Added logic for new version and when cuda 11 is used
* [kaldi] Added patch file when cuda 11 as cub is now built into it
* [kaldi] removed .999 and simplified some logic
Co-authored-by: Doug Heckman <dahdco@rit.edu>
* add py-ats package
* add new 7.0.10 tag
* add myself as a maintainer
* add dependencies for python and setuptools
* style
* added todo for flux
* words
* update versions users should use
* Use pip to bootstrap pip
* Bootstrap wheel from source
* Update PythonPackage to install using pip
* Update several packages
* Add wheel as base class dep
* Build phase no longer exists
* Add py-poetry package, fix py-flit-core bootstrapping
* Fix isort build
* Clean up many more packages
* Remove unused import
* Fix unit tests
* Don't directly run setup.py
* Typo fix
* Remove unused imports
* Fix issues caught by CI
* Remove custom setup.py file handling
* Use PythonPackage for installing wheels
* Remove custom phases in PythonPackages
* Remove <phase>_args methods
* Remove unused import
* Fix various packages
* Try to test Python packages directly in CI
* Actually run the pipeline
* Fix more packages
* Fix mappings, fix packages
* Fix dep version
* Work around bug in concretizer
* Various concretization fixes
* Fix gitlab yaml, packages
* Fix typo in gitlab yaml
* Skip more packages that fail to concretize
* Fix? jupyter ecosystem concretization issues
* Solve Jupyter concretization issues
* Prevent duplicate entries in PYTHONPATH
* Skip fenics-dolfinx
* Build fewer Python packages
* Fix missing npm dep
* Specify image
* More package fixes
* Add backends for every from-source package
* Fix version arg
* Remove GitLab CI stuff, add py-installer package
* Remove test deps, re-add install_options
* Function declaration syntax fix
* More build fixes
* Update spack create template
* Update PythonPackage documentation
* Fix documentation build
* Fix unit tests
* Remove pip flag added only in newer pip
* flux: add explicit dependency on jsonschema
* Update packages that have been added since this was branched off of develop
* Move Python 2 deprecation to a separate PR
* py-neurolab: add build dep on py-setuptools
* Use wheels for pip/wheel
* Allow use of pre-installed pip for external Python
* pip -> python -m pip
* Use python -m pip for all packages
* Fix py-wrapt
* Add both platlib and purelib to PYTHONPATH
* py-pyyaml: setuptools is needed for all versions
* py-pyyaml: link flags aren't needed
* Appease spack audit packages
* Some build backend is required for all versions, distutils -> setuptools
* Correctly handle different setup.py filename
* Use wheels for py-tomli to avoid circular dep on py-flit-core
* Fix busco installation procedure
* Clarify things in spack create template
* Test other Python build backends
* Undo changes to busco
* Various fixes
* Don't test other backends
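The install step the list above converges on is roughly this sketch (flags are illustrative, not Spack's exact `std_pip_args`):
```python
import subprocess
import sys


def pip_install(prefix, srcdir="."):
    """Minimal sketch of a `python -m pip` based install."""
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "--no-deps",            # Spack manages dependencies itself
        "--prefix", str(prefix),
        srcdir,
    ])
```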
* Add new package to spack. survey is a lightweight application performance tool that also gathers system information and stores it as metadata.
* Add maintainer and note about source access.
* Update the man path per spack reviewer suggestion.
* Remove redundant settings for PYTHONPATH, PATH, and MANPATH.
* Move to a one mpi collector approach for cce/tce integration.
* Add pyyaml dependency
* Make further spack reviewer changes to python type specs, mpi args, build type variant.
* Add reviewer requested changes.
* Add reviewer docstring requested changes.
* Add more updates from spack reviewer comments.
* Update the versions to use tags, not branches
* Redo dashes to fix issue with spack testing.
Co-authored-by: Jim Galarowicz <jgalarowicz@newmexicoconsortium.org>
When `spack compiler list` is run without being restricted to a
particular scope, and no compilers are found, say that none are
available, and hint that the user should run `spack compiler find` to
auto-detect compilers.
* Improve docs
* Check if stdin is a tty
* add a test
Backport a patch for v1.3.4 that fixes an unsigned typedef problem
on macOS: https://github.com/xiph/ogg/pull/64
Also add v1.3.5 that has this issue fixed.
spack paths can be long, and this overflows (at least) these buffers
inside of the bundled T1lib inside of the grace distribution, leading
to crashes on startup.
Charm++ versions below 7.0.0 have build issues on macOS, mainly due to the
pre-7.0.0 `VERSION` file conflicting with other version files on the
system: https://github.com/UIUC-PPL/charm/issues/2844. Specifically, it
conflicts with LLVM's `<version>` header that was added in llvm@7.0.0 to
comply with the C++20 standard:
https://en.cppreference.com/w/cpp/header/version. The conflict only occurs
on case-insensitive file systems, as typically used on macOS machines.
Many packages implement logic at the class level to handle complex dependencies and
conflicts. Others have started using `with when("@1.0"):` blocks since we added that
capability. The loops and other control logic can cause some pure directive logic not to
be removed by our package hashing logic -- and in many cases that's a lot of code that
will cause unnecessary rebuilds.
This commit changes the unparser so that it will descend into these blocks. Specifically:
1. Descend into loops, if statements, and with blocks at the class level.
2. Don't look inside function definitions (in or outside a class).
3. Don't look at nested class definitions (they don't have directives)
4. Add logic to *remove* empty loops/with blocks/if statements if all directives
in them were removed.
This allows our package hash to ignore a lot of pure metadata that it was not ignoring
before, and makes it less sensitive.
In addition, we add `maintainers` and `tags` to the list of metadata attributes that
Spack should remove from packages when constructing canonical source for a package
hash.
- [x] Make unparser handle if/for/while/with at class level.
- [x] Add tests for control logic removal.
- [x] Add a test to ensure that all packages are not only unparseable, but also
that their canonical source is still compilable. This is a test for
our control logic removal.
- [x] Add another unparse test package that has complex logic.
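A hypothetical package illustrating the kind of class-level control logic the unparser now descends into:
```python
import sys

from spack.package import *


class Example(Package):
    """Directives produced by class-level loops, ifs, and with blocks."""

    for minor in range(3):
        version("1.{0}".format(minor))  # directives emitted from a loop

    with when("@1.2:"):
        depends_on("cmake", type="build")

    if sys.platform == "darwin":
        patch("macos.patch")  # directive guarded by a class-level if
```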
These are the unit tests from astunparse, converted to pytest, with a few backports from
upstream cpython. These should hopefully keep `unparser.py` well covered as we change it.
We can't tell `print(a, b, c)` and `print((a, b, c))` apart -- both of these expressions
generate different ASTs in Python 2 and Python 3. However, we can decide that we don't
care. This commit treats both of them the same when `py_ver_consistent` is set with
`unparse()`.
This means that the package hash won't notice changes from printing a tuple to printing
multiple values, but we don't care, because this is extremely unlikely to affect the build.
More than likely this is just an error message for the user of the package.
- [x] treat `print(a, b, c)` and `print((a, b, c))` the same in py2 and py3
- [x] add another package parsing test -- legion -- that exercises this feature
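Concretely, the two spellings treated identically under `py_ver_consistent`:
```python
# Python 3 reads this as three arguments; Python 2 parses the same source
# as a print statement with a tuple.
print("status", 1, "ok")
# Python 2 style: explicitly printing a single tuple argument.
print(("status", 1, "ok"))
```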
To make it easier to see how package hashes change and how they are computed, add two
commands:
* `spack pkg source <spec>`: dumps source code for a package to the terminal
* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
package to the terminal. It strips comments, directives, and known-unused
multimethods from the package. It is used to generate package hashes.
* `spack pkg hash <spec>`: This gives the package hash for a particular spec.
It is generated from the canonical source code for the spec.
- [x] `add spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
Co-authored-by: Greg Becker <becker33@llnl.gov>
We are planning to switch to using full hashes for Spack specs, which means that the
package hash will be included in the deployment descriptor. This means we need a more
robust package hash than simply dumping the `repr` of the AST.
The AST repr that we previously used for package content is unreliable because it can
vary between Python versions (Python's AST actually changes fairly frequently).
- [x] change `package_hash`, `package_ast`, and `canonical_source` to accept a string for
alternate source instead of a filename.
- [x] consolidate package hash tests in `test/util/package_hash.py`.
- [x] remove old `package_content` method.
- [x] make `package_hash` do what `canonical_source_hash` was doing before.
- [x] modify `content_hash` in `package.py` to use the new `package_hash` function.
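A minimal sketch of the resulting API, assuming the module path implied by the test
location above (`spack.util.package_hash`) and spec-based signatures:

```python
from spack.spec import Spec
from spack.util.package_hash import canonical_source, package_hash

# Concretize a spec so its package and multimethods can be resolved.
spec = Spec("zlib").concretized()

print(canonical_source(spec))  # canonicalized package.py source
print(package_hash(spec))      # hash computed from that source
```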
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Our package hash is supposed to be consistent from Python version to Python version.
Test this by adding some known unparse inputs and ensuring that they always have the
same canonical hash. This test relies on the fact that we run Spack's unit tests
across many Python versions. We can't compute hashes for several Python versions within
a single test run, so we precompute them and check them in CI.
Package hashing was not properly handling multimethods. In particular, it was removing
any functions that had decorators from the output, so we'd miss things like
`@run_after("install")`, etc.
There were also problems with handling multiple `@when`'s in a single file, and with
handling `@when` functions that *had* to be evaluated dynamically.
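For illustration, a hypothetical package along these lines exercises those cases:

```python
from spack.package import *


class Example(Package):
    # Two multimethods selected statically by @when: canonicalization
    # should keep only the one matching the spec being hashed.
    @when("@:1")
    def install(self, spec, prefix):
        make("install-legacy")

    @when("@2:")
    def install(self, spec, prefix):
        make("install")

    # A decorated function that must NOT be stripped from the hash input.
    @run_after("install")
    def post_install_check(self):
        pass
```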
- [x] Rework static `@when` resolution for package hash
- [x] Ensure that functions with decorators are not removed from output
- [x] Add tests for many different @when scenarios (multiple @when's,
combining with other decorators, default/no default, etc.)
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Previously we used `directives.__all__` to get directive names, but that wasn't
quite right -- it also included `DirectiveMeta`, etc. It works, but it's not the
clearest way to do this.
- [x] Refactor `@directive` to track names in `directive_names` global
- [x] Rename `_directive_names` to `_directive_dict_names` in `DirectiveMeta`
- [x] Add a test for `RemoveDirectives`
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Some packages use top-level unassigned strings instead of comments, either just after a
docstring or elsewhere in the body. Ignore those strings because they have no
effect on package behavior.
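For example (a hypothetical package; both bare string literals below are stripped from
the canonical source):

```python
from spack.package import *


class Example(Package):
    """The class docstring."""

    "A free-standing string used as a comment; it has no effect at runtime."

    version("1.0")
```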
- [x] adjust RemoveDocstrings to remove all free-standing strings.
- [x] move tests for util/package_hash.py to test/util/package_hash.py
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Python 2 and 3 represent string literals differently in the AST. Python 2 requires
`'\x'` escape literals, while Python 3 source is always unicode and allows unicode
characters to be written directly. These also unparse differently by default.
- [x] modify the unparser to write string literals out the way `repr` would in Python 2
      when `py_ver_consistent` is provided.
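Concretely (the same literal, shown under each interpreter):

```python
s = "caf\xe9"  # "café" written with an escape

# Python 2: repr(s) -> 'caf\xe9'  (non-ASCII bytes stay escaped)
# Python 3: repr(s) -> 'café'     (unicode is written directly)
print(repr(s))
```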
Backport operator precedence algorithm from here:
397b96f6d7
This eliminates unnecessary parentheses from our unparsed output and makes Spack's unparser
consistent with the one in upstream Python 3.9+, with one exception.
Our unparser normalizes argument order when `py_ver_consistent` is set, so that star
arguments in function calls come last. We have to do this because Python 2's AST
doesn't record their actual order.
If we ever support only Python 3.9 and higher, we can switch over to `ast.unparse`, as
the unparsing is consistent except for this detail (modulo future changes to `ast.unparse`).
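For reference, this precedence-aware behavior matches `ast.unparse` on Python 3.9+:

```python
import ast  # requires Python 3.9+ for ast.unparse

# Redundant parentheses disappear because operator precedence
# already implies the grouping.
print(ast.unparse(ast.parse("x = (a + (b * c))")))
# -> x = a + b * c
```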
Previously, there were differences in the unparsed code for Python 2.7 and for 3.5-3.10.
This makes unparsed code the same across these Python versions by:
1. Ensuring there are no spaces between unary operators and their operands.
2. Ensuring that `*args` and `**kwargs` are always the last arguments,
   regardless of the Python version.
3. Always unparsing `print` as a function.
4. Not putting an extra comma after Python 2 class definitions.
Without these changes, the same source can generate different code for different
Python versions, depending on subtle AST differences.
One place where single source will generate an inconsistent AST is with
multi-argument print statements, e.g.:
```python
print("foo", "bar", "baz")
```
In Python 2, this prints a tuple; in Python 3, it is the print function with
multiple arguments. Use `from __future__ import print_function` to avoid
this inconsistency.
Add `astunparse` as `spack_astunparse`. This library unparses Python ASTs and we're
adding it under our own name so that we can make modifications to it.
Ultimately this will be used to make `package_hash` consistent across Python versions.
Add an abstraction around libllvm to allow libllvm providers to be specified for all
packages.
The goal is to allow mesa to build against llvm-amdgpu, intel-llvm, llvm, or any other
custom LLVM variant that arises for specific GPU toolchains.
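A minimal sketch of how such a virtual could be wired up (each class would live in its
own package.py; the `libllvm` spelling follows the text above, everything else is an
assumption):

```python
from spack.package import *


class Llvm(CMakePackage):
    # A provider declares that it satisfies the libllvm virtual.
    provides("libllvm")


class Mesa(MesonPackage):
    # Consumers depend on the virtual, so any provider
    # (llvm, llvm-amdgpu, intel-llvm, ...) can satisfy it.
    depends_on("libllvm")
```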
* Python: set default config_vars
* Add missing commas
* dso_suffix not present for some reason
* Remove use of default_site_packages_dir
* Use config_vars during bootstrapping too
* Catch more errors
* Fix unit tests
* Catch more errors
* Update docstring
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror
To register the mirror on the platform where it's supposed to be used run the following command(s):
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse2"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse2"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse2"
}
]
}
},
@@ -1192,6 +1400,20 @@
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
]
}
},
@@ -1246,6 +1468,20 @@
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
]
}
},
@@ -1301,6 +1537,20 @@
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse4.2"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse4.2"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse4.2"
}
]
}
},
@@ -1360,6 +1610,22 @@
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
]
}
},
@@ -1422,6 +1688,22 @@
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
]
}
},
@@ -1485,6 +1767,22 @@
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
]
}
},
@@ -1543,6 +1841,30 @@
"name":"znver3",
"flags":"-march={name} -mtune={name}"
}
],
"intel":[
{
"versions":"16.0:",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",