* Add checksum for jupyter-console@6.4.3
* Update py-jupyter-console dependency
* Extend jupyter-client@7.0.0 dependency to newer versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: aandvalenzuela <andrea.valenzuela.ramirez@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pystan: Add new package
* Fix dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add run dependency to py-setuptools
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-httpstan@4.7.2 and py-pysimdjson@3.2.0
* Dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR updates the list of images we build nightly, deprecating
Ubuntu 16.04 and CentOS 8 and adding Ubuntu 20.04, Ubuntu 22.04
and CentOS Stream. It also removes a lot of duplication by generating
the Dockerfiles during the CI workflow and uploading them as artifacts
for later inspection or reuse.
* ipopt: add goxberry as maintainer
This commit adds 'goxberry' (me, Geoff Oxberry) as a maintainer of the
Ipopt Spack package.
* ipopt: use github url instead of coin-or.org url
This commit changes the package URL for Ipopt from one containing
`coin-or.org` to one containing `github.com`. The rationale for
using `github.com` is as follows:
- The COIN-OR webpage now directs users interested in Ipopt source to
GitHub.
- Ipopt used to have a COIN-OR project homepage actually hosted on
coin-or.org using an SVN-Trac web page. A link to this project
homepage no longer appears within the "Projects" section of
COIN-OR's website.
- COIN-OR issued a 2021-12-15 post on the News section of its web site
(see https://www.coin-or.org/news/) that discusses the impact that
lack of financial support has on COIN-OR software maintenance. It
seems reasonable to suspect that the GitHub project is likely to
outlast the COIN-OR web site.
The sha256 hashes for ipopt@:3.12 downloaded from GitHub differ from
the corresponding COIN-OR versions, so these hashes are also updated.
* ipopt 3.14.5: add new version
This commit adds the latest version of Ipopt, 3.14.5, to the Ipopt
Spack package.
* git: add 2.35.2, explicit version(...)
git 2.35.2 fixes CVE-2022-24765 which seems to only affect Windows. But
nonetheless we should maybe set deprecated=True on older versions... The
restructure allows for that.
* deprecate over CVE-2022-24765
In WarpX 22.04, we introduced the openPMD `thetaMode` for fields in
RZ geometry. That means we need to name the fields differently than
the reconstructed Cartesian slice that we default to in plotfiles.
* ncurses: add wide, nowide headers, libs query parameter options (see the sketch below)
* readline: only link with libncursesw
Needed for python to detect proper ncurses library #27369
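As a hedged sketch of the new query parameters (the parameter names are taken from the commit title; treat the snippet as illustrative, not as the exact API), a dependent package could select the wide libraries like this:
```python
from spack import *  # hypothetical package.py context

class MyPackage(Package):
    depends_on("ncurses")

    def setup_build_environment(self, env):
        # query-parameter syntax selects the wide (or "nowide") libraries
        libs = self.spec["ncurses:wide"].libs
        env.append_flags("LDFLAGS", libs.ld_flags)
```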
Alter the `install_components/install` script to pass the `-gcc $SPACK_CC`,
`-gpp $SPACK_CXX`, and `-g77 $SPACK_F77` flags to `makelocalrc`. This
ensures that nvhpc is configured to use the spack gcc spec, rather than
whatever gcc is found on the path.
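A minimal sketch of what that invocation might look like inside the nvhpc package (the paths and the exact makelocalrc arguments here are assumptions based on the description above):
```python
import os

from llnl.util.filesystem import join_path
from spack.util.executable import Executable

def _configure_makelocalrc(prefix):
    # point nvhpc's localrc at the Spack compiler wrappers instead of
    # whatever gcc happens to be first on PATH
    compilers_bin = join_path(prefix, "compilers", "bin")
    makelocalrc = Executable(join_path(compilers_bin, "makelocalrc"))
    makelocalrc(
        compilers_bin,
        "-gcc", os.environ["SPACK_CC"],
        "-gpp", os.environ["SPACK_CXX"],
        "-g77", os.environ["SPACK_F77"],
        "-x",
    )
```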
Co-authored-by: Mikael Simberg <simberg@cscs.ch>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Fix test_ci_generate_prune_untouched(), which would fail if run when
the latest commit changed the .gitlab-ci.yml. This change mocks the
get_stack_changed() method in that test to disregard the state of
the current spack repo in favor of a mock repo under test control.
* The configure script on Windows requires that CC/CXX be enclosed
in quotes if the paths to those compiler executables contain
spaces (so unlike most instances of Executable, the arguments
need to contain the quotes)
* OpenSSL requires the nasm package on Windows
* Restore parallel build from 075e942 (accidentally reverted in
#27021)
* py-ipympl: Add new package
* Update var/spack/repos/builtin/packages/py-ipympl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipympl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipympl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ipympl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Remove trailing whitespaces
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-webargs: Add new package
* Fix python requirement
* Add run dependency to py-packaging
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
gitlab ci: Set resource requests explicitly
This PR sets resource requests for the Kubernetes executor, which should aid in
better workload scheduling in the cluster. The specific values were derived from
profile data taken from several full "from scratch" rebuilds in a separate worker pool.
Co-authored-by: Zack Galbreath <zack.galbreath@kitware.com>
* serialbox: setup the run and dependent build environments
* Update var/spack/repos/builtin/packages/serialbox/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* rocmlibs: relax rocm-cmake version requirements
The rocm-cmake modules tend to be backwards-compatible, to the extent
that most ROCm math libraries were built using rocm-cmake@master
for a long while without anybody noticing. (That was fixed in
97f0c3ccd9f0a40896998a7580150a514ec3bc37.)
Some packages, like comgr, barely use rocm-cmake for anything, and
we can easily set a very minimal version requirement. For most
packages, however, it would be a lot of effort to determine the
minimum rocm-cmake version required for each release. For those
packages, I just turned the exact version requirement into a
minimum version requirement.
Since I was looking through the CMakeLists.txt for a large number of
libraries, I also took note of the cmake_minimum_required and adjusted
the cmake minimum requirements to match.
* Add rocblas build dependency to hipblas
The rocblas library is required for both building and linking
hipblas.
* Remove rocm-cmake from vtk-m dependency list
The rocm-cmake package provides CMake scripts that facilitate common
build configuration tasks in the ROCm libraries. It is never needed at
link-time. Also, there are no calls to find_package(ROCM) or
include(ROCM.*) in vtk-m, so this dependency will never be used.
- older versions are no longer available for download so mark them
deprecated
- set manual_download
- set url_for_version
- only install the binary that matches the cuda version
In #26630, I assumed "glu" was needed by glew because it included glu.h, but
actually, glew can be used without glu when GLEW_NO_GLU is defined and this
is documented in the announcement of glew-1.6.0:
> https://www.geeks3d.com/20110430/opengl-glew-1-6-0-available/
> * Define GLEW_NO_GLU for no glu dependency
It is therefore up to the users of glew to decide whether they use glu,
and if so they need their own depends_on("glu").
Thus, move the depends_on("glu"), which I changed from "gl" in #26630,
to vapor, which itself uses glu as well.
For about a decade GCC has had an option `-f[no-]canonical-system-headers`,
which basically runs `realpath` on all "system headers", to possibly
reduce the length of paths in diagnostics. [1]
Spack usually installs the "system headers" of GCC in very deeply nested
directories. Calling `realpath` there results in stat calls on every
level, for every header file. On some slow filesystem I have,
`-fno-canonical-system-headers` gives about 5x speedup to compile hello
world in C, meaning that ./configure scripts would be much faster when
using this flag by default.
[1] https://codereview.appspot.com/6495088
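A hedged sketch of how a package might opt into the faster behavior (the flag_handler hook is real Spack API; applying this particular flag is the assumption here):
```python
from spack import *  # hypothetical package.py context

class Hello(MakefilePackage):
    """Sketch: inject the flag so GCC skips realpath'ing system headers."""

    def flag_handler(self, name, flags):
        if name == "cflags" and self.spec.satisfies("%gcc"):
            flags.append("-fno-canonical-system-headers")
        return (flags, None, None)
```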
Add option to allow using OpenSSL (by default this uses the SSL
implementation that comes with Windows, since that is more likely
to have needed certificates).
* py-awkward: Add new versions
* py-awkward: Update dependencies
* Make setuptools a runtime dependency as well
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Don't rely on NASM's nmake to export install target. Spack
now handles NASM installation; the install tree structure
mimics NASM Windows installer behavior.
* Add dependency on perl
We switched to an optional Sphinx-based way of
generating docs, so remove pandoc, which can cause
issues with LaTeX conflicts.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
Bug fixes for package netcdf-cxx4 so that it builds on macOS semi
case-sensitive filesystems; this includes additional changes to build
netcdf-cxx4 consistently with netcdf-fortran.
* netcdf-fortran: remove unused config_flags
* netcdf-fortran: avoid building without the optimization flags
* netcdf-cxx4: do not enforce autoreconf. This was a holdover from the
  times when the package was fetched with git, which broke the timestamp
  order of the automatically generated Autoconf files.
* netcdf-cxx4: inject PIC flags for C++ when '+pic'
* netcdf-cxx4: inject C/CXXFLAGS via the wrapper
* netcdf-cxx4: fix the underlinking problem for platforms other than darwin
(add netcdf-c libs netcdf-cxx4 ldlibs flags)
* netcdf-cxx4: remove redundant extension of CPPFLAGS
* netcdf-cxx4: only need to use MPI compiler wrapper when building C
(vs both C and C++)
* netcdf-cxx4: remove variant 'static'
This makes it consistent with other packages from the NetCDF
constellation: always build the static libraries and additionally
build the shared ones when '+shared'.
* netcdf-cxx4: do not configure --with/--without-pic.
This makes it consistent with other packages from the NetCDF
constellation: build the shared libraries with the PIC flag and
the static ones without it (the default for Autotools) when
'~pic', and build the static libraries with PIC when '+pic' (to
make them injectable into other shared libraries).
* netcdf-cxx4: run the tests serially
* netcdf-cxx4: build the plugins only when the tests are run
Co-authored-by: Sergey Kosukhin <sergey.kosukhin@mpimet.mpg.de>
gitlab ci: Remove code for relating CDash builds
Relating CDash builds to their dependencies was a seldom used feature. Removing
it will make it easier for us to reorganize our CDash projects & build groups in the
future by eliminating the need to keep track of CDash build ids in our binary mirrors.
* Allow packages to add a 'submodules' property that determines when ad-hoc Git-commit-based versions should initialize submodules
* add support for ad-hoc git-commit-based versions to instantiate submodules if the associated package has a 'submodules' property and it indicates this should happen for the associated spec
* allow Package-level submodule request to influence all explicitly-defined version() in the Package (see the sketch below)
* skip test on windows which fails because of long paths
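For illustration, a package opting into this behavior might look like the following sketch (the package name and URL are hypothetical; the 'submodules' property is the feature described above):
```python
from spack import *  # hypothetical package.py context

class Example(Package):
    git = "https://github.com/example/example.git"

    # ask Spack to initialize submodules both for the versions below and
    # for ad-hoc commit versions such as example@<40-char-sha>
    submodules = True

    version("main", branch="main")
    version("1.0", tag="v1.0")
```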
* Set CUDA architectures in ArrayFire based on cuda_arch
The cuda_arch flag was not recognized by the ArrayFire package and
therefore any setting was not respected. This commit adds the appropriate
cmake flags if cuda_arch is specified. If no cuda_arch is specified,
then the flag is set to "Auto", which detects the compute architectures
installed on the build system (see the sketch after this list).
* ArrayFire only requires boost headers to build. Update version to 1.75
ArrayFire only requires boost headers at build time. This commit also
updates the version to 1.75 to avoid some errors in Boost Compute
* Disable tests in ArrayFire by default
* Add support for ArrayFire v3.8.1
* Add maintainer for ArrayFire package
* Remove test variant from ArrayFire. Use comprehensions
* Reduce boost requirement in ArrayFire
* Address cuda_arch suggestions
* Add commit hashes to Release versions of ArrayFire
* Fix style issues in ArrayFire package
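A sketch of the cuda_arch handling described in the first item (the CMake variable name is an assumption; the fallback to "Auto" mirrors the description above):
```python
def cmake_args(self):
    args = []
    if self.spec.satisfies("+cuda"):
        arch = self.spec.variants["cuda_arch"].value  # tuple, e.g. ("70", "80")
        if arch == ("none",):
            args.append(self.define("CUDA_architecture_build_targets", "Auto"))
        else:
            args.append(self.define("CUDA_architecture_build_targets", ";".join(arch)))
    return args
```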
Ubuntu patched git v2.25.1 with a security fix that also
introduced a breaking change, so v2.25.1 behaves like
v2.35.2 with respect to the use cases in CVE-2022-24765
* llvm7_intel.patch required for intel@19.1.3 too
* apply llvm7_intel.patch for all intel@19.0 and intel@19.1
Co-authored-by: Daryl W. Grunau <dwg@lanl.gov>
Spack added support in #24639 for ad-hoc Git-commit-hash-based
versions: A user can install a package X@hash, where X is a package
that stores its source code in a Git repository, and the hash refers
to a commit in that repository which is not recorded as an explicit
version in the package.py file for X.
A couple issues were found relating to this:
* If an environment defines an alternative package repo (i.e. with
repos.yaml), and spack.yaml contains user Specs with ad-hoc
Git-commit-hash-based versions for packages in that repo,
then as part of retrieving the data needed for version comparisons
it will attempt to retrieve the package before the environment's
configuration is instantiated.
* The bookkeeping information added to compare ad-hoc git versions was
being stripped from Specs during concretization (such that user
Specs which succeeded before concretizing would then fail after)
This addresses the issues:
* The first issue is resolved by deferring access to the associated
Package until the versions are actually compared to one another.
* The second issue is resolved by ensuring that the Git bookkeeping
information is explicitly applied to Specs after they are concretized.
This also:
* Resolves an ambiguity in the mock_git_version_info fixture used to
create a tree of Git commits and provide a list where each index
maps to a known commit.
* Isolates the cache used for Git repositories in tests using the
mock_git_version_info fixture
* Adds a TODO which points out that if the remote Git repository
overwrites tags, that Spack will then fail when using
ad-hoc Git-commit-hash-based versions
This commit updates the `gpg publish` command to work with the mirror
arguments, when trying to push keys to a mirror.
- [x] update the `gpg publish` command
- [x] add test for publishing GPG keys and rebuilding the key index within a mirror
* zstd: bring back libs=shared,static and compression=zlib,lz4,lzma variants
Should make building `gcc+binutils ^zstd libs=static` a bit easier (this
is the case where we don't control the compiler wrappers of gcc because
of bootstrapping, nor of ld because of how gcc invokes the linker).
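Sketched in package.py terms, the restored variants might look like this (any_combination_of is real Spack API; the base class and exact defaults here are assumptions):
```python
from spack import *  # hypothetical package.py context

class Zstd(MakefilePackage):
    variant("libs", default="shared,static", values=("shared", "static"), multi=True)
    variant("compression", values=any_combination_of("zlib", "lz4", "lzma"))

    depends_on("zlib", when="compression=zlib")
    depends_on("lz4", when="compression=lz4")
    depends_on("xz", when="compression=lzma")
```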
In a typical call to spack, the OperatingSystem gets instantiated
multiple times. For macOS, each one requires a call to `sw_vers`, which
is done through the Executable helper class. Memoizing
reduces the call count of `spack spec` from three to one.
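A minimal sketch of the memoization, assuming llnl.util.lang.memoized is the decorator used:
```python
from llnl.util.lang import memoized
from spack.util.executable import Executable

@memoized
def _macos_product_version():
    # run sw_vers once per process instead of once per OperatingSystem
    return Executable("/usr/bin/sw_vers")("-productVersion", output=str).strip()
```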
Currently environments are indexed by build hashes. When looking into this bug I noticed there is a disconnect between environments that are concretized in memory for the first time and environments that are read from a `spack.lock`. The issue is that specs read from a `spack.lock` don't have a full hash, since they are indexed by a build hash which is strictly coarser. They are also marked "final" as they are read from a file, so we can't compute additional hashes.
This bugfix PR makes "first concretization" equivalent to re-reading the specs from a corresponding `spack.lock`, and doing so unveiled a few tests where we were making wrong assumptions and relying on the fact that a `spack.lock` file was not there already.
* Add unit test
* Modify mpich to trigger jobs in pipelines
* Fix two failing unit tests
* Fix another full_hash vs. build_hash mismatch in tests
* Ignore top-level module config; add auto-update
In Spack 0.17 we got module sets (modules:[name]:[prop]), and for
backwards compat modules:[prop] was short for modules:default:[prop].
But this makes it awkward to define default config for the "default"
module set.
Since 0.17 is branched off, we can now deprecate top-level module config
(that is, just ignore it with a warning).
This PR does that, and it implements `spack config update modules` to
make upgrading easy (we should have added that to 0.17 already...)
It also removes references to `dotkit` stuff which was already
deprecated in 0.13 and could have been removed in 0.14.
Prefix inspections are the only exception, since the top-level prefix
inspections are used for `spack load` and `spack env activate`.
Spack currently allows dependencies to be concretized for an
architecture incompatible with the root. This commit adds rules
to make this situation impossible by design.
* Extract the MetaPathFinder and Loaders for packages in their own classes
https://peps.python.org/pep-0451/
Currently, RepoPath and Repo implement the (deprecated) interface of
MetaPathFinder (find_module) and of Loader (load_module). This commit
extracts both of them and places the code in their own classes.
The MetaPathFinder interface is updated to contain both the deprecated
"find_module" (for Python 2.7 support) and the recommended "find_spec".
Update of the Loader interface is deferred at a subsequent commit.
* Move the lines to be prepended inside "RepoLoader"
Also adjust the naming of a few variables
* Remove spack.util.imp, since code is only used in spack.repo
* Remove support for loading Python modules on Python 3 versions older than 3.5
* Remove `Repo._create_namespace`
This function was interacting badly with the MetaPathFinder
and causing issues with "normal" imports. Removing the
function makes it possible to do things like:
```python
import spack.pkg.builtin.mpich
cls = spack.pkg.builtin.mpich.Mpich
```
* Remove code needed to trigger the Singleton evaluation
The finder is coded in a way to trigger the Singleton,
so we don't need external code now that we register it
at module level into `sys.meta_path`.
* Add unit tests
OpenMPI includes cuda_runtime.h, which errors with `#error --
unsupported GNU version! gcc versions later than 9 are not supported!`
By inheriting CudaPackage, the proper conflicts between `cuda` and
`gcc`/`clang` are added.
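Schematically (a sketch, not the package's full definition), the change boils down to the mixin:
```python
from spack import *  # hypothetical package.py context

class Openmpi(AutotoolsPackage, CudaPackage):
    # the CudaPackage mixin contributes the cuda/cuda_arch variants and the
    # standard cuda-vs-compiler conflicts, so they need not be restated here
    def configure_args(self):
        args = []
        if self.spec.satisfies("+cuda"):
            args.append("--with-cuda={0}".format(self.spec["cuda"].prefix))
        return args
```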
* mesa, mesa18: Implement the swr variant consistently between mesa and mesa18
* mesa: Bump to 21.3.7
* mesa: Build release by default; tie swr to release builds
* mesa, mesa18: re-enable the llvm variant by default
This reverts the change made in #29360
Some servers require `User-Agent` to be set, and otherwise error with
access denied. One such example is mpich.
To fix this, set `User-Agent: Spackbot/[version]` as a header.
Apparently by convention, it should include the word `bot`.
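In plain urllib terms (a sketch; Spack routes this through its own fetch helpers, and the version string below is illustrative), the header looks like:
```python
import urllib.request

req = urllib.request.Request(
    "https://www.mpich.org/static/downloads/4.0/mpich-4.0.tar.gz",  # example URL
    headers={"User-Agent": "Spackbot/0.18"},
)
with urllib.request.urlopen(req) as response:
    tarball = response.read()
```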
#27021 broke fetching for CVS-based packages because:
- The mirror logic was using URL parsing to extract a path from the
CVS repository location
- #27021 added sanity checks to enforce that strings passed to the
URL parser were actually URLs
This replaces the call to "url_util.parse" with logic that is
customized for CVS. This implies that VCSFetchStrategy should
rename the "url_attr" attribute to something more generic, but
that should be handled separately.
* mpich: add 3.4.3, 4.0, 4.0.1
* mpich: add url_for_version function
For versions 4.0 and up, get tarballs from GitHub. This will help with
CI builds, since the MPICH website denies the urllib user-agent from
downloading release tarballs.
* mpich: disable cuda support
MPICH is failing to build in CI due to a configuration script bug in
detecting CUDA support. Disable CUDA support by default until we add a
proper variant.
Allow declaring possible values for variants with an associated condition. If the variant takes one of those values, the condition is imposed as a further constraint.
The idea of this PR is to implement part of the mechanisms needed for modeling [packages with multiple build-systems](https://github.com/spack/seps/pull/3). After this PR the build-system directive can be implemented as:
```python
variant(
'build-system',
default='cmake',
values=(
'autotools',
conditional('cmake', when='@X.Y:')
),
description='...',
)
```
Modifications:
- [x] Allow conditional possible values in variants
- [x] Add a unit-test for the feature
- [x] Add documentation
* tests for rewiring pure specs to spliced specs
* relocate text, binaries, and links
* using llnl.util.symlink for windows compat.
Note: This does not include CLI hooks for relocation.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
From the tempfile module docs:
The default directory is chosen from a platform-dependent list, but the
user of the application can control the directory location by setting
the TMPDIR, TEMP or TMP environment variables
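For example:
```python
import tempfile

# tempfile picks the first usable candidate: TMPDIR, TEMP, TMP, then
# platform defaults such as /tmp
print(tempfile.gettempdir())
stage = tempfile.mkdtemp(prefix="spack-stage-")  # created under gettempdir()
```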
missing dependencies
- boost
- lzo
Also, turn off libuv. This does not build properly with libuv so it is
not a dependency. However, configure will look for libuv on the system
and try to use it if found, thus breaking the build.
- Add variants for various common build flags, including support for both versions of the Racket VM environment.
- Prevent `-j` flags to `make`, which has been known to cause problems with Racket builds.
- Prefer the minimal release to improve install times. Bells and whistles carry their own runtime dependencies and should be installed via `raco`. An enterprising user may even create a `RacketPackage` class to make spack aware of `raco` installed packages.
- Match the official version numbering scheme.
- Update to version 1.2.12.
- Mark older versions as deprecated because they have security bugs.
- mfem: Update list of system library directories
- zlib patch: cc patch
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Update "spack external find --all" to also find library-only packages.
A Package can add a ".libraries" attribute, which is a list of regular
expressions to use to find libraries associated with the Package.
"spack external find --all" will search LD_LIBRARY_PATH for potential
libraries.
This PR adds examples for NCCL, RCCL, and hipblas packages. These
examples specify the suffix ".so" for the regular expressions used
to find libraries, so generally are only useful for detecting library
packages on Linux.
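A package opting in might carry something like the following (a sketch; the regex is modeled on the NCCL example mentioned above):
```python
from spack import *  # hypothetical package.py context

class Nccl(Package):
    # regular expressions that `spack external find --all` matches
    # against candidate files found via LD_LIBRARY_PATH
    libraries = [r"libnccl\.so"]
```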
Do not prompt user with checksum warning when using git commit hashes
as versions. Spack was incorrectly reporting this as a potential
problem: it would display a prompt asking the user whether they
want to proceed if Spack was running in a terminal, or it would
terminate the running instance of Spack if running as part of a
script.
* rocm-cmake: remove ldconfig variant
The packages built for `rocm-cmake~ldconfig` and `rocm-cmake+ldconfig`
are identical, so the variant is unnecessary.
The `ROCM_DISABLE_LDCONFIG` option changes how `rocm_create_package`
generates DEB and RPM packages with CPack. rocm-cmake itself uses
`rocm_create_package`; however, this option has no effect because
Spack does not build the CPack packages. It is also unnecessary on
rocm-cmake, because rocm-cmake does not contain any shared libraries
for ldconfig to configure. The rocm-cmake package is purely composed
of CMake scripts.
* Tighten CMake version dependency
* Improve package description
* Add pl2bat to PATH: Perl on Windows requires the script pl2bat.bat
and Perl to be available to the installer via the PATH. The build
and dependent environments of Perl on Windows have the install
prefix bin added to the PATH.
* symlink with win32file module instead of using Executable to
call mklink (mklink is a shell function and so is not accessible
in this manner).
* py-marshmallow: Add new package
* Modify py-packaging dependency type
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add run dependency to py-packaging
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We've previously generated CI pipelines for PRs, and they rebuild any packages that don't have
a binary in an existing build cache. The assumption we were making was that ALL prior merged
builds would be in cache, but due to the way we do security in the pipeline, they aren't. `develop`
pipelines can take a while to catch up with the latest PRs, and while it does that, there may be a
bunch of redundant builds on PRs that duplicate things being rebuilt on `develop`. Until we can
do better caching of PR builds, we'll have this problem.
We can do better in PRs, though, by *only* rebuilding things in the CI environment that are actually
touched by the PR. This change computes exactly what packages are changed by a PR branch and
*only* includes those packages' dependents and dependencies in the generated pipeline. Other
as-yet unbuilt packages are pruned from CI for the PR.
For `develop` pipelines, we still want to build everything to ensure that the stack works, and to ensure
that `develop` catches up with PRs. This is especially true since we do not do rebuilds for *every* commit
on `develop` -- just the most recent one after each `develop` pipeline finishes. Since we skip around,
we may end up missing builds unless we ensure that we rebuild everything.
We differentiate between `develop` and PR pipelines in `.gitlab-ci.yml` by setting
`SPACK_PRUNE_UNTOUCHED` for PRs. `develop` will still have the old behavior.
- [x] Add `SPACK_PRUNE_UNTOUCHED` variable to `spack ci`
- [x] Refactor `spack pkg` command by moving historical package checking logic to `spack.repo`
- [x] Implement pruning logic in `spack ci` to remove untouched packages
- [x] add tests
* py-pysimdjson: Add new package
* Cleanup
* Fix python requirement
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* libtiff: add missing dependencies
- gl
- glu
- freeglut
* Make X/GL only for Darwin/Mac
* Catch the force_autoreconf property
* add platform=darwin to the autotools deps as well
* Update var/spack/repos/builtin/packages/libtiff/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Fixes the following error on %clang@13.0.1
>> 2413 bison: error while loading shared libraries: libtextstyle.so.0: cannot open shared object file: No such file or directory
>> 2414 make[2]: *** [<builtin>: getdate.c] Error 127
VecCore's new home is on github (hashes have changed even though commit
IDs and presumably contents are the same), and it does not need any configuration
options. See discussion at https://gitlab.cern.ch/VecGeom/VecCore/-/merge_requests/1 .
Updated the flecsi spackage to better support changes in control variables
in post-2.1.0 releases, while also making it clearer for legacy versions
what is a tagged release and what is a rolling-ish development branch
* py-reportlab: add missing dependency on freetype
* Add missing dependencies
* Update var/spack/repos/builtin/packages/py-reportlab/package.py
Use pil virtual.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* ExaGO: Handling of cuda architectures and amdgpu targets changed
to effectively handle multiple targets. See #28441.
* Add ROCm support to ExaGO and update ROCm support in HiOp
* ExaGO+rocm requires HiOp+rocm
* Newer versions of CMake may set HIP_CLANG_INCLUDE_PATH incorrectly:
add comments to the ExaGO/HiOp packages explaining how to address
this problem if it occurs.
* cmake: use CMAKE_INSTALL_RPATH_USE_LINK_PATH
Spack has a heuristic to add rpaths for packages it knows are required,
but it's really a heuristic, and it does not work when the dependencies
put their libraries in a different folder than `<prefix>/lib{64,}`.
CMake patches binaries after install with the "install rpaths", which by
default are provided by Spack and its heuristic through
`CMAKE_INSTALL_RPATH`.
CMake however knows better what libraries are effectively being linked
to, and has an option to include those in the install rpath too, through
`CMAKE_INSTALL_RPATH_USE_LINK_PATH`.
These two CMake options are complementary, repeated rpaths seem to be
filtered, and the "use link path" paths are appended to Spack's
heuristic "install rpath".
So, it seems like a good idea to enable "use link path" by default, so
that:
- `dlopen` by library name uses Spack's heuristic search paths
- linked libraries in non-standard locations within a prefix get an
rpath thanks to CMake.
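Conceptually the generated CMake invocation gains one extra define (the paths below are placeholders):
```python
heuristic_rpaths = ["/opt/spack/prefix/lib", "/opt/spack/prefix/lib64"]  # placeholder
std_cmake_args = [
    "-DCMAKE_INSTALL_RPATH=" + ";".join(heuristic_rpaths),  # Spack's heuristic
    "-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON",  # append actually-linked dirs
]
```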
* docs
- Use define/define_from_variant
- Remove unused "fortran_flags"
- Fix CUDA architectures when using multiple (needs semicolon not comma
separators)
- Add `when=` variant restrictions to simplify logic
Add output of build- and install-time tests to info command
Enable dependencies, variants, and versions by default (i.e., provide --no*
options); add gcc to test_info_fields to increase coverage for c_names->v_names.
* New package: spiner
* Update dependencies for spiner package
* Update var/spack/repos/builtin/packages/spiner/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* Update var/spack/repos/builtin/packages/spiner/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* Remove versions that can't be installed and use ports-of-call@1.1.0
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* py-torch: fix build with fujitsu-ssl2
* fix to use fujitsu-ssl2 in py-torch v1.5.0 to v1.11.0
* fix to use fujitsu-ssl2 in py-torch v1.2.0 to v1.11.0
* Delete fj-ssl2.patch
* renamed the patches
* Rename fj-ssl2.1.5.patch to fj-ssl2_1.5.patch
* Delete fj-ssl2_1.5.patch
We shouldn't be using "remove_linked_tree" to remove the lock file,
since that function expects to receive a directory path as an
argument.
Also, as a further measure to avoid regression, this commit restores
the "ignore_errors=True" argument on linux and adds a unit test
checking that "remove_linked_tree" doesn't change file permissions
as a side effect of a failure to remove.
* Fix py-onnx-runtime recipe
* Add missing dependencies
* Update var/spack/repos/builtin/packages/py-cerberus/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Better fix for py-onnx-runtime
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* omegah: v10.1.0
this version is from the SCOREC fork of Omega_h
* prefix version with scorec
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Reduces the number of stat calls to a bare minimum:
- Single pass over src prefixes
- Handle projection clashes in memory
Symlinked directories in the src prefixes are now conditionally
transformed into directories with symlinks in the dst dir. Notably
`intel-mkl`, `cuda` and `qt` have top-level symlinked directories that
previously resulted in empty directories in the view. We now avoid
cycles and possible exponential blowup by only expanding symlinks that:
- point to dirs deeper in the folder structure;
- are a fixed depth of 2.
* py-cffi: add compiler flags to fix build with clang
For %clang@13.0.1, this avoids the
```
clang-13: warning: optimization flag '-ffat-lto-objects' is not supported [-Wignored-optimization-argument]
```
warning being turned into an error, and fixes this link error:
```
build/temp.linux-x86_64-3.10/c/_cffi_backend.o: file not recognized: file format not recognized
```
* style
Currently `old_root` is computed by reading the symlink at `self.root`.
We should be more defensive in removing it by checking that it is in the
same directory as the new root. Otherwise, in the worst case, when
someone runs `spack env create --with-view=./view -d .` and `view`
already exists and is a symlink to `/`, Spack effectively runs `rm -rf /`.
`file` was used to detect Python scripts with shebangs, so that the interpreter could be changed from <python prefix> to <view path>. With this change, we detect shebangs using Python instead, so that `file` is no longer required.
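The detection itself is trivial in Python (a sketch of the idea):
```python
def has_shebang(path):
    # scripts start with the two bytes '#!'; no need to shell out to file(1)
    with open(path, "rb") as f:
        return f.read(2) == b"#!"
```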
The number of commit characters in patch files fetched from GitHub can change,
so we should use `full_index=1` to enforce full commit hashes (and a stable
patch `sha256`).
Similarly, URLs for branches like `master` don't give us stable patch files,
because branches are moving targets. Use specific tags or commits for those.
- [x] update all github patch URLs to use `full_index=1`
- [x] don't use `master` or other branches for patches
- [x] add an audit check and a test for `?full_index=1`
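A package-side sketch of the convention (URL, sha256, and version below are placeholders):
```python
patch(
    "https://github.com/example/project/commit/"
    "1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b.patch?full_index=1",
    sha256="<sha256 of the stable, full-index patch file>",
    when="@1.2.3",
)
```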
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
The "Known issues" section reports only 2 issues, among the bugs reported
on GitHub. One of the two is also outdated, since the issue has been solved
with the new concretizer. Thus, this commit removes the section.
* This commit converts uses of Boost.with_default_variants to the variants
that packages precisely depend upon. This is the first batch
of 20 packages with modified boost dependencies.
* Style fixes
* Tested bridger: works for gcc-4.9.3 and gcc-8.3.1
Commit 26ff443 made the Gitlab pipeline fail on develop
(while it was not failing in the original PR) due to errors in the
fetcher. This change preserves the new versions, but will give
some time for us to sync our tarball mirror for better reliability
* vecgeom: fix cuda arch
* vecgeom: change 'options' to 'args'
* vecgeom: add spec to locals
* vecgeom: suppress architecture specializations when cuda
- constrain samtools to version 1.13
- replace lzma dependency with xz
- add missing dependencies for libdeflate and openssl
- explicitly set LD_FLAGS for dependencies in makefile
From the release announcement: "This is a special bugfix release ahead of
schedule to address a memory leak that was happening on certain function calls
when using Cython. The memory leak consisted of a small constant amount of bytes
in certain function calls from Cython code. Although in most cases this was not
very noticeable, it was very impactful for long-running applications and certain
usage patterns. Check bpo-46347 for more information."
When you install Spack from a tarball, it will always show an exact
version for Spack itself, even when you don't download a tagged commit:
```
$ wget -q https://github.com/spack/spack/archive/refs/heads/develop.tar.gz
$ tar -xf develop.tar.gz
$ ./spack-develop/bin/spack --version
0.16.2
```
This PR sets the Spack version to `0.18.0.dev0` on develop, following [PEP440](https://github.com/spack/spack/pull/25267#issuecomment-896340234) as
suggested by Adam Stewart.
```
spack (fix/set-dev-version)$ spack --version
0.18.0.dev0 (git 0.17.1-1526-e270464ae0)
spack (fix/set-dev-version)$ mv .git .git_
spack $ spack --version
0.18.0.dev0
```
- [x] Update the release guide
- [x] Add __version__ to spack's __init__.py
- [x] Use PEP 440 canonical version strings
- [x] Make spack --version output [actual version] (git version)
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* rivet: fix dependency build types
If it isn't a python package, there is no good reason to change the default build type to remove link
* rivet: turn swig into build dependency
* Add tests to ensure google cloud storage urls work as mirrors
This commit adds two tests to track that GCS buckets can work as
mirrors, and can be parsed as valid URLs.
Currently, gs:// format URLs are not correctly parsed.
* Fix URL parsing for GCS buckets
This commit adds GCS bucket URLs as valid URLs.
* lower priority of package-provided urls
This change favors urls found in a scraped page over those provided by
the package from `url_for_version`. In most cases this doesn't matter,
but R specifically returns known bad URLs in some cases, and the
fallback path for a failed fetch uses `fetch_remote_versions` to find a
substitute. This fixes that problem.
fixes #29204
* consider what links actually exist in all cases
Checksum was only actually scraping when called with no versions. It
now always scrapes and then selects URLs from the set of URLs known to
exist whenever possible.
fixes #25831
* bow to the wrath of flake8
* test-fetch urls from package, prefer if successful
* Update lib/spack/spack/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* reword as suggested
* re-enable mypy specific ignore and ignore pyflakes
* remove flake8 ignore from .flake8
* address review comments
* address comments
* add sneaky missing substitute
I missed this one because we call substitute on a URL that doesn't
contain a version component. I'm not sure how that's supposed to work,
but apparently it's required by at least one mock package, so back in it
goes.
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Adds `spack external read-cray-manifest`, which reads a json file that describes a set of package DAGs. The parsed results are stored directly in the database. A user can see these installed specs with `spack find` (like any installed spec). The easiest way to use them right now as dependencies is to run `spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described in the file to Spack's installation DB and will also install described compilers to the compilers configuration (the expected format of the file is described in this PR as well including examples of the file)
* Database records now may include an "origin" (the command added in this PR registers the origin as "external-db"). In the future, it is assumed users may want to be able to treat installs registered with this command differently (e.g. they may want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec is concrete
* I don't think the hashes of installed-and-concrete specs should change and this was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied (external specs may mention compilers that are not registered, and without this change they would fail in `normalize` when calling `validate_or_raise`)
* it might be this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do something like "uninstall all packages added with `spack read-external-db`")
* This is now possible with `spack uninstall --all --origin=external-db` (this will remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* Use same cxx value as root
* Remove pointer syntax from non-pointer type in source
* Run patch function before build
* Use raw string in filter_file and merge edit function with patch
* Escape parentheses
* Use gDirectory from ROOT instead of CurrentDirectory function
This PR removes a few outdated sections from the "Basics" part of the
documentation. It also makes a few topics under the environment section
more prominent by removing an unneeded spack.yaml subsection and
promoting everything under it.
* Make boost composable
Currently Boost enables a few components through variants by default,
which means that if you want to use only what you need and no more, you
have to explicitly disable these variants, leading to concretization
errors whenever a second package explicitly needs those components.
For instance if package A only needs `+component_a` it might depend on
`boost +component_a ~component_b`. And if package B only needs
`+component_b` it might depend on `boost ~component_a +component_b`. If
package C now depends on both A and B, this leads to unsatisfiable
variants and hence a concretization error.
However, if we default to disabling all components, package A can simply
depend on `boost +component_a` and package B on `boost +component_b` and
package C will concretize to depending on `boost +component_a
+component_b`, and whatever you install, you get the bare minimum.
* Fix style
* Added composable boost dependencies for folly
* fixing akantu merge issue
* hpctoolkit boost dependencies already defined
* Fix Styles
* Fixup style once more
* Adding isort fix
* isort one more time
* Fix for package audit issue
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Ryan O'Malley <rd.omalley@comcast.net>
Consolidate Spack's internal filepath logic to a select
few places and refactor to consistent internal usage of
os.path utilities. Creates a prefix, and a series of utilities
in the path utility module that facilitate handling paths
in a platform agnostic manner.
Convert Windows paths to posix paths internally
Prefer posixpath.join instead of os.path.join
Updated util/ directory to account for Windows integration
Co-authored-by: Stephen Crowell <stephen.crowell@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Module template format for windows (#23041)
* Incorporate new search location
* Add external user option
* proper doc string
* Explicit commands in getting started
* raise during chgrp on Win
recover installer changes
Notate admin privileges
Windows phase install hooks
Find external python and install ninja (#23496)
Allow external find python to find windows python and spack install ninja
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Fixup common tests
* Remove requirement for Python 2.6
* Skip new failing test
Windows: Update url util to handle Windows paths (#27959)
* update url util to handle windows paths
* Update tests to handle fixed url handling
* canonicalize path only when the path type matches the host platform
* Skip some url tests on Windows
Co-authored-by: Omar Padron <omar.padron@kitware.com>
Use threading.TIMEOUT_MAX when available (#24246)
This value was introduced in Python 3.2. Specifying a timeout greater than
this value will raise an OverflowError.
Co-authored-by: Lou Lawrence <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Add compiler hint to the root spec for Windows
Reporters on Windows (#26038)
Reporters use Jinja2 as the templating engine, and Jinja2 indexes
templates by Unix separators, even on Windows, so search using Unix paths
on all systems.
Support patching on win via git (#25871)
Handle GRP on windows
CMake - Windows Bootstrap (#25825)
Remove hardcoded cmake compiler (#26410)
Revert breaking cmake changes
Ensure no autotools on Windows
Perl on Windows (#26612)
Python source build windows (#26313)
Reconfigure sysconf for Windows
Python2.6 compatibility
Fixup new sbang tests for windows
Ruby support (#28287)
Add NASM support (#28319)
Add mock Ninja package for testing
* Style fixes
* Use Python's zipfile, if available
The compression libs are optional in Python. Rely on python as a
first attempt then fall back to `unzip`
MSVC's internal CMake and Ninja now detected by spack external find and added to packages.yaml
Saving progress on packaging zlib for Windows
Fixing the shared CMake flag
* Loading Intel's ifx Fortran compiler into MSVC; if there are multiple
versions of MSVC installed and detected, ifx will only be placed into
the first block written in compilers.yaml. The version number of ifx can
be detected using MSVC's version flag (instead of /QV) by using
ignore_version_errors. This commit also provides support for detection
of Intel compilers in their own compiler block by adding ifx.exe to the
fc/f77_name blocks inside intel.py
* Giving CMake a Fortran compiler argument
* Adding patch file for removing duplicated mangling header for versions 3.9.1 and older; static and shared now successfully building on Windows
* Have netlib-lapack depend on ninja@1.10
Co-authored-by: John R. Cary <cary@txcorp.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Making a default config.yaml for Windows
Small path length for build_stage
Provide more prerequisite details, mention default config.yaml
Killing an unnecessary setvars call
Replacing some lost changes, proofreading, updating windows-supported package list
Co-authored-by: John Parent <john.parent@kitware.com>
* Add 'make-installer' command for Windows
* Add '--bat' arg to env activate, env deactivate and unload commands
* An equivalent script to setup-env on linux: spack_cmd.bat. This script
has a wrapper to evaluate cd, load/unload, env activate/deactivate.(#21734)
* Add spacktivate and config editor (#22049)
* spack_cmd: will find python and spack on its own. It preferentially
tries to use python on your PATH (#22414)
* Ignore Windows python installer if found (#23134)
* Bundle git in windows installer (#23597)
* Add Windows section to Getting Started document
(#23131), (#23295), (#24240)
Co-authored-by: Stephen Crowell <stephen.crowell@kitware.com>
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Co-authored-by: Ben Cowan <benc@txcorp.com>
Update Installer CI
Co-authored-by: John Parent <john.parent@kitware.com>
Made the vcvars batch script location a member variable of the msvc compiler subclass, initialized from the compiler executable path. Added a setup_custom_environment() method to the msvc subclass that sources the vcvars script, dumps the environment, and copies the relevant environment variables to the Spack environment. Added class variables to the Windows OS and MSVC compiler subclasses to enable finding the compiler executables and determining their versions.
* Fixed path and uid issues.
* Added needed import statement; kluged .exe extension.
* Got package to build. Some manual intervention necessary, including sourcing the MSVC setup script and having certain configuration parameters.
* Removed CMake executable suffix hack.
To provide Windows-compatible functionality, spack code should use
llnl.util.symlink instead of os.symlink. On non-Windows platforms
and on Windows where supported, os.symlink will still be used.
Use junctions when symlinks aren't supported on Windows (#22583)
Support islink for junctions (#24182)
Windows: Update llnl/util/filesystem
* Use '/' as path separator on Windows.
* Recognizing that Windows paths start with '<Letter>:/' instead of '/'
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
os.rename() fails on Windows if file already exists.
Create getuid utility function (#21736)
On Windows, replace os.getuid with ctypes.windll.shell32.IsUserAnAdmin().
Tests: Use getuid util function
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
1. Forwarding sys.stdin, e.g. use input_multiprocess_fd,
gives an error on Windows. Skipping for now
2. subprocess_context needs to serialize for Windows, like it does
for Mac.
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Snapshot of some MSVC infrastructure added during experiments a while ago. Rebasing from spack/develop.
* Added platform and OS definitions for Windows.
* Updated Windows platform file to conform to new archspec use.
* Added Windows as a platform; introduced some debugging code.
* Added type annotations.
* Fixed copyright.
* Removed print statements.
* Ensure `spack arch` returns correctly on Windows (#21428)
* Correctly identify windows as 'windows-Windows10-AMD64'
* python: allow versions with garbage suffix
Ubuntu 22.04's preview python prints its version as 3.10.2+; the + causes
version parsing to fail and breaks detection.
* Add version comment
* match VALID_VERSION regex
* libiconv: compile with pic even when static build
* lmod: require shared lua
It seems to be unable to detect lua-posix when using a static lua:
```
Error: The follow lua module(s) are missing: posix
```
Re-work the checks and comparisons around commit versions. When no
commit version is involved the overhead is now in the noise; where one
is involved, the overhead is now constant rather than linear.
Handle 'develop' in the version string. The versions from the HDF5 code were not
matching because 'develop-' is not part of the HDF5 version. Also, the
develop-x.x versions in spack omit the release version (third) number
because the branch spans all of the release versions.
* Update: py-cmake
Add additional dependencies as declared by the `py-cmake` repository.
Note: for either from-source or from-binary builds, this downloads
additional software via the network. We might want to propose upstream
patches to make this work on nodes without internet connection.
* Add Review Comments + Newest Version
* Add: Ninja
Preferred generator according to outputs and upstream repo logic
* Attempt to use resource() for CMake source
* [py-watchdog] switched to pypi and audited dependencies
* [py-watchdog] added version 2.1.6
* [py-watchdog] updated dependencies for old versions
* [py-watchdog] added when for variant
* [py-watchdog] added some newlines to make flake8 happy
* hsa-rocr-dev, llvm-amdgpu: change dependency libelf to elf
Change the libelf dependency to the virtual elf for two rocm packages.
This allows other packages (hpctoolkit) to combine rocm and dyninst
(with elfutils) while still being able to build rocm with libelf when
needed, e.g. on darwin.
* add comment describing include path for libelf vs elfutils
fixes #29446
The new setup_*_environment functions have been falling back
to calling the old functions and warning the user since #11115.
This commit removes the fallback behavior and any use of:
- setup_environment
- setup_dependent_environment
in the codebase
Change the internal representation of `Spec` to allow for multiple dependencies or
dependents stemming from the same package. This change permits representing cases
which are frequent in cross compiled environments or to bootstrap compilers.
Modifications:
- [x] Substitute `DependencyMap` with `_EdgeMap`. The main differences are that the
latter does not support direct item assignment and can be modified only through its
API. It also provides a `select_by` method to query items.
- [x] Reworked a few public APIs of `Spec` to get list of dependencies or related edges.
- [x] Added unit tests to prevent regression on #11983 and prove the synthetic construction
of specs with multiple deps from the same package.
Since #22845 went in first, this PR reuses that format and thus it should not change hashes.
The same package may be present multiple times in the list of dependencies with different
associated specs (each with its own hash).
* The new version of Wonton requires the new version of Jali
* Wonton: versions after 1.2.10 don't require boost at all
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* environment.py: allow link:run
Some users want minimal views, excluding run-type dependencies, since
those type of dependencies are covered by rpaths and the symlinked
libraries in the view aren't used anyways.
With this change, an environment like this:
```
spack:
specs: ['py-flake8']
view:
default:
root: view
link: run
```
includes python packages and python, but no link type deps of python.
* ECP-SDK/VTK-m: Update ROCm variant
VTK-m: set constraint for when rocm/kokkos are available.
SDK: make it a ROCmPackage and propagate the amdgpu_arch and rocm variants to
VTK-m.
Note: the SDK has to check vtk-m@1.7: and :1.6 explicitly in order to have 1.7
be selected by default if +rocm in the SDK.
* ECP-SDK: Enable ROCm + VTK-m constraints
* Adding Panzer as Default
* Set Panzer as non-default
* Updated the conflict for Panzer.
* Updated the conflict for Panzer.
* Resolve the issue with Stratimikos and Thyra
* Fixing stk build issues.
* Fixing stk build issues.
* Adding another conflict for Thyra
* cray-libsci: only be a provider for scalapack with +mpi
If a package explicitly links the scalapack provider we might otherwise end up with different variants of libsci being linked: the explicitly linked one and the one added by the Cray compiler wrappers.
* cp2k: require cray-libsci+openmp with +openmp for consistency
otherwise we might get 2 different libsci linked: one explicitly, the other one via the Cray compiler wrappers, leading at least to segfaults during cleanup
* cp2k: depend on cray-fftw+openmp with +openmp
* hdf5: mark +fortran+shared conflict for older version
This version was only activated unintentionally by silo's conflict
statement, but `@1.8.15+shared+fortran+cxx` errors out in configure:
```
CMake Error at CMakeLists.txt:814 (message):
**** Shared FORTRAN libraries are unsupported ****
```
* silo: refine hdf5 conflicts to avoid building old version
Before this, `silo+hdf5` concretized to 1.10.7 or sometimes 1.8.15. Now
I've verified it works for the following configurations:
```
silo@4.10.2 patches=7b5a1dc,952d3c9
^ hdf5@1.10.7 api=default
silo@4.10.2 patches=7b5a1dc,952d3c9,eb2a3a0
^ hdf5@1.10.8 api=v18
silo@4.10.2 patches=7b5a1dc,952d3c9,eb2a3a0
^ hdf5@1.12.1 api=v110
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.12.1 api=v110
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.10.8 api=default
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.12.1 api=default
```
and verified that the following fail:
```
silo@4.10.2 ^hdf5@1.12.1 api=default
silo@4.11 ^hdf5 api=v18
silo@4.11-bsd ^hdf5@1.13.0 api=v12
silo@4.11-bsd ^hdf5@1.13.0 api=default
```
and have updated the constraints to match. Hdf5 no longer has to be
downgraded to work with Silo.
* silo: fix dependency conflicts
* py-h5py: shorten and add comments to py-h5py hdf5 dependencies
* e4s: remove slightly outdated hdf5 requirement
* e4s: remove excessive hdf5 variant constraints
These I think are holdovers from the old concretizer.
- `hdf5_compat` can be expressed as `+hdf5 ^hdf5@1.8`
- The extra variants on hdf5 shouldn't break conduit
- axom unnecessarily restricts hdf5 version
* conduit: restore hdf5_compat flag
New versions don't try to configure docs targets at all when the
BUILD_DOCS option is turned off. This avoids CMake warnings
when docs dependencies are not found.
Speeds up comparison on `Version` by ~2.5x, e.g.
```python
In [1]: v = spack.version.Version('1.0.0'); w = spack.version.Version('1.0.2')
In [2]: %timeit v < w
1.47 µs ± 5.59 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
535 ns ± 1.75 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
* Bugfix in var/spack/repos/builtin/packages/esmf/package.py
* Bug fixes in var/spack/repos/builtin/packages/esmf/package.py to build ESMF on macOS with clang+gfortran and on cray
* Add maintainer to var/spack/repos/builtin/packages/esmf/package.py
* Fix style errors
* Fix more style errors
* py-jupytext: add version 0.13.6
From da3fcc305d:
markdown-it-py v2.0 implements some internal changes, but won't affect jupytext
* py-jupytext: keep mdit-py version restricted to 1
* py-jupytext: update dependencies
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add HiOp v0.5.4, update magma constraint
* Add v2.6.2rc1 to magma, make hiop depend on it
* Update var/spack/repos/builtin/packages/hiop/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The 'multicore' backend always uses SMP, so reverse
the logic of the `conflict` clause. This resolves an issue
where the '+smp' default caused the 'backend' to switch
away from 'multicore' unintentionally (#29234).
fixes #29203
This PR fixes a subtle bug we have when importing
Spack packages as Python modules that can lead to
multiple module objects being created for the same
package.
It also fixes all the places in unit tests where
"relying" on the old bug was crucial to getting a new
"clean" state of the package class.
This commit reverts the GCS fetch strategy to before commit:
d759612523
The previous commit added some s3 syntax to handle connections, but
added them into the GCS fetch strategy in a way that prevents GCS from
working anymore.
* rocmcc compiler: initial commit based on aocc and clang
Co-authored-by: luker <luke.roskop@hpe.com>
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Remove recipes that are not actually required for LBANN or DiHydrogen to
build. These should be concretized within the same environment or
installed via PIP using the same Python that installed LBANN.
Removing these will help eliminate build time failures that are
actually associated with Python tools, not LBANN.
The status displayed in the terminal title could be wrong when doing
distributed builds. For instance, doing `spack install glib` in two
different terminals could lead to the current package being reported as
`40/29` due to the way Spack handles retrying locks.
Work around this by keeping track of the package IDs that were already
encountered to avoid counting packages twice.
* HIP: Change mesa18 dep to gl
* Mesa: Conflict with llvm-amdgpu when +llvm and swr
* Add def for suffix
* Disable llvm suffix patch.
* LLVM: Remove version suffix patches
* ECP-SDK: ParaView 5.11: required for CUDA
* Add conflict with ParaView@master
Because of the additional constraints for cuda, ParaView@master may be
selected unintentionally. Prefer older versions of ParaView without cuda
to master with cuda.
* hypre: Add releases 2.21.0 and 2.22.0
* Revert "hypre: Add releases 2.21.0 and 2.22.0"
This reverts commit 8921cdb3ac.
* Address external linkage failures in elfutils 0.185:
https://bugs.gentoo.org/794601
https://sourceware.org/pipermail/elfutils-devel/2021q2/003862.html
Encountered while building within a Spack environment.
* Revert "Address external linkage failures in elfutils 0.185:"
This reverts commit 76b93e4504.
* paraview: The ninja generator has problems with XL and CCE
See https://gitlab.kitware.com/paraview/paraview/-/issues/21223
* paraview: Add variant to allow choice of cmake generator.
This will be necessary until problems with cmake+ninja on XL and
CCE builds can be resolved.
See https://gitlab.kitware.com/paraview/paraview/-/issues/21223
* paraview: ninja generator problems with XL/CCE
By popular preference, abandon the idea of a special variant
and select the generator based on compiler.
* Greg Becker suggested using the dedicated "generator" method to
pass the choice of makefile generator to cmake.
* paraview: The build errors I saw before with paraview%cce + ninja
have not reappeared in subsequent testing, so I'm dropping it from this
PR. If they re-occur I'll report the issue separately to KitWare.
* py-nbclassic: add 0.3.5
* Update var/spack/repos/builtin/packages/py-nbclassic/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix style
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add a new test to catch exit code failure
fixes #29226
This introduces a new unit test that checks the return
code of `spack unit-test` when it is supposed to fail.
This is to prevent bugs like the one introduced in #25601
in which CI didn't catch a missing return statement.
In retrospective it seems that the shell test we have right
now all go through `tty.die` or similar code paths which
call `sys.exit(a)` explicitly. This new test instead checks
`spack unit-test` which relies on the return code from
command invocation in case of errors.
* Add 'develop' version for dmtcp
* Update var/spack/repos/builtin/packages/dmtcp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The `spack external find binutils` command was failing to find my system
binutils because the regex was not matching. The name of the executable
follows the string 'GNU'; I tested this with three different installations,
so I changed the regex to look for that. On my CentOS-7 system, the version
string included the RPM details, so I set the version regex to capture just
the first three parts of the version.
The system compiler on RHEL7 fails to build the latest linux-uuid.
```
util-linux-uuid@2.37.4%gcc@4.8.5 arch=linux-rhel7-haswell
```
results in:
```
libuuid/src/unparse.c:42:73: error: expected ';', ',' or ')' before 'fmt'
static void uuid_fmt(const uuid_t uuid, char *buf, char const *restrict fmt)
```
It looks like it's assuming C99 by default so there may be a better way
to handle this... but this at least works
See https://github.com/spack/spack/pull/28468/files#r809156986
If we exit before generating the
error("Dependencies must have compatible OS's with their dependents")
...
facts, we'll output a problem that is effectively
different from the one solved by clingo.
* cmd/checksum: prefer url matching url_from_version
This is a minimal change toward getting the right archive from places
like github. The heuristic is:
* if an archive url exists, take its version
* generate a url from the package with pkg.url_from_version
* if they match
* stop considering other URLs for this version
* otherwise, continue replacing the url for the version
I doubt this will always work, but it should address a variety of
versions of this bug. A good test right now is `spack checksum gh`,
which checksums macos binaries without this, and the correct source
packages with it.
fixes #15985
related to #14129
related to #13940
* add heuristics to help create as well
Since create can't rely on an existing package, this commit adds another
pair of heuristics:
1. if the current version is a specifically listed archive, don't
replace it
2. if the current url matches the result of applying
`spack.url.substitute_version(a, ver)` for any a in archive_urls,
prefer it and don't replace it
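As a rough sketch (the helper name `keep_existing_url` is hypothetical;
`spack.url.substitute_version` is the function named above):
```python
import spack.url

def keep_existing_url(url, version, archive_urls):
    """Hypothetical helper combining the two create-time heuristics."""
    # 1. a specifically listed archive is never replaced
    if url in archive_urls:
        return True
    # 2. prefer the url if it matches any archive url after substituting
    #    in this version
    return any(
        spack.url.substitute_version(a, version) == url for a in archive_urls
    )
```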
fixes #13940
* clean up style and a lingering debug import
* ok flake8, you got me
* document reference_package argument
* Update lib/spack/spack/util/web.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* try to appease sphinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We can see what is in the bootstrap store with `spack find -b`, and you can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.
We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.
This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python.
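For reference, the consolidated helper is functionally the same as `contextlib.nullcontext` from
newer Pythons; a minimal sketch:
```python
from contextlib import contextmanager

@contextmanager
def nullcontext(*args, **kwargs):
    """Do-nothing context manager; delete once contextlib.nullcontext
    (Python 3.7+) is always available."""
    yield
```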
- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
* py-imageio: add 2.16.0
* Update var/spack/repos/builtin/packages/py-imageio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Some "concrete" versions on the command line, e.g. `qt@5` are really
meant to satisfy some actual concrete version from a package. We should
only assume the user is introducing a new, unknown version on the CLI
if we, well, don't know of any version that satisfies the user's
request. So, if we know about `5.11.1` and `5.11.3` and they ask for
`5.11.2`, we'd ask the solver to consider `5.11.2` as a solution. If
they just ask for `5`, though, `5.11.1` or `5.11.3` are fine solutions,
as they satisfy `@5`, so use them.
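A minimal sketch of that rule, with a simplistic stand-in for Spack's real
`Version.satisfies` logic:
```python
def versions_to_consider(requested, known):
    """If any known version satisfies the request, don't invent a new one."""
    def satisfies(concrete, req):
        # stand-in predicate: '5.11.3' satisfies '5' and '5.11'
        return (concrete + ".").startswith(req + ".")

    if any(satisfies(v, requested) for v in known):
        return known                 # @5: 5.11.1 or 5.11.3 are fine solutions
    return known + [requested]       # @5.11.2: unknown, let the solver try it
```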
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* geant4-data: use build+run-only depends
* geant4: point to dependent datadir
This is "used" in the configure step to set up the Geant4Config.cmake
file's persistent pointers to the data directory, but the dependency
is still listed as "run" -- though I'm not sure this is the right behavior
since the geant4 installation really does change as a function of the
data directory, and the installation is incomplete/erroneous
without using one.
* Style
* trilinos: disable dl on macOS
* py-sphinx-argparse: add explicit poetry dependency
* libzmq: fix libbsd dependency
libbsd is *always* required when +libbsd (introduced in #28503). #20893
had previously removed the macos dependency because libbsd wasn't always
enabled. Libbsd support is only available after 4.3.2 so change it to a
conflict rather than bumping the dependency.
* hdf5: work around GCC11.2 monterey fortran bug
* go-bootstrap: mark conflict for monterey
* py-tensorflow: add versions 2.5.0 and 2.6.0
- add version 2.5.0
- add version 2.6.0
- add patches for newer protobuf
- set constraints
* Remove 'import os' left over from testing
* Remove unused patch file
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-clang dependency
* Adjust py-clang constraint
* Build tensorflow with tensorboard
- tensorflow
- added 2.6.1 and 2.6.2 versions
- tensorboard
- have bazel use number of jobs set by spack
- add versions and constraints
- new package: py-tensorboard-data-server
- use wheel for py-tensorboard-plugin-wit
This package can not build with newer versions of bazel that are
needed for newer versions of py-tensorboard.
* Update var/spack/repos/builtin/packages/py-clang/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Remove empty line at end of file
* Fix import sorting
* Adjust python dependencies on py-clang
* Add version 2.7.0 of pt-tensorflow and py-tensorboard
* Adjust bazel constraints
* bazel-4 support begins with py-tensorflow-2.7.0
* Adjust dependencies
* Loosen cuda constraint on versions > 2.5
Tensorflow-2.5 and above can use cuda up to version 11.4.
* Add constraints to patch
The 0008-Fix-protobuf-errors-when-using-system-protobuf.patch patch
should only apply to versions 2.5 and above.
* Adjust constraints
- versions 2.4 and below need protobuf-3.12 and below
- versions 2.4 and above can use up to cuda-11.4
- versions 2.2 and below can not use cudnn-8
- the null_linker_bin patch should only be applied to versions 2.5 and
above.
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix py-grpcio dependency for version 2.7
Also, make sure py-h5py mpi specs are consistent.
* Add llvm as run dependency.
* Fix python spec for py-tensorboard
* Fix py-google-auth spec for py-tensorboard
* Do not override the pip spec for tensorboard-plugin-wit
* Converted py-tensorboard-plugin-wit to wheel only package
* Fix bazel dependency spec in tensorflow
* Adjust pip masks
- allow tensorboard to be specified in pip constraints
- mask tensorflow-estimator
* Remove blank line at end of file
* Adjust pip constraints in setup.py
Also, adjust constraint on a patch that is fixed in 2.7
* Fix flake8 error
Adjust formatting for consistency.
* Get bazel dep right
* Fix old cudnn dependency, caught in audit test
* Adjust the regex to ensure proper line is changed
* Add py-libclang package
- Stripped the py-clang package down to just version 5
- added comments to indicate the purpose of py-clang and that
py-libclang should be preferred
- set dependencies accordingly in py-tensorflow
* Remove cap on py-h5py dependency for v2.7
* Add TODO entries for tensorflow-io-gcs-filesystem
* Edit some comments
* Add phases and select python in PATH for tensorboard-data-server
* py-libclang
- remove py-wheel dependency
- remove raw string notation in filter_file
* py-tensorboard-data-server
- remove py-wheel dep
- remove py-pip dep
- use python from package class
* py-tensorboard-plugin-wit
- switch to PythonPackage
- add version 1.8.1
- remove unneeded code
* Add comment as to why a wheel is need for tensorboard-plugin-wit
* remove which pip from tensorboard-data-server
* Fix dependency specs in tensorboard
* tweak dependencies for tensorflow
* fix python constraint
* Use llvm libs property
* py-tensorboard-data-server
- merge build into install
- use std_pip_args
* remove py-clang dependency
* remove my edits to py-tensorboard-plugin-wit
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
See https://github.com/spack/spack/issues/25353#issuecomment-1041868116
This commit changes the default behavior of
```
$ spack external find
```
from searching all the possible packages Spack knows about to
search only for the ones tagged as being a "build-tool".
It also introduces a `--all` option to restore the old behavior.
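Illustratively (the helper and the exact tag string are schematic, not
Spack's actual code):
```python
def packages_to_search(packages, search_all=False):
    """Default to build-tool-tagged packages; --all restores the old behavior."""
    if search_all:
        return packages
    return [pkg for pkg in packages if "build-tools" in pkg.tags]
```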
Prefer `sw_vers` to `platform.mac_ver`. In an anaconda3 installation, for example, the latter reports 10.16 on Monterey -- I think this is affected by how and where the python instance was built.
Use MACOSX_DEPLOYMENT_TARGET if present to override the operating system choice.
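A minimal sketch of the described lookup order (not the exact implementation):
```python
import os
import subprocess

def macos_version():
    # an explicit deployment target overrides the detected OS version
    env = os.environ.get("MACOSX_DEPLOYMENT_TARGET")
    if env:
        return env
    # prefer sw_vers: platform.mac_ver() can report the SDK version the
    # Python interpreter was built against (e.g. 10.16 on Monterey)
    out = subprocess.check_output(["sw_vers", "-productVersion"])
    return out.decode().strip()
```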
It will be useful for metrics gathering and possibly debugging to
have this environment variable available in the runner pods that
do the actual rebuilds.
Since Spack does not install external packages, this commit skips them by
default when running stand-alone tests. The assumption is that such packages
have likely undergone an acceptance test process.
However, the tests can be run against installed externals using
```
% spack test run --externals ...
```
fixes #28260
Since we iterate over different variants from many packages, the variant
values may have types which are not comparable, which causes errors
at runtime. This is not a real issue though, since we don't need the facts
to be ordered. Thus, to avoid needless sorting, the sorted function has
been removed, and a comment has been added to tip off any developer who
might need to inspect these clauses for debugging that sorting can be
added back on the first two items only.
It's kind of difficult to add a test for this, since the error depends on
whether Python's sorting algorithm ever needs to compare the third
value of a tuple being ordered.
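The failure mode in a nutshell: variant values drawn from different packages
need not be mutually comparable in Python 3.
```python
# mixing bools, strings, and numbers is enough to break sorted():
sorted([True, "on", 2.5])
# TypeError: '<' not supported between instances of 'str' and 'bool'
```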
* extensions: allow multiple "extends" directives
This will allow multiple extends directives in a package as long as only one of
them is selected as a dependency in the concrete spec.
* document the option to have multiple extends
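An illustrative (hypothetical) package using the new capability:
```python
from spack import *

class Example(Package):
    """Hypothetical package with two possible extendees."""

    variant("python", default=True, description="Build Python bindings")
    variant("lua", default=False, description="Build Lua bindings")

    # both directives may coexist; concretization picks at most one
    extends("python", when="+python")
    extends("lua", when="+lua")
```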
Reuse previously was a very invasive change that required parameters to be added to all
the methods that called `concretize()` on a `Spec` object. With the addition of
concretizer configuration, we can use the config system to simplify this argument
passing and keep the code cleaner.
We decided that concretizer config options should be read at `Solver` instantiation
time, and if config changes between instantiation of a particular solver and
`solve()` invocation, the `Solver` should use the settings from `__init__()`.
- [x] remove `reuse` keyword argument from most concretize functions
- [x] refactor usages to use `spack.config.override("concretizer:reuse", True)`
- [x] rework argument passing in `Solver` so that parameters are set from config
at instantiation time
`--reuse` was previously handled individually by each command that
needed it. We are growing more concretization options, and they'll
need their own section for commands that support them.
Now there are two concretization options:
* `--reuse`: Attempt to reuse packages from installs and buildcaches.
* `--fresh`: Opposite of reuse -- traditional spack install.
To handle these, this PR adds a `ConfigSetAction` for `argparse`, so
that you can write argparse code like this:
```
subgroup.add_argument(
'--reuse', action=ConfigSetAction, dest="concretizer:reuse",
const=True, default=None,
help='reuse installed dependencies/buildcaches when possible'
)
```
With this, you don't need to add logic to pull the argument out and
handle it; the `ConfigSetAction` just does it for you. This can probably
be used to clean up some other commands later, as well.
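A hedged sketch of what such an action can look like (`spack.config.set` is
real; the details here are illustrative, not the actual implementation):
```python
import argparse
import spack.config

class ConfigSetAction(argparse.Action):
    """Set a config value (dest like 'concretizer:reuse') when the flag is given."""

    def __init__(self, option_strings, dest, const=None, default=None, **kwargs):
        kwargs.setdefault("nargs", 0)  # the option behaves like a flag
        super().__init__(option_strings, dest, const=const, default=default, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        # route the value into config instead of the argparse namespace
        spack.config.set(self.dest, self.const, scope="command_line")
```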
Code that was previously passing `reuse=True` around everywhere has
been refactored to use config, and config is set from the CLI using
a new `add_concretizer_args()` function in `spack.cmd.common.arguments`.
- [x] Add `ConfigSetAction` to simplify concretizer config on the CLI
- [x] Refactor code so that it does not pass `reuse=True` to every function.
- [x] Refactor commands to use `add_concretizer_args()` and to pass
concretizer config using the config system.
Config scopes were different for `config` and `mutable_config`,
and `mutable_config` did not have a command line scope.
- [x] Fix by consolidating the creation logic for the two fixtures.
The concretizer is going to grow to have many more configuration options,
and we really need some structured config for that.
* We have the `config:concretizer` option that chooses the solver,
but extending that is awkward (we'd need to replace a string with
a `dict`) and the solver choice will be deprecated eventually.
* We have the `concretization` option in environments, but it's
not a top-level config section -- it's just for environments,
and it also only admits a string right now.
To avoid overlapping with either of these and to allow the most
extensibility in the future, this adds a new `concretizer` config
section that can be used in and outside of environments. There
is only one option right now: `reuse`. This can expand to include
other options later.
Likely, we will soon deprecate `config:concretizer` and warn when
the user doesn't use `clingo`, and we will eventually (sometime later)
move the `together` / `separately` options from `concretization` into
the top-level `concretizer` section.
This commit just adds the new section and schema. Fully wiring it
up is TBD.
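A hedged sketch of the schema addition (Spack config schemas are plain
jsonschema dicts; the exact layout is illustrative):
```python
properties = {
    "concretizer": {
        "type": "object",
        "additionalProperties": False,
        "properties": {
            "reuse": {"type": "boolean"},
        },
    }
}
```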
The solver has a lot of configuration associated with it. Rather
than adding arguments to everything, we should encapsulate that
in a class. This is the start of that work; it replaces `solve()`
and its kwargs with a class and properties.
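Schematically (illustrative shape only, not the actual class):
```python
import spack.config

class Solver:
    def __init__(self):
        # settings are read from config once, at instantiation time
        self.reuse = spack.config.get("concretizer:reuse", False)

    def solve(self, specs):
        ...  # consults self.reuse instead of taking a kwarg
```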
* Add 'stable' to the list of infinity version names.
Rename libunwind 1.5-head to 1.5-stable.
* Add stable to the infinite version list in packaging_guide.rst.
* py-etelemetry: add 0.3.0
* Update var/spack/repos/builtin/packages/py-etelemetry/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
No version of py-nbconvert@5: can be concretized due to conflicting versions
of flit-core that are required. This issue could be solved by separate
concretization of build deps.
* archspec: remove pyproject.toml to workaround PEP517
If pyproject.toml is in the folder, that is preferred to the
setup.py packaged by poetry itself. Adding a dependency on
poetry for deploying a pure Python package seems wasteful,
since the package to be deployed just needs to be copied in
place, so we don't want to build rust for that.
* archspec: patch pyproject.toml to comply to PEP517
See https://python-poetry.org/docs/pyproject/#poetry-and-pep-517
* Fix style issues
The new HDF5 version 1.12 API causes compiler errors due to modified function prototypes. Note that version 1.11 is the development version of HDF5 1.12.
* py-numba: add 0.55.1
* Remove comment
* Pin down py-llvmlite version for older py-numba releases
* Remove py-llvmlite deps for releases not in spack
* Set upper bounds for python and py-numpy
* Add stricter upper bound to py-numpy for releases <=0.47
Setting Spack's `$prefix` to `$DESTDIR` and not to `$PREFIX` installs the
package in `$prefix/usr/local` and not in `$prefix`, so when it is
loaded the `direnv` executable is not "seen" by the environment.
* Added support to LBANN, Hydrogen, DiHydrogen, and Aluminum to capture
a gcc-toolchain cxxflags argument and pass it to a CMAKE_CUDA_FLAG
argument when set. This helps deal with compiling with clang on
systems with old base gcc installations.
* Added a dependency on py-scipy when enabling tests on LBANN.
* Updated the C++ standard for Hydrogen to C++17.
* Added a new variant +apps to enable (or disable) python packages that
are used by applications in the LBANN repo, but are not strictly
required for building and using LBANN.
* Added a run time dependency for both py-pytest and py-scipy so that
they are activated in any environment.
* Added support for building LBANN, Hydrogen, and DiHydrogen with the
IBM ESSL BLAS library. This requires explicit identification of
additional LAPACK libraries, since ESSL does not implement LAPACK, but
is found by CMake.
* Fixed a bug in the LBANN dependency on OpenCV for Power architectures.
The +powerpc variant is only required for GCC toolchains and causes
Clang to break. Switched to only enabling when using %gcc on power.
- Installation often hangs building the documentation. This happens when
doxygen and latex are found. To avoid the issue, comment-out that part
of the code until an explicit cmake variable to disable documentation
generation is available.
* sundials: fix smoke tests
* sundials: add new version
* use cmake+make instead of make for tests, fix style
* use cmake_bin workaround from https://github.com/spack/spack/pull/28622
Note that the SDK is not the same as the system version: using
apple-clang@13 is a better match than `os=monterey` since this actually
fails on bigsur as well, as long as xcode 13 is being used.
* core: Make platform environment an instance not class method
In preparation for accessing data constructed in __init__.
* macos: set consistent macosx deployment target
This should silence numerous warnings from mixed gcc/macos toolchains.
* perl: prevent too-new deployment target version
```
*** Unexpected MACOSX_DEPLOYMENT_TARGET=11
***
*** Please either set it to a valid macOS version number (e.g., 10.15) or to empty.
```
* Stylin'
* Add deployment target overrides to failing autoconf packages
* Move configure workaround to base autoconf package
This reverts commit 3c119eaf8b4fb37c943d503beacf5ad2aa513d4c.
* Stylin'
* macos: add utility functions for SDK
These aren't yet used but should probably be added to spack debug
report.
* Remove node_target_satisfies/3 in favor of target_satisfies/2
When emitting input facts we don't need to couple target with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Remove compiler_version_satisfies/4 in favor of compiler_version_satisfies/3
When emitting input facts we don't need to couple compilers with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Introduce heuristic in the ASP-program
With a heuristic we can drive clingo to make better
initial guesses, which leads to fewer choices and
conflicts in the overall solve.
This improves the stand-alone tests for slate by providing most
of the dependencies to the test framework and enabling stand-alone
tests on all versions except the oldest.
* AMReX: +tiny_profile
The tiny profiler options in AMReX are by default off but needed
by WarpX. Adds a new variant to control it.
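In package terms this is roughly the following, inside the AMReX package
class (option name per AMReX's CMake convention; sketch only):
```python
variant("tiny_profile", default=False, description="Enable tiny profiling")

def cmake_args(self):
    return [
        # maps +tiny_profile to -DAMReX_TINY_PROFILE=ON
        self.define_from_variant("AMReX_TINY_PROFILE", "tiny_profile"),
    ]
```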
* Add Erik Palmer as Co-Maintainer
... so he receives pings on updates of the package for review.
The version of the ONNX submodule was updated between the PyTorch
1.9 and 1.10 releases, which fixed builds with newer protobuf but
broke builds with older protobuf.
Also this adds minimum version reqs for numpy/typing-extensions
(which were not present before).
* gcc: revise patch range on darwin
* gcc: add conflict to work around bootstrap failure
closes #23296. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100340.
```
Comparing stages 2 and 3
Bootstrap comparison failure!
gcc/tree-ssa-operands.o differs
gcc/tree-ssanames.o differs
gcc/ipa-inline.o differs
gcc/tree-ssa-pre.o differs
gcc/gimple-loop-interchange.o differs
...
```
639 total differences.
* gcc: bump conflict up to correct later version
* Fix reindex with uninstalled deps
When a prefix of a dep is removed, and the db is reindexed, it is added
through the dependent, but until now it incorrectly listed the spec as
'installed'.
There was also some questionable behavior in the db when the same spec
was added multiple times, it would always be marked installed.
* Always reserve path
* Only add installed spec's prefixes to install prefixes set
* Improve warning, and ensure ensure only ensures
* test: reindex with every file system remnant removed except for the old index; it should give a database with nothing installed, including records with installed==False, external==False, ref_count==0, explicit=True, and these should be removable from the database
* stacks: add regression tests for matrix expansion
* Use constrain semantics to construct spec lists for stacks
* Fix semantics for constraining an anonymous spec. Add tests
Since in Spack we pull binaries out of the `warpx` package, we don't
need `py-cmake` to build `py-warpx`.
Generally, `py-cmake` in `pyproject.toml` is just a means for us to
tell `pip` to make a `cmake` CLI tool available.
* added package gptune with all its dependencies: adding py-autotune, pygmo, py-pyaml, py-autotune, py-gpy, py-lhsmdu, py-hpbandster, pagmo2, py-opentuner; modifying superlu-dist, py-scikit-optimize
* adding gptune package
* minor fix for macos spack test
* update patch for py-scikit-optimize; update test files for gptune
* fixing gptune package style error
* fixing unit tests
* a few changes reviewed in the PR
* improved gptune package.py with a few newly added/improved dependencies
* fixed a few style errors
* minor fix on package name py-pyro4
* fixing more style errors
* Update var/spack/repos/builtin/packages/py-scikit-optimize/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* resolved a few issues in the PR
* fixing file permissions
* a few minor changes
* style correction
* minor correction to jq package file
* Update var/spack/repos/builtin/packages/py-pyro4/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixing a few issues in the PR
* adding py-selectors34 required by py-pyro4
* improved the superlu-dist package
* improved the superlu-dist package
* more changes to gptune and py-selectors34 based on the PR
* Update var/spack/repos/builtin/packages/py-selectors34/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* improved gptune package: 1. addressing comments of tldahlgren in PR 26936; 2. adding variant openmpi
* fixing style issue of gptune
* changing file mode
* improved gptune package: add variant mpispawn which depends on openmpi; add variant superlu and hypre for installing the drivers; modified hypre package file to add a gptune variant
* fixing style error
* corrected pddrive_spawn path in gptune test; enforcing gcc>7
* fixing style error
* setting environment variables when loading gptune
* removing debug print in hypre/package.py
* adding superlu-dist v7.2.0; fixing an issue with CMAKE_INSTALL_LIBDIR
* changing site_packages_dir to python_platlib
* not using python3.9 for py-gpy, which causes build failures due to dropped support of tp_print
* more replacement of site_packages_dir
* fixing a few dependencies in gptune; added a gptune version
* adding url for gptune
* minor correction of gptune
* updating versions in butterflypack
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-xonsh] added py-xonsh package
* [py-xonsh] change dependency to python 3.6
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add sticky variants
* Add unit tests for sticky variants
* Add documentation for sticky variants
* Revert "Revert 19736 because conflicts are avoided by clingo by default (#26721)"
This reverts commit 33ef7d57c1.
* Add stickiness to "allow-unsupported-compiler"
- To retrieve the correct spack version we need to get it from the git
repo.
- Recommend installing the package with root for production
- Add Tomas as maintainer to sarus' spack package
- Add the option to disable unit tests in latest versions
Fixes the following build failure when building with gcc 11:
```
478 ../../../../CPP/7zip/Archive/Wim/WimHandler.cpp: In member function 'virtual LONG NArchive::NWim::CHandler::GetArchiveProperty(PROPID, PROPVARIANT*)':
>> 479 ../../../../CPP/7zip/Archive/Wim/WimHandler.cpp:308:11: error: use of an operand of type 'bool' in 'operator++' is forbidden in C++17
480 308 | numMethods++;
481 | ^~~~~~~~~~
>> 482 ../../../../CPP/7zip/Archive/Wim/WimHandler.cpp:318:9: error: use of an operand of type 'bool' in 'operator++' is forbidden in C++17
483 318 | numMethods++;
484 | ^~~~~~~~~~
```
* opencv: add new version, variant, and patch
- added version 4.5.4
- added tesseract variant
- added patch to not add system paths
* Add leptonica depends and contrib conflicts
* Add dependencies for 1394 support
- new package: libraw1394
- add sdl dependency to libdc1394
- add conflict for openjpeg and jasper
* Adjust dependencies and conflicts for opencv modules
* rewrite of opencv
- all prebuilt apps are now variants and can be installed
- core is no longer a variant. It was always built anyway so it was not
really a variant.
- contrib is no longer a variant. All of the contrib modules are now
available as variants.
- components that can not be built with Spack are no longer variants.
They are set to 'off' to prevent pulling from system.
- handle the case where a module and a component have the same name
- use `with when` framework
- adjust dependencies and conflicts
- new package: libraw1394
- have libdc1394 depend on libraw1394
- patch to find clp
- patch to find onnx
- patch for cvv to find Qt
- format with black
* Incorporate recommended changes
- fix variants and dependencies on packages that depend on opencv
- remove opencv-3.2 and patches
- add some new patches to handle different versions
- cntk needs further work
- the openvslam package was marked deprecated as it is no longer an
active project and the repository has no code
* Remove gmake dependency.
* Remove sdl support
SDL is only used in an example case, but the examples are not built.
* remove openvslam
* Remove opencv+flann variant from 3dtk
* Back out cfitsio constraint from py-astropy
* remove opencv+flann variant from dlib
* remove boost constraint from 3dtk
* Remove non-opencv related bohrium changes
* Adjustments for cntk
- protobuf constraint at version 3.10
- need specific variants for opencv
- improve patch
* Deprecate CNTK package
* variant tweaks
- added appropriate conflicts for cublas
- made cuda/cudev relationship explicit
- moved openx to pending components as it needs an openvx package
* fix isort style error
* Use date version from kaldi rather than commit
* Revert changes from a bad rebase
* Add +flann to 3dtk and dlib
* Use compression support with libtiff
* remove `+datasets` from opencv dependency
The py-torchgeo package does not need opencv+datasets.
* fix typo
zip --> zlib
* espnet first build with depends
* fixed flake8
* updated to latest version and removed python dependency
* changed to pypi and version 2.17.2
* [py-kaldiio] depends on py-pytest-runner
* [py-kaldiio] updated copyright
Co-authored-by: Sid Pendelberry <sid@rit.edu>
* prmon: make sure integration tests do not run in parallel
Some integration tests fail if not run on an otherwise idle machine.
* prmon: run unittests based on googletest
* prmon: fix checksums
* superlu-dist: use CMakePackage helper functions
* Fix #28609
It's OK to have CUDA in the dependency tree as long as it's not being
used for superlu-cuda.
* Update prmon package to latest versions
Add recent versions of the prmon package
Add the spdlog dependency for versions >=3
Add package developers as additional maintainers
Update name of development branch to 'main'
* Correct checksum for v3.0.1
at-spi2-core automatically selects dbus-broker and enables systemd if it finds dbus-broker-launch, which some systems might have even without systemd being part of the actual Spack environment. This is not ideal for a Spack package.
ucx has the configure option --[enable|disable]-backtrace-detail.
This option is not explicitly set by Spack, causing problems on my system because
./configure does not find the bfd.h header file / libbfd.so library.
Added a variant plus its dependencies (binutils); disabled by default.
* trilinos: version 12 requires cxxstd=11
* trilinos: use cmake version 3.21 or older when trilinos version is 12
* conflict cxxstd=17 and cmake@3.2.[01]
* trilinos: version 12 requires cxxstd=11.
* Trilinos_CXX11_FLAGS is set to ' ' to avoid injecting the C++11 flag.
* set Trilinos_CXX11_FLAGS only for version 12 or older.
* trilinos: update dependencies
Use the tribits deps to clarify some dependencies, and group some together
using `with` statements, eliminating some transitive conflict duplication.
* trilinos: Restrict cuda incompatibility
* e4s: vastly reduce number of packages in trilinos-cuda build
Not clear who the customers of cuda-enabled trilinos are, or what options
they need, or which sets of options conflict...
* e4s: remove ~wrapper from trilinos+cuda
* VTK-m: Make vtk-m consistent with ROCmPackage
* VTKm: Add kokkos variant
Specifying +kokkos will enable kokkos backend.
Specifying +kokkos with +rocm will require a kokkos with a ROCm backend.
Specifying +cuda enables VTK-m native CUDA backend. VTK-m native cuda backend
is not compatible with the kokkos +cuda backend.
* VTK-m: Add cuda_native variant
Required to allow specifying a vtk-m spec that selects a
cuda_arch and predictably propagate that to the underlying kokkos
dependency.
This also makes explicit selecting kokkos with a cuda backend or using
the VTK-m cuda backend.
* Mesa(18): Use libllvm virtual package
* Mesa patch configuration
Patch Mesa to define LLVM_VERSION_SUFFIX if llvm is pre-release
* Patch llvm-config to define LLVM_VERSION_SUFFIX
* Add a new version to track development
The released versions do not properly install via cmake which leads to
errors when linking against the library. These upstream problems have
been addressed on the glm development branch.
* Move git to class level and remove redundant depends
* vecgeom: require exact version of veccore
Fixes configure error from downstream package:
```
CMake Error at /rnsdhpc/code/spack/opt/spack/apple-clang/cmake/7zgbrwt/share/cmake-3.22/Modules/CMakeFindDependencyMacro.cmake:47 (find_package):
Could not find a configuration file for package "VecCore" that is
compatible with requested version "0.8.0".
The following configuration files were considered but not accepted:
/rnsdhpc/code/spack/var/spack/environments/celeritas/.spack-env/view/lib/cmake/VecCore/VecCoreConfig.cmake, version: 0.6.0
```
* veccore: add new versions
* Add flags to cabana to enable hypre and heffte when they are part of the spec. Also add googletest to build dependencies
* Fixed mixed spaces and tabs
* Update package.py
* Update package.py
* Update package.py
* Modified to specifically request heFFTe version 2.0.0 due to
limitations in heFFTe's CMake files.
* Update var/spack/repos/builtin/packages/cabana/package.py
Co-authored-by: Christoph Junghans <christoph.junghans@gmail.com>
* Integrated more heffte and hypre versions into cabana requests
Co-authored-by: Christoph Junghans <christoph.junghans@gmail.com>
* ParaView/VTK: Constrain version for ADIOS2 patch.
Older available versions of ParaView/VTK predate
ADIOS2 support.
ParaView lower bound is 5.8 and VTK lower bound is 8.2.0
* ParaView: Gate the ADIOS2 patch by version
It seems that spack reads the output of `setup_run_environment` to build the actual spack modules and lmod modules. So, any output here will be used verbatim on the shell.
This patch fixes https://github.com/spack/spack/issues/26733
1. adding latest release 3.5.0
2. updating cmake requirement to match that of Kokkos
3. adding logic to depend on the right version of Kokkos by default
* Kokkos: updating package list, maintainers and minimum cmake version
* Kokkos: updating maintainers list
Updating maintainers list to have the correct GitHub handle for Jan.
`spack license update-copyright-year` was updating license headers but not the MIT
license file. Make it do that and add a test.
Also simplify the way we bump the latest copyright year so that we only need to
update it in one place.
* [kaldi] Added version 2021-11-16
* [kaldi] Added logic for new version and when cuda 11 is used
* [kaldi] Added patch file when cuda 11 as cub is now built into it
* [kaldi] removed .999 and simplified some logic
Co-authored-by: Doug Heckman <dahdco@rit.edu>
* add py-ats package
* add new 7.0.10 tag
* add myself as a maintainer
* add dependencies for python and setuptools
* style
* added todo for flux
* words
* update versions users should use
* Use pip to bootstrap pip
* Bootstrap wheel from source
* Update PythonPackage to install using pip
* Update several packages
* Add wheel as base class dep
* Build phase no longer exists
* Add py-poetry package, fix py-flit-core bootstrapping
* Fix isort build
* Clean up many more packages
* Remove unused import
* Fix unit tests
* Don't directly run setup.py
* Typo fix
* Remove unused imports
* Fix issues caught by CI
* Remove custom setup.py file handling
* Use PythonPackage for installing wheels
* Remove custom phases in PythonPackages
* Remove <phase>_args methods
* Remove unused import
* Fix various packages
* Try to test Python packages directly in CI
* Actually run the pipeline
* Fix more packages
* Fix mappings, fix packages
* Fix dep version
* Work around bug in concretizer
* Various concretization fixes
* Fix gitlab yaml, packages
* Fix typo in gitlab yaml
* Skip more packages that fail to concretize
* Fix? jupyter ecosystem concretization issues
* Solve Jupyter concretization issues
* Prevent duplicate entries in PYTHONPATH
* Skip fenics-dolfinx
* Build fewer Python packages
* Fix missing npm dep
* Specify image
* More package fixes
* Add backends for every from-source package
* Fix version arg
* Remove GitLab CI stuff, add py-installer package
* Remove test deps, re-add install_options
* Function declaration syntax fix
* More build fixes
* Update spack create template
* Update PythonPackage documentation
* Fix documentation build
* Fix unit tests
* Remove pip flag added only in newer pip
* flux: add explicit dependency on jsonschema
* Update packages that have been added since this was branched off of develop
* Move Python 2 deprecation to a separate PR
* py-neurolab: add build dep on py-setuptools
* Use wheels for pip/wheel
* Allow use of pre-installed pip for external Python
* pip -> python -m pip
* Use python -m pip for all packages
* Fix py-wrapt
* Add both platlib and purelib to PYTHONPATH
* py-pyyaml: setuptools is needed for all versions
* py-pyyaml: link flags aren't needed
* Appease spack audit packages
* Some build backend is required for all versions, distutils -> setuptools
* Correctly handle different setup.py filename
* Use wheels for py-tomli to avoid circular dep on py-flit-core
* Fix busco installation procedure
* Clarify things in spack create template
* Test other Python build backends
* Undo changes to busco
* Various fixes
* Don't test other backends
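The consolidated install step now looks roughly like this (flag set
illustrative; `python` and `prefix` are provided inside a Spack package's
build environment):
```python
def install(self, spec, prefix):
    # one pip invocation replaces the old build/install phases
    python("-m", "pip", "install", "--no-deps", "--prefix={0}".format(prefix), ".")
```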
* Add new package to spack. survey is a lightweight application performance tool that also gathers system information and stores it as metadata.
* Add maintainer and note about source access.
* Update the man path per spack reviewer suggestion.
* Remove redundant settings for PYTHONPATH, PATH, and MANPATH.
* Move to a one mpi collector approach for cce/tce integration.
* Add pyyaml dependency
* Make further spack reviewer changes to python type specs, mpi args, build type variant.
* Add reviewer requested changes.
* Add reviewer docstring requested changes.
* Add more updates from spack reviewer comments.
* Update the versions to use tags, not branches
* Redo dashes to fix issue with spack testing.
Co-authored-by: Jim Galarowicz <jgalarowicz@newmexicoconsortium.org>
When `spack compiler list` is run without being restricted to a
particular scope, and no compilers are found, say that none are
available, and hint that the user should run spack compiler find to
auto-detect compilers.
* Improve docs
* Check if stdin is a tty
* add a test
Backport a patch for v1.3.4 that fixes an unsigned typedef problem
on macOS: https://github.com/xiph/ogg/pull/64
Also add v1.3.5 that has this issue fixed.
Spack paths can be long, and these overflow (at least) some buffers
inside the T1lib bundled with the grace distribution, leading
to crashes on startup.
Charm++ versions below 7.0.0 have build issues on macOS, mainly due to the
pre-7.0.0 `VERSION` file conflicting with other version files on the
system: https://github.com/UIUC-PPL/charm/issues/2844. Specifically, it
conflicts with LLVM's `<version>` header that was added in llvm@7.0.0 to
comply with the C++20 standard:
https://en.cppreference.com/w/cpp/header/version. The conflict only occurs
on case-insensitive file systems, as typically used on macOS machines.
Many packages implement logic at the class level to handle complex dependencies and
conflicts. Others have started using `with when("@1.0"):` blocks since we added that
capability. The loops and other control logic can cause some pure directive logic not to
be removed by our package hashing logic -- and in many cases that's a lot of code that
will cause unnecessary rebuilds.
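The kind of class-level logic in question, sketched as a hypothetical
package (hashes elided):
```python
from spack import *

class Example(Package):
    """Hypothetical package with directives behind control flow."""

    # directives generated in a loop at class level
    for _ver in ["1.2", "1.1", "1.0"]:
        version(_ver, sha256="...")

    with when("@1.1:"):
        depends_on("zlib")
        depends_on("cmake", type="build")

    def install(self, spec, prefix):
        pass  # function bodies are intentionally not descended into
```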
This commit changes the unparser so that it will descend into these blocks. Specifically:
1. Descend into loops, if statements, and with blocks at the class level.
2. Don't look inside function definitions (in or outside a class).
3. Don't look at nested class definitions (they don't have directives)
4. Add logic to *remove* empty loops/with blocks/if statements if all directives
in them were removed.
This allows our package hash to ignore a lot of pure metadata that it was not ignoring
before, and makes it less sensitive.
In addition, we add `maintainers` and `tags` to the list of metadata attributes that
Spack should remove from packages when constructing canonical source for a package
hash.
- [x] Make unparser handle if/for/while/with at class level.
- [x] Add tests for control logic removal.
- [x] Add a test to ensure that all packages are not only unparseable, but also
that their canonical source is still compilable. This is a test for
our control logic removal.
- [x] Add another unparse test package that has complex logic.
These are the unit tests from astunparse, converted to pytest, with a few backports from
upstream cpython. These should hopefully keep `unparser.py` well covered as we change it.
We can't tell `print(a, b, c)` and `print((a, b, c))` apart -- the two expressions
generate the same AST in Python 2 but different ASTs in Python 3. However, we can decide that we don't
care. This commit treats both of them the same when `py_ver_consistent` is set with
`unparse()`.
This means that the package hash won't notice changes from printing a tuple to printing
multiple values, but we don't care, because this is extremely unlikely to affect the build.
More than likely this is just an error message for the user of the package.
- [x] treat `print(a, b, c)` and `print((a, b, c))` the same in py2 and py3
- [x] add another package parsing test -- legion -- that exercises this feature
To make it easier to see how package hashes change and how they are computed, add two
commands:
* `spack pkg source <spec>`: dumps source code for a package to the terminal
* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
package to the terminal. It strips comments, directives, and known-unused
multimethods from the package. It is used to generate package hashes.
* `spack pkg hash <spec>`: This gives the package hash for a particular spec.
It is generated from the canonical source code for the spec.
- [x] `add spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
Co-authored-by: Greg Becker <becker33@llnl.gov>
We are planning to switch to using full hashes for Spack specs, which means that the
package hash will be included in the deployment descriptor. This means we need a more
robust package hash than simply dumping the `repr` of the AST.
The AST repr that we previously used for package content is unreliable because it can
vary between python versions (Python's AST actually changes fairly frequently).
- [x] change `package_hash`, `package_ast`, and `canonical_source` to accept a string for
alternate source instead of a filename.
- [x] consolidate package hash tests in `test/util/package_hash.py`.
- [x] remove old `package_content` method.
- [x] make `package_hash` do what `canonical_source_hash` was doing before.
- [x] modify `content_hash` in `package.py` to use the new `package_hash` function.
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Our package hash is supposed to be consistent from python version to python version.
Test this by adding some known unparse inputs and ensuring that they always have the
same canonical hash. This test relies on the fact that we run Spack's unit tests
across many python versions. We can't compute for several python versions within the
same test run so we precompute the hashes and check them in CI.
Package hashing was not properly handling multimethods. In particular, it was removing
any functions that had decorators from the output, so we'd miss things like
`@run_after("install")`, etc.
There were also problems with handling multiple `@when`'s in a single file, and with
handling `@when` functions that *had* to be evaluated dynamically.
- [x] Rework static `@when` resolution for package hash
- [x] Ensure that functions with decorators are not removed from output
- [x] Add tests for many different @when scenarios (multiple @when's,
combining with other decorators, default/no default, etc.)
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Previously we used `directives.__all__` to get directive names, but it wasn't
quite right -- it included `DirectiveMeta`, etc. It's not wrong, but it's also
not the clearest way to do this.
- [x] Refactor `@directive` to track names in `directive_names` global
- [x] Rename `_directive_names` to `_directive_dict_names` in `DirectiveMeta`
- [x] Add a test for `RemoveDirectives`
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Some packages use top-level unassigned strings instead of comments, either just after a
docstring or in the body somewhere else. Ignore those strings because they have no
effect on package behavior.
- [x] adjust RemoveDocstrings to remove all free-standing strings.
- [x] move tests for util/package_hash.py to test/util/package_hash.py
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Python 2 and 3 represent string literals differently in the AST. Python 2 requires '\x'
literals, and Python 3 source is always unicode, and allows unicode to be written
directly. These also unparse differently by default.
- [x] modify unparser to write both out the way `repr` would in Python 2 when
`py_ver_consistent` is provided.
Backport operator precedence algorithm from here:
397b96f6d7
This eliminates unnecessary parentheses from our unparsed output and makes Spack's unparser
consistent with the one in upstream Python 3.9+, with one exception.
Our parser normalizes argument order when `py_ver_consistent` is set, so that star arguments
in function calls come last. We have to do this because Python 2's AST doesn't have information
about their actual order.
If we ever support only Python 3.9 and higher, we can easily switch over to `ast.unparse`, as
the unparsing is consistent except for this detail (modulo future changes to `ast.unparse`)
Previously, there were differences in the unparsed code for Python 2.7 and for 3.5-3.10.
This makes unparsed code the same across these Python versions by:
1. Ensuring there are no spaces between unary operators and
their operands.
2. Ensuring that *args and **kwargs are always the last arguments,
regardless of the python version.
3. Always unparsing print as a function.
4. Not putting an extra comma after Python 2 class definitions.
Without these changes, the same source can generate different code for different
Python versions, depending on subtle AST differences.
One place where single source will generate an inconsistent AST is with
multi-argument print statements, e.g.:
```
print("foo", "bar", "baz")
```
In Python 2, this prints a tuple; in Python 3, it is the print function with
multiple arguments. Use `from __future__ import print_function` to avoid
this inconsistency.
Add `astunparse` as `spack_astunparse`. This library unparses Python ASTs and we're
adding it under our own name so that we can make modifications to it.
Ultimately this will be used to make `package_hash` consistent across Python versions.
Add an abstraction around libllvm to allow libllvm
providers to be specified for all packages.
This is targeting allowing mesa to build against
llvm-amdgpu or intel-llvm or llvm or any other
custom llvm variant that arises for specific GPU
toolchains
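Hypothetical sketch of the pattern (shown together for brevity; version
numbers made up): providers declare the virtual, clients depend on it, and
the solver picks a concrete provider.
```python
from spack import *

class LlvmAmdgpu(Package):
    # this package can stand in for the libllvm virtual
    provides("libllvm@13", when="@4.5.0:")

class Mesa(MesonPackage):
    # any libllvm provider satisfies this dependency
    depends_on("libllvm", when="+llvm")
```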
* Python: set default config_vars
* Add missing commas
* dso_suffix not present for some reason
* Remove use of default_site_packages_dir
* Use config_vars during bootstrapping too
* Catch more errors
* Fix unit tests
* Catch more errors
* Update docstring
* Update existing 2020.0 version to use tag
* Add versions 2018.2 and master
* Add patches for GCC/Intel
* Use MPI compiler wrappers when +mpi
* Constrain CMake build dependency (need >= 3.1)
* Add variants for optional components (e.g QFIT library)
./configure tries to execute an MPI test, which is not possible on
most HPC platforms (if you don't build on a compute node), so this
check is disabled to allow the build to proceed. Ideally we could
check this by placing constraints on the MPI that Spack builds (e.g.
require building a version that is guaranteed to have threading
support).
* rocm recipes updates for 4.5.0
* update to rocm recipes for 4.5.0 release
* updates to the rocm recipes for rocm-4.5.0 release
* fix style errors
* update to rocm-validation-suite for rocm-4.5.0 release
* bump up rccl recipe for rocm-4.5.0
* bump up version for rdc for rocm-4.5.0
* update miopengemm, miopen-opencl,rocm-opencl recipes for 4.5.0 release
* bump up version for mivisionx for rocm-4.5.0 release
* update the rocm-validation-suite recipe
* no need to change the perl path for 4.5.0
* fix the build failure with the recent change made for hip package
* modify checksum for the llvm-amdgpu for 4.5.0
* fix the build issue after recent changes made for enabling tests
* fix the build issue with 4.5.0
* add new recipe for hipsolver
* address review comments
* llvm: make targets a multivalued variant
* Fix the targets variant values
1. Make them lowercase and add a mapping to the cmake equivalent
2. auto -> all
3. Restore composability by using a multivalued variant, so that
`targets=all` and `targets=x86` are combined into `targets=all,x86`,
which is then transformed into LLVM_TARGETS_TO_BUILD=all.
* use targets=x86 in iwyu
* Default to nvptx/amdgpu/host arch targets
* default to none
* Update var/spack/repos/builtin/packages/zig/package.py
This reports the kernel version (vs. the distro version) on Linux and
returns a valid Version (stripping characters like '+' which may be
present for custom-built kernels).
Reading appdirs.py without explicitly requesting UTF-8 decoding causes
the build process to fail for Python 3.6.
See https://github.com/ActiveState/appdirs/pull/152 for the upstream
fix.
* snakemake: New version
The newer versions of snakemake have a lot of new dependencies. The
optional dependencies still have to be added.
* removed comment
* some changes
* added reports variant
* deprecate older version and add me as maintainer
* Added dependency py-ratelimiter
* Fix: py-adios Cython run
Always run Cython before `py-adios` installs.
This makes sure the `.cpp` files from `.pyx` files are freshly
created and work with newer CPython versions than the one checked
in.
* Cleanup: `rm` -> `os.remove`
* Drop preferred hdf5 version
* Fix conduit
* Add compat bound for silo on hdf5
* Update var/spack/repos/builtin/packages/conduit/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* hdf5 <= 1.10 for conduit <= 0.7
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Doxygen's build system uses CMake's deprecated `FindPythonInterp`,
which can get confused by other `python` executables in the PATH.
See issue: https://github.com/spack/spack/issues/28215
Patch to add the --embed flag to python-config invocations when interface=python and
using python@3.8:
This is because python@3.8 changed behavior of python-config --ldflags
(and --libs) such that it no longer includes -lpython unless --embed
flag is used.
See e.g. https://github.com/mesonbuild/meson/issues/5629
* [py-aiohttp] added new version
* [py-aiohttp] changed dependency on setuptools
* [py-aiohttp] fixed some of the dependencies
* [py-aiohttp] updated dependency to py-typing-extensions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: py-azure-storage-blob
* fixed typo
* Update var/spack/repos/builtin/packages/py-azure-storage-blob/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-azure-storage-blob/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-azure-storage-blob/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* libtiff: fix build on macOS Monterey
* patch configure, not configure.ac
* Revert "patch configure, not configure.ac"
This reverts commit 8bf315cb22.
* Force Spack to run autoreconf using new patch
* [py-aiohttp] added new version
* [py-aiohttp] updated py-setuptools version dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
For versions of aws-parallelcluster >= 2.9, the pyyaml dependency had to be >= 5.3.1 and == 5.1.2
at the same time, making it impossible to install ParallelCluster >= 2.9 from the spack repository.
See issue: https://github.com/spack/spack/issues/28172
Fixed by limiting pyyaml 5.1.2 version to aws-parallelcluster < 2.8, according to this commit:
7255d314b7
Tested with a manual installation of aws-parallelcluster@2.11.4
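The shape of the fix, per the ranges described above (spec bounds
illustrative, inside the aws-parallelcluster package class):
```python
# pin the old pyyaml only for old ParallelCluster (< 2.8)
depends_on("py-pyyaml@5.1.2", when="@:2.7", type=("build", "run"))
# newer ParallelCluster needs the newer pyyaml
depends_on("py-pyyaml@5.3.1:", when="@2.9:", type=("build", "run"))
```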
Add a new check to `spack audit` to scan and verify that version constraints may be satisfied
Modifications:
- [x] Add a new check to `spack audit` to scan and verify that version constraints may be satisfied by some version declared in the built-in repository
- [x] Fix issues found by CI
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-datrie: New package, initial commit
* removed boilerplate
* Update var/spack/repos/builtin/packages/py-datrie/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-datrie/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* trilinos: fix define_tpl to handle depspecs w/out headers
This should address #27758 (i.e. errors due to netlib-scalapack not
having headers)
* trilinos: This fixes a mismatch in variant name and spec name for x11/libx11
* [py-minio] Added py-minio package
* [py-minio] added missing dependencies for py-minio
* [py-minio] added type=('build', 'run') in dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-dh-scikit-optimize] Added new versions
* [py-dh-scikit-optimize] fixing dependencies
* making py-configspace a required dependency of py-dh-scikit-optimize
* r-hdf5r: use pkg-config to find hdf5
Since hdf5 was switched from autotools to cmake, the hdf5 compiler
wrappers cannot be used to find and configure hdf5. This PR switches to
using pkg-config for configuration.
* Add comment about configure patch
* The Class F problem has been added to seven of the benchmarks
(BT, SP, LU, CG, MG, FT, and EP).
* The Class E problem has been added to the IS benchmark.
* In version 3.4.1, 'the number of processes' option does not apply.
* MPIFC and FC flags were added.
These versions change the install location of CMake files used
by dependents, but most dependents don't seem to look in this
new location.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Chris White <white238@llnl.gov>
This command pokes the environment, Python interpreter
and bootstrap store to check if dependencies needed by
Spack are available.
If any are missing, it shows a comprehensible message.
* locks: allow locks to work under high contention
This is a bug found by Harshitha Menon.
The `lock=None` line shouldn't be a release but should be
```
return (lock_type, None)
```
to inform the caller it couldn't get the lock type requested without
disturbing the existing lock object in the database. There were also a
couple of bugs due to taking write locks at the beginning without any
checking or release, and not releasing read locks before requeueing.
This version no longer gives me read upgrade to write errors, even
running 200 instances on one box.
* Change lock in check_deps_status to read, release if not installed,
not sure why this was ever write, but read definitely is more
appropriate here, and the read lock is only held out of the scope if
the package is installed.
* Release read lock before requeueing to reduce chance of livelock, the
timeout that caused the original issue now happens in roughly 3 of 200
workers instead of 199 on average.
Fixes #27652
Ensure that mirror's to_dict function returns a syaml_dict object for all code
paths.
Switch to using the .get function for accessing the potential information from
the S3 mirror objects. If the key is not there, it will gracefully return
None instead of failing with a KeyError
Additionally, check that the connection object is a dictionary before trying
to "get" from it.
Add a test for the capturing of the new S3 information.
* perl: fix macOS build
With both 5.34.0 and 5.32.1 the build fails on macos-bigsur-skylake
%clang@12.0.5 and %clang@13.0.0:
```
2 errors found in build log:
579013 /private/var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/s3j/spack-stage/spack-stage-perl-5.34.0-tpha2u52qfwaraidpzzbf6u4dbqg7dk5/spack-src/cpan/
Math-BigInt-FastCalc/../../miniperl "-I../../lib" -MExtUtils::Command::MM -e 'cp_nonempty' -- FastCalc.bs ../../lib/auto/Math/BigInt/FastCalc/Fas
tCalc.bs 644
579014
579015 Everything is up to date. Type '/Applications/Xcode.app/Contents/Developer/usr/bin/make test' to run test suite.
579016 DYLD_LIBRARY_PATH=/private/var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/s3j/spack-stage/spack-stage-perl-5.34.0-tpha2u52qfwaraidpzzbf6u4dbqg7d
k5/spack-src ./perl -Ilib -I. installperl --destdir=
579017 WARNING: You've never run 'make test' or some tests failed! (Installing anyway.)
579018 /rnsdhpc/code/spack/opt/spack/apple-clang/perl/tpha2u5/bin/perl5.34.0
>> 579019 install_name_tool: error: sh -c '/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk /Applications/Xcode.app/Contents/Developer/Pl
atforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk -find install_name_tool 2> /dev/null' failed with exit code 256: (null) (errno=Invalid argument
)
579020 xcode-select: Failed to locate 'install_name_tool', requesting installation of command line developer tools.
579021 Cannot update /rnsdhpc/code/spack/opt/spack/apple-clang/perl/tpha2u5/bin/perl5.34.0 dependency paths
>> 579022 make: *** [install-all] Error 72
```
This is due to SYSTEM_VERSION_COMPAT being set.
* perl: conditionally set SYSTEM_VERSION_COMPAT based on CLT
The version of command line tools is the only difference between
@alalazo and my builds: his (v11) works only when SYSTEM_VERSION_COMPAT
is set to 1, and mine (v12.5 and v13) only work when it is unset.
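A rough sketch of that conditional logic; the CLT-version detection is an explicitly hypothetical helper, and the real check in the package may differ:
```python
def setup_build_environment(self, env):
    # detect_clt_version() is a hypothetical helper returning the
    # installed macOS Command Line Tools version as a spack Version.
    clt = detect_clt_version()
    if clt < Version('12'):
        # CLT v11 builds only with SYSTEM_VERSION_COMPAT=1 ...
        env.set('SYSTEM_VERSION_COMPAT', 1)
    else:
        # ... while v12.5 and v13 build only when it is unset.
        env.unset('SYSTEM_VERSION_COMPAT')
```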
With this commit:
```
$ spack env activate --temp
$ spack install zlib
==> All of the packages are already installed
==> Updating view at /tmp/spack-faiirgmt/.spack-env/view
$ spack install zlib
==> All of the packages are already installed
```
Before this PR:
```
$ spack env activate --temp
$ spack install zlib
==> All of the packages are already installed
$ spack install zlib
==> All of the packages are already installed
```
No view was generated
* New package: py-onnxmltools and dependencies
* Small fix
* Changes from review
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update recipe following review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Updates to installer.py did not account for spack monitor, so as currently
implemented there are three cases of failure that spack monitor will not
account for. To fix this we add additional hooks, including an on-cancel
hook, and we perform a custom action on concretization failure.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The latest version of `jsonschema` fails if we're not specific about which schema draft
specification we're using. Update all of them to use the latest one (draft-07).
Our `jsonschema` external won't support Python 3.10, so we need to upgrade it.
It currently generates this warning:
lib/spack/external/jsonschema/compat.py:6: DeprecationWarning: Using or importing the ABCs
from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and
in 3.10 it will stop working
This upgrades `jsonschema` to 3.2.0, the latest version with support for Python 2.7. The next
version after this (4.0.0) drops support for 2.7 and 3.6, so we'll have to wait to upgrade to it.
Dependencies have been added in prior commits.
* [geant4] new version 11.0.0
* [geant4] prefer 10.7.3 for now
* [vecgeom] new version 1.1.18
* [clhep] new version 2.4.5.1
* [g4emlow] new version 8.0
* [g4particlexs] new version 4.0
* [geant4-data] new version 11.0.0
* [geant4] @11.0.0: cxxstd=17: ^clhep@2.4.5.1: ^vecgeom@1.1.18:
* [geant4] depends_on cmake@3.16:
* [geant4-data] remove g4tendl comment
* [g4tendl] new version 1.4
* [geant4] default cxxstd=11 when @10, 17 when @11; use CMAKE_CXX_STANDARD
* [geant4] variant tbb when @11:, depends_on tbb, sets GEANT4_USE_TBB
* [geant4] new variant vtk when @11:, depends_on vtk@8.2:
* [geant4] simplify GEANT4_USE_VTK with define_from_variant
* [geant4] remove variant cxxstd conditional again
* [geant4] flake8 space after comma
* What's new in AOCL 3.1
1) AMD BLIS:
1.a) Supports Dynamic Dispatch and AOCL Dynamic feature
1.b) Improvements in DGEMM, ZGEMM, DTRSM, DSYRK, xGEMV, and DOTV
2) AMD libFLAME:
2.a) Supports LAPACK 3.10.0 specification
2.b) Optimized factorization and ZGEEV routines
3) AMD FFTW:
3.a) Features like 'AMD application optimization layer', 'Fast MPI transpose algorithm' and 'Top N planner' are added
4) AMD LibM:
4.a) Optimized exp2, log2 (Single and Double precision) scalar and vector
4.b) Optimized log10f (scalar and vector) and powf vector variants to support WRF4.1.2 benchmark
5) AOCL-Sparse:
5.a) New API for sparse matrix and dense matrix multiplication
6) AMD ScaLAPACK:
6.a) ILP64 support has been enabled
7) AOCL enabled MUMPS library:
7.a) CMake based build system on Windows for AOCL enabled MUMPS sparse solver library will be available shortly on GitHub
7.a.i) Refer https://github.com/amd/mumps-build
7.b) Spack-based recipe on Linux for AOCL enabled MUMPS sparse solver library will be enabled shortly
* Fix invalid version range error
* Incorporated review comments
1) Restore to previous url value
2) Instead of if else statements, used spack's enable_or_disable feature
* Incorporated following review comments:
1. Use of satisfies() for spec checks
2. Separate conflict statements to check for minimum and maximum GCC versions
3. Used CMakePackage helpers
4. Code rearrangement to have the directives listed before methods
* New package: py-xrootdpyfs
* Add old version of py-fs
* Replace 2to3.patch with running 2to3
* Just restrict the setuptools version
* Update var/spack/repos/builtin/packages/py-fs/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
spack monitor now requires authentication as each build must be associated
with a user, so it does not make sense to allow the --monitor-no-auth flag
and this commit will remove it
* suite-sparse: Fix check for +/- tbb variant
Changed `'tbb' in spec` to `'+tbb' in spec`. The former would configure
suite-sparse to use tbb if any dependency package (e.g. intel-oneapi-mkl)
depends on tbb, even if suite-sparse~tbb was specified (see the sketch
after this list).
* suite-sparse: conflict when trying to use 2021.x versions of tbb
See https://github.com/DrTimothyAldenDavis/SuiteSparse/issues/72
suite-sparse depends on task_scheduler_init to control the number
of threads when e.g. interfacing with MATLAB. However, Intel
dropped task_scheduler_init in the 2021.x releases of TBB (it had
been deprecated since TBB 4.3.5).
We just raise a spack conflict when using tbb @2021.x and +tbb.
Because tbb is a virtual package and is not versioned, the check
instead targets either intel-oneapi-tbb@2021: or intel-tbb@2021:;
not the most elegant, but it should do the job.
* suite-sparse: fix style issues
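A combined sketch of both fixes, assuming the suite-sparse recipe; the make flag is an assumption, while the variant check and conflicts follow the description above:
```python
from spack import *


class SuiteSparse(Package):
    """Sketch of the TBB handling only, not the full recipe."""

    variant('tbb', default=False, description='Build with Intel TBB')

    # tbb is a virtual, unversioned package, so exclude the 2021.x
    # releases (which dropped task_scheduler_init) via its providers.
    conflicts('^intel-oneapi-tbb@2021:', when='+tbb')
    conflicts('^intel-tbb@2021:', when='+tbb')

    def install(self, spec, prefix):
        make_args = []
        # '+tbb' in spec is true only if the variant is enabled on
        # suite-sparse itself; the old check `'tbb' in spec` also
        # matched any dependency that pulled in a tbb provider.
        if '+tbb' in spec:
            make_args.append('TBB=-ltbb')  # assumed make flag
```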
* Added installation of OpenMP as an option
* Added a softlink (dpcpp) to clang++ to
mimic the packaged version of dpcpp
Co-authored-by: ravil <ravil.dorozhinskii@tum.de>
* Fix building container images
Patchelf is bootstrapped from sources, so we cannot
disable that mechanism until a finer selection is
possible in the configuration.
* Build on changes to the Dockerfile
* Don't login to Dockerhub on PRs
This commit introduces the command
spack module tcl setdefault <package>
similar to the one already available for lmod
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Update hdf5/package.py to add HDF5 1.10.8 release and move
preferred version from 1.10.7 to 1.10.8.
* silo: versions before 4.11 conflict with hdf5 >= 1.10.8.
* Add patch file for silo@4.11 with hdf5 1.10 >=1.10.8 and
hdf5 1.12 >= 1.12.1.
* New package: py-scinum
* Update var/spack/repos/builtin/packages/py-scinum/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix for OpenFOAM pthread issue for AOCC 3.2
* addressing the review comments
* updating when command for aocc v3.2.0 and above
Co-authored-by: mohan babu <mohbabul@amd.com>
* PETSc is a core dependency, yet we left a variant for PETSc.
This was removed, and ExaGO always depends on PETSc. The CMake
arguments were updated accordingly.
* camp+cuda is only a dependency when we build with RAJA and CUDA.
* py-radical-entk: add version 1.9.0
* py-radical-pilot: add version 1.10.1
* py-radical-utils: add version 1.9.1
* py-radical-utils: requires py-pymongo less than version 4
* New package: py-pysqlite3
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR also slightly changes the behavior in ci_rebuild().
We now still attempt to submit `spack install` results to CDash
even if the initial registration failed due to connection issues.
This commit follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
* Add a CI job to audit all the packages in the built-in repository
* flecsi: fixed typo for dependency on legion
* py-pythonqwt: fix a typo in variant name
* sollve: removed a conflict with a non-existing variant
* acts: fixed use of wrong variant in dd4hep
Also removed duplicated variant declaration in dd4hep
* aoflagger: update variant of a dependency
Issues introduced indirectly in #22925
* camellia: removed unused variant
Issue introduced indirectly in #26150
* cbtf-*: remove cti variants and dependency on mrnet+cti
Issue introduced in #14178
* flecsale: update variants to match flecsi
Issue introduced in #11679
* grnboost: fixed issue with non-existing variant in a dependency
This package possibly never worked since #8763
* nalu: fixed issue with non-existing variant in a dependency
* open-iscsi: fixed issue with non-existing variant in a dependency
* openspeedshop-*: remove use of non-existing mrnet+cti variant
* percept: fixed issue with non-existing variant in a dependency
* phyluce: fixed issue with non-existing variant in a dependency
Issue introduced in #12952
* phyluce: fixed issue with non-existing variant in a dependency
Issue introduced in #22340
* CPU Architecture Support
This commit removes the `native` variant in favor of Spack's built-in support for specifying a target CPU architecture. It also passes this information to the Legion build system so that it correctly passes the architecture to GASNet when GASNet is built internally.
* fixing whitespace
* Update package.py
based on a conversation with @streichler, this change sets `BUILD_MARCH` to an empty string, which will prevent legion's CMake build system from inserting `-march=native` and allow Spack to provide the correct architecture flags.
* Update package.py
adding a comment on what problem this MR solves.
* Update package.py
formatting
* PICMI: 0.16 & 0.18 & WarpX 1D
Update the `py-picmistandard` package and add the latest WarpX release.
Preparing 1D support (testable inputs coming for 22.01+).
* Fix style: overlong line
* Update pypi example link
* Fix requirement ranges
* WarpX 21.12: Update Patch
Follow-up from
https://github.com/ECP-WarpX/WarpX/pull/2646
* fix style
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* openjpeg: add missing dependencies and optionally disable them
* openjpeg: remove variant 'ownlibs'
* openjpeg: bugfix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* openjpeg: do not build CODEC executables by default
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: py-hist and its dependencies
* Update var/spack/repos/builtin/packages/py-hist/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-histoprint/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-mplhep/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update py-hist recipe
* Update package.py
* Fix py-iminuit recipe (requires py-cmake now)
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: py-correctionlib
* Update var/spack/repos/builtin/packages/py-correctionlib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-correctionlib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-make and update py-correctionlib recipe
* Add py-pybind11 dependency
* Update package.py
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR is meant to move code with "business logic" from `spack.cmd.buildcache` to appropriate core modules[^1].
Modifications:
- [x] Add `spack.binary_distribution.push` to create a binary package from a spec and push it to a mirror
- [x] Add `spack.binary_distribution.install_root_node` to install only the root node of a concrete spec from a buildcache (may check the sha256 sum if it is passed in as input)
- [x] Add `spack.binary_distribution.install_single_spec` to install a single concrete spec from a buildcache
- [x] Add `spack.binary_distribution.download_single_spec` to download a single concrete spec from a buildcache to a local destination
- [x] Add `Spec.from_specfile` that construct a spec given the path of a JSON or YAML spec file
- [x] Removed logic from `spack.cmd.buildcache`
- [x] Removed calls to `spack.cmd.buildcache` in `spack.bootstrap`
- [x] Deprecate `spack buildcache copy` with a message that says it will be removed in v0.19.0
[^1]: The rationale is that commands should be lightweight wrappers of the core API, since that helps with both testing and scripting (easier mocking and no need to invoke `SpackCommand`s in a script).
* New package: py-cmsml
* Update var/spack/repos/builtin/packages/py-cmsml/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: py-cx-oracle
* Add dependencies from pyproject.toml
* Update var/spack/repos/builtin/packages/py-cx-oracle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixing bugs in spack monitor
updates to installer.py did not account for spack monitor, so as currently
implemented there are three cases of failure that spack monitor will not
account for. To fix this we add additional hooks, including an on-cancel
hook, and we perform a custom action on concretization failure.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* New package: py-climate
* Revert "fixing bugs in spack monitor"
This reverts commit bf7f6bf0e3.
* Flake-8
* Update package.py
* Update package.py
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
After this PR an error in a single package while detecting
external software won't abort the entire procedure.
The error is reported to screen as a warning.
* New package: py-boost-histogram
* Update var/spack/repos/builtin/packages/py-boost-histogram/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Remove a try/catch for an error with no handling. If the affected
code doesn't execute successfully, then the associated variable
is undefined and another (more-obscure) error occurs shortly after.
Since #27185, the cuda_arch variant values are conditional on +cuda. This means that for -cuda specs, the installation fails with:
```
==> acts: Executing phase: 'cmake'
==> Error: KeyError: 'cuda_arch'
/home/wdconinc/git/spack/var/spack/repos/builtin/packages/acts/package.py:222, in cmake_args:
219 log_failure_threshold = spec.variants['log_failure_threshold'].value
220 args.append("-DACTS_LOG_FAILURE_THRESHOLD={0}".format(log_failure_threshold))
221
>> 222 cuda_arch = spec.variants['cuda_arch'].value
223 if cuda_arch != 'none':
224 args.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch[0]))
225
```
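A minimal sketch of the guard in that `cmake_args` context:
```python
# cuda_arch exists as a variant only when +cuda is set, so check the
# spec first instead of indexing spec.variants unconditionally.
if spec.satisfies('+cuda'):
    cuda_arch = spec.variants['cuda_arch'].value
    if cuda_arch != 'none':
        args.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch[0]))
```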
* new package: py-tensorflow-datasets
- includes new dependency package: py-tensorflow-metadata
* Update var/spack/repos/builtin/packages/py-tensorflow-datasets/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-tensorflow-metadata/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added gdb Dependency
When using spack to install cgdb, a spack-built gdb is necessary to
avoid dynamic link errors.
- Added maintainer: tuxfan
- Set preferred to 'master' (best version for spack currently)
* Update: The gdb dependency added by this PR is for runtime
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
This adds support in spack for both build/install tests (spack install
--run-tests) and post-install smoke tests (spack test run).
Hpctoolkit itself only recently added tests, so for now, this only
applies to branch master.
To use this, you can "spack install intel-oneapi-compilers" and then
"spack compiler add" the new compiler. You would need to install with
"spack install ginkgo+oneapi%dpcpp"
- Use .tar.gz archive
- Update 2.3.3 to use .tar.gz archive (and update checksum)
- autoreconf dependency is no-longer required
- The new version depends on gperf
- Add sensei to the SDK with appropriate propagations
- Rework the SENSEI package's variants to avoid providing broken builds
- Turn off miniapps by default, these are examples and not critical to using sensei
This PR updates the vtk package to use new variable names and adds some
dependencies.
- add version 9.1.0
- add version 1.4.2 to gl2ps package. This is needed to use gl2ps as a
dependency.
- new package: utf8cpp, used as a dependency for version 9 (see the sketch after this list).
- add dependencies when possible
- pugixml
- libogg
- libtheora
- utf8cpp
- gl2ps
- proj
- turn off configuring MPI if ~mpi
- always use the package-provided FindHDF5.cmake for versions up to 8.
Version @9: already does this so does not need a patch.
- use new CMake variables for version 9
- remove unused CMake variables depending on version
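A sketch of the version-conditional dependencies described above, inside the vtk package class; the exact `when=` ranges beyond what the list states are assumptions:
```python
# utf8cpp is only used by the 9.x releases; gl2ps must be new
# enough (1.4.2) to be usable as an external dependency.
depends_on('utf8cpp', when='@9:')
depends_on('gl2ps@1.4.2:', when='@8.2:')
```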
* py-packaging: add 21.3
* Update var/spack/repos/builtin/packages/py-packaging/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Part of the gpgme testsuite runs by default even during the normal
make and make install phases, creating a public keyring in ~/.gnupg.
Prevent this, avoid build failures in containers caused by another
problem in the test suite, and fix a test case of the new 0.16.0 release.
When the uidmap tools are installed on a system, this makes it possible
to run containers as an unprivileged user (rootless and daemonless),
similar to singularity, but with a familiar CLI: "alias docker=podman".
This is helpful for running e.g. spack builds in containers to reproduce
build failures from CI without requiring an installation of docker.
The required dependencies of podman are added as well.
Fix to not attempt to patch a nonexistent file for old versions
when building with gcc-11. Skip the build-time tests, as they all
access the X DISPLAY and open many windows on the screen.
I was browsing package metadata, as one does on a Sunday, and stumbled across a new kind of version attribute - brancch! I suspect this is supposed to be "branch."
Update flux-core and flux-sched package.py to include latest releases.
For flux-sched:
- Add patch to disable false-positive-happy valgrind test
- pin yaml-cpp to 0.6.3 due to issue described at:
https://github.com/flux-framework/flux-sched/issues/886
Starting with meson 0.60, unknown args produce errors and
the -Dx11 arg is only present in @:2.40
https://gitlab.gnome.org/GNOME/gtk-osx/-/issues/44
Add tiff variant: disabled by default since it partially fails the tests.
Only libtiff@:3.9 passes, but these old versions have severe security issues.
Deprecate @:2.41 as they are affected by the high-severity CVE-2021-20240:
https://nvd.nist.gov/vuln/detail/CVE-2021-20240
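A sketch of the version-guarded meson option, assuming a MesonPackage-based recipe; the option value shown is illustrative:
```python
def meson_args(self):
    args = []
    # meson@0.60: errors out on unknown options, and -Dx11 only
    # exists in releases up to 2.40, so guard it by version.
    if self.spec.satisfies('@:2.40'):
        args.append('-Dx11=true')
    return args
```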
Remove a custom bootstrapping procedure to
use spack.bootstrap instead
Modifications:
* Reference count the bootstrap context manager
* Avoid SpackCommand to make the bootstrapping
procedure more transparent
* Put back requirement on patchelf being in PATH for unit tests
* Add an e2e test to check bootstrapping patchelf
* Fix version constrains in py-ipykernel and py-ipython
Before the fix:
```
$ spack spec py-ipykernel@6.4.1 ^py-jupyter-client@7.0.6
==> Error: py-ipykernel@6.4.1 ^py-jupyter-client@7.0.6 is unsatisfiable, conflicts are:
no version satisfies the given constraints
```
After the fix:
```
```
(thank god the old concretizer is still there - it provides sane error messages!)
* Fix py-ipython recipe
* Revert "Fix py-ipython recipe"
This reverts commit d65071665f.
* added package gptune with all its dependencies: adding py-autotune, pygmo, py-pyaml, py-gpy, py-lhsmdu, py-hpbandster, pagmo2, py-opentuner; modifying superlu-dist, py-scikit-optimize
* adding gptune package
* minor fix for macos spack test
* update patch for py-scikit-optimize; update test files for gptune
* fixing gptune package style error
* fixing unit tests
* a few changes reviewed in the PR
* improved gptune package.py with a few newly added/improved dependencies
* fixed a few style errors
* minor fix on package name py-pyro4
* fixing more style errors
* Update var/spack/repos/builtin/packages/py-scikit-optimize/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* resolved a few issues in the PR
* fixing file permissions
* a few minor changes
* style correction
* minor correction to jq package file
* Update var/spack/repos/builtin/packages/py-pyro4/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixing a few issues in the PR
* adding py-selectors34 required by py-pyro4
* improved the superlu-dist package
* improved the superlu-dist package
* moree changes to gptune and py-selectors34 based on the PR
* Update var/spack/repos/builtin/packages/py-selectors34/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
I think this test should be removed, but while it stays, it should at
least follow the symlink, because it fails for me if I let spack build
patchelf and have a symlink in a view.
- Prevent `-j` flags to `make`, which have been known to cause problems
with Racket builds.
- Add variants for various common build flags, including support
for both versions of the Racket VM environment.
In addition:
- Prefer the minimal release to improve install times. Bells and
whistles carry their own runtime dependencies and should be
installed via `raco`. An enterprising user may even create a
`RacoPackage` class to make spack aware of `raco` installed packages.
- Match the official version numbering scheme.
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code
Updating the vendored dependencies will be done in separate PRs
* py-cython: add 3.0.0a9
* Update var/spack/repos/builtin/packages/py-cython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New version: py-singledispatch 3.7.0
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Make CUDA and ROCm architecture conditional
fixes #14337
The variants that specify which architecture to use
for CUDA and ROCm are now conditional on +cuda and
+rocm respectively.
* cp2k: make all CUDA related variants conditional on +cuda
* New version: py-pyzmq 22.3.0
* Update var/spack/repos/builtin/packages/py-pyzmq/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add connection specification to mirror creation
This allows each mirror to contain information about the credentials
used to access it.
Update command and tests based on comments
Switch to only "long form" flags for the s3 connection information.
Use the "any" function instead of checking for an empty list when looking
for s3 connection information.
Split test to use the access token separately from the access id and key.
Use long flag form in test.
Add endpoint_url to available S3 options.
Extend the special parameters for an S3 mirror to accept the
endpoint_url parameter.
Add a test.
* Add connection information per URL not per mirror
Expand the mirror-based connection information to be per-URL.
This will allow a user to specify different S3 connection information
for both the fetch and the push URLs.
Add a parameter for "profile", another way of storing the id/secret pair.
* Switch from "access_profile" to "profile"
* py-cycler: add 0.11.0
* Update var/spack/repos/builtin/packages/py-cycler/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New version: py-qtconsole 5.2.0
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New version: openldap 2.6.0;
fix recipe for groff (requires pkg-config to find uchardet);
fix recipe for openldap (requires groff to build documentation)
* Restrict openldap versions of py-python-ldap and percona-server
* Update var/spack/repos/builtin/packages/groff/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This causes `otfconfig` to also list zlib's library directory.
Otherwise, `-lz` cannot be found. Also remove `--with-zlibsymbols`,
which doesn't seem to be supported anymore.
Include several patches to vtk 8.1 for building on a system
with no system-installed X11 libraries or include files.
Pin specific versions of dependent packages that are known to work with 3.2.1.
Tested on spock.olcf.ornl.gov. The GUI came up and rendered images
and an image was successfully saved using off screen rendering from
data from curv2d.silo.
Remove the "get_executable" function from the
spack.bootstrap module. Now "flake8", "isort",
"mypy" and "black" will use the same
bootstrapping method as GnuPG.
Currently Spack vendors `pytest` at a version which is three major
versions behind the latest (3.2.5 vs. 6.2.4). We do that since v3.2.5
is the latest version supporting Python 2.6. Remaining so far
behind the currently supported versions, though, might introduce
incompatibilities and is certainly technical debt.
This PR modifies Spack to:
- Use the vendored `pytest@3.2.5` only as a fallback solution,
if the Python interpreter used for Spack doesn't provide a newer one
- Be able to parse `pytest --collect-only` in all the different output
formats from v3.2.5 to v6.2.4 and use it consistently for `spack unit-test --list-*`
- Updating the unit tests in Github Actions to use a more recent `pytest` version
The previous workaround of using CMAKE_INSTALL_RPATH=ON was used to
avoid CMake trying to write an RPATH into the linker script libcxx.so,
which is nonsensical. See commit f86ed1e.
However, CMAKE_INSTALL_RPATH=ON seems to disable the build RPATH, which
breaks LLVM during the build when it has to locate its build-time shared
libraries (e.g. libLLVM.so). That required yet another workaround, where
some shared libraries were installed "by hand", so that they were picked
up from the install libdir. See commit 8a81229.
This was a dirty workaround, and also makes it impossible to use ninja,
since we explicitly invoked make.
This commit removes the two old workaround, and sets
LIBCXX_ENABLE_STATIC_ABI_LIBRARY=ON, so that libc++abi.a is linked into
libc++.so, which makes it enough to link with -lc++ or invoke clang++
with -stdlib=libc++, so that our install succeeds and linking LLVM's c++
standard lib is still easy.
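A minimal sketch of the replacement, in the package's `cmake_args`:
```python
def cmake_args(self):
    args = []
    # Bake libc++abi.a into libc++.so so that plain `-lc++` (or
    # `clang++ -stdlib=libc++`) just works, with no RPATH tricks
    # and no hand-installed build-time shared libraries.
    args.append(self.define('LIBCXX_ENABLE_STATIC_ABI_LIBRARY', True))
    return args
```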
Some packages use a 64_ or _64 symbol suffix for the ilp64 (= 64-bit
integers) interface for BLAS. In particular, if we want to support shim
libraries like libblastrampoline, which cover both the 32- and 64-bit
integer versions of BLAS, it must be possible to distinguish between
the two.
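A sketch of how a BLAS provider can expose this, e.g. in openblas; the variant name and value set are assumptions:
```python
# Let dependents request a suffixed ILP64 interface explicitly.
variant('symbol_suffix', default='none', multi=False,
        values=('none', '64_', '_64'),
        description='Set a symbol suffix for the ILP64 interface')
```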
mesa inherits MesonPackage (since October 2020) which depends on Py@3.
The conflicts('mesa') enables a regular build of `qt@5.7:5.15+webkit`
without having to specify the exact version by causing the concretizer
to select mesa18 which does not depend on python@3.
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@ait.ac.at>
This type of error is skipped:
make[1]: *** [Makefile:222: /tmp/user/spack-stage/.../spack-src/usr/lib/julia/libopenblas64_.so.so] Error 1
but it's useful to have it, especially when a package sets a variable
incorrectly in makefiles
* New version: py-pylint 2.8.2; new package py-platformdirs
* Update var/spack/repos/builtin/packages/py-platformdirs/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pylint/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Intel mpi comes with an installation of libfabric (which it needs as a
dependency). It can use other implementations of libfabric at runtime
though, so if you install a package that depends on `mpi` and
`libfabric`, you can specify `intel-mpi+external-libfabric` and ensure
that the Spack-built instance is used (both by `intel-mpi` and the
root).
Apply analogous change to intel-oneapi-mpi.
* New version: py-pyrsistent 0.18.0
* Update package.py
* Update var/spack/repos/builtin/packages/py-pyrsistent/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New version: py-pytest 6.2.5
* Update var/spack/repos/builtin/packages/py-pytest/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Use setup_run_environment to search for libraries and set env variables for module generation.
Libraries are installed with CMAKE_INSTALL_LIBDIR, which can be lib or lib64 depending on the machine, which makes it impossible to hardcode through modules.yaml.
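A sketch of the approach inside the package class; `find_libraries` locates the installed libraries wherever CMAKE_INSTALL_LIBDIR put them:
```python
def setup_run_environment(self, env):
    # The install libdir may be lib or lib64 depending on the host,
    # so search for the libraries instead of hardcoding a path.
    libs = find_libraries('lib*', root=self.prefix, recursive=True)
    for libdir in sorted(set(libs.directories)):
        env.prepend_path('LD_LIBRARY_PATH', libdir)
```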
If the perl that perl-forks is built against is non-threaded, the build
system will drop into interactive mode to ask about simulating ithreads.
This causes the build to hang. Set FORKS_SIMULATE_USEITHREADS to avoid
going into interactive mode.
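A minimal sketch, assuming the perl-forks package class; the exact value set is an assumption:
```python
def setup_build_environment(self, env):
    # Pre-answer the "simulate ithreads?" prompt so the build does
    # not hang when the underlying perl is non-threaded.
    env.set('FORKS_SIMULATE_USEITHREADS', '1')
```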
If building Qt on a system with a recent glibc but an older kernel, it
is possible that Qt configures features based on glibc that are not
supported in the kernel. This PR tests the kernel version and ensures
certain features are disabled if the kernel does not support them.
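A rough sketch of the idea; the kernel thresholds are the kernels that introduced the syscalls, while the exact Qt feature flags passed by the package are assumptions:
```python
import platform

from spack.version import Version


def kernel_feature_args():
    args = []
    kernel = Version(platform.release().split('-')[0])
    if kernel < Version('3.17'):
        args.append('-no-feature-getentropy')  # getentropy: Linux 3.17
    if kernel < Version('4.11'):
        args.append('-no-feature-statx')       # statx: Linux 4.11
    return args
```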
* New version: py-prettytable 2.4.0; update homepage
* Update var/spack/repos/builtin/packages/py-prettytable/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New version: py-prometheus-client 0.12.0; new dependency (py-twisted) version 21.7.0 + its dependencies
* Apply suggestions from code review (1/?)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Changes from review (2/?)
* Changes from review (3/?)
* Changes from review (4/?)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- disable graphblas by default (very slow to compile)
- fix patch upperbound for cuda 11
- remove find_system_libs; not sure why it was added in the first place,
but it makes spack rather unusable as it introduces an rpath to /lib/...
* New version: py-oauthlib 3.1.1
* Update var/spack/repos/builtin/packages/py-oauthlib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Generalize env var PYTHON to avoid version conflicts
* Use available python executable
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* upcxx: Update the UPC++ package to 2021.9.0
* Add the new release, and a missing older one.
* Remove the spack package cruft for supporting the obsolete build system that
was present in older versions that are no longer supported.
* General cleanups.
Support for library versions older than 2020.3.0 is officially retired,
for two reasons:
1. Releases prior to 2020.3.0 had a required dependency on Python 2,
which is [officially EOL](https://www.python.org/doc/sunset-python-2/)
as of Jan 1 2020, and is no longer considered secure.
2. (Most importantly) The UPC++ development team is unable/unwilling to
support releases more than two years old. UPC++ provides robust
backwards-compatibility to earlier releases of UPC++ v1.0, with very
rare well-documented/well-motivated exceptions. Users are strongly
encouraged to update to a current version of UPC++.
NOTE: Most of the lines changed in this commit are simply re-indentation,
and thus might be best reviewed in a diff that ignores whitespace.
* upcxx: Detect Cray XC more explicitly
This change is necessary to prevent false matches occurring on new Cray Shasta
systems, which do not use the aries network but were incorrectly being treated
as a Cray XC + aries platform.
UPC++ has not yet deployed official native support for Cray Shasta, but this
change is sufficient to allow building the portable backends there.
* py-setupmeta: add new package
* Update var/spack/repos/builtin/packages/py-setupmeta/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When running `spack install --log-format junit|cdash ...`, install
errors were ignored. This made spack continue building dependents of
failed install, ignoring `--fail-fast`, and exit 0 at the end.
* Add hdf5-vol-async package.
Add HDF5 1.13.0-rc6 version for building vol-async.
* Style test required another blank line.
* Change hdf5 dependency to develop-1.13+mpi+threadsafe.
* Update args for hdf5-vol-async.
Open MPI currently fails to build with scheduler=slurm if +pmix is
not given with a fatal error due to ``config_args +=
self.with_or_without('pmix', ...)`` resulting in --without-pmix.
However, Open MPI's configure points out "Note that Open MPI does
not support --without-pmix."
The PR only adds "--with-pmix=PATH" if +pmix is part of the spec.
Otherwise, nothing is added and Open MPI can fall back to its
internal PMIX sources.
(The other alternative would be to depend on +pmix in for
scheduler=slurm as is done for +pmi.)
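A minimal sketch of the change in openmpi's `configure_args`:
```python
# Open MPI does not support --without-pmix, so never emit it; with
# +pmix point configure at the spack-built pmix, otherwise add
# nothing and let Open MPI fall back to its internal PMIX sources.
if spec.satisfies('+pmix'):
    config_args.append('--with-pmix={0}'.format(spec['pmix'].prefix))
```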
- Added new checksums for 4.3.
- Now using llvm-amdgpu ~openmp in order to use the rocm-device-libs
build as external project in llvm-amdgpu package. We still need
to pull device-libs in using resource for the build as some headers
are not installed.
- Updated symlink creation to now remove an existing link if present
to avoid issues on partial reinstalls when debugging.
- Adjusted the flang_warning to be a part of CMake options instead of
a filter_file for better compatibility.
- The dependency on hsa-rocr-dev created some problems as type was changed
to the default build/link. This issue was because ROCr uses libelf and
the build of openmp expects elfutils. When link is specified libelf
was being found in the path first, causing errors. This was
introduced with the llvm-amdgpu external project build of device-libs.
- On a bare-bones installation of SLES 15 it was noted that libquadmath0 was
needed as a dependency. On Ubuntu 18.04, gcc-multilib was also needed.
* Workaround for libelf headers being used instead of elfutils.
Due to Kitware API changes, default ANTs builds were failing, presumably for all versions (https://github.com/ANTsX/ANTs/issues/1236).
This commit defaults BUILD_TESTING to OFF, preventing calls against
these APIs and fixing all versions.
Note that the ANTs test suite was not clean anyway (e.g. ANTs/#842).
* Python tests: allow importing weirdly-named modules
e.g. with dashes in name
* SIP tests: allow importing weirdly-named modules
* Skip modules with invalid names
* Changes from review
* Update from review
* Update from review
* Cleanup
* New version: py-lz4 3.1.3; use external lz4 instead of bundled one
* Update var/spack/repos/builtin/packages/py-lz4/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Changes from review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add spack package py-ytopt-team-ytopt and required dependencies.
* Removed old ytop package.
* Added author as maintainer.
* Fix style.
* Update var/spack/repos/builtin/packages/py-config-space/package.py
Update python dependency to 3.7
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-config-space/package.py
Remove run dependency from py-cython.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-config-space/package.py
Added run dependency type for py-pyparsing.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Updated description of py-dh-scikit-optimize.
* Source py-dh-scikit-optimize from PyPI.
* Added latest py-dh-scikit-optimize version.
* Made plots option False by default for py-dh-scikit-optimize.
* Removed 0.9.4 as it needs additional dependencies.
* Added version dependencies.
* Added missing py-joblib dependencies.
* Added run dependency type.
* Added python 2.7+ as supported for py-pyaml.
* Change py-config-space to py-configspace.
* Added dependency on python 3.6+.
* Fix py-configspace package naming.
* Changed py-autotune to py-ytopt-autotune.
* Update var/spack/repos/builtin/packages/py-pyaml/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added debug variant with py-ray dependency.
* Added missing py-mpi4py missing dependency.
* Removed erroneous variant.
* Added debug variant to py-ray.
* Fix indentation.
* Removed debug variant of py-ray.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Noting that a missing numeric_limits include was the cause of the compile
issues with gcc-11, I tested adding `-include limits`, fixing @5.9:5.14
%gcc@11. Therefore, we can replace the conflicts('%gcc@11:', when='@5.9:5.14').
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@ait.ac.at>
* Prevent additional properties to be in the answer set when reusing specs
fixes #27237
The mechanism to reuse concrete specs relies on imposing
the set of constraints stemming from the concrete spec
being reused.
We also need to prevent that other constraints get added
to this set.
See #25249 and https://github.com/spack/spack/pull/27159#issuecomment-958163679.
This adds `spack load --list` as an alias for `spack find --loaded`. The new command is
not as powerful as `spack find --loaded`, as you can't combine it with all the queries or
formats that `spack find` provides. However, it is more intuitively located in the command
structure in that it appears in the output of `spack load --help`.
The idea here is that people can use `spack load --list` for simple stuff but fall back to
`spack find --loaded` if they need more.
- add help to `spack load --list` that references `spack find`
- factor some parts of `spack find` out to be called from `spack load`
- add shell tests
- update docs
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Richarda Butler <39577672+RikkiButler20@users.noreply.github.com>
Reformulate variant rules so that we minimize both
1. The number of non-default values being used
2. The number of default values not-being used
This is crucial for MV variants where we may have
more than one default value
In our tests, we use concrete specs generated from mock packages,
which *only* occur as inputs to the solver. This fixes two problems:
1. We weren't previously adding facts to encode the necessary
`depends_on()` relationships, and specs were unsatisfiable on
reachability.
2. Our hash lookup for reconstructing the DAG does not
consider that a hash may have come from the inputs.
Concrete specs that are already installed or that come from a buildcache
may have compilers and variant settings that we do not recognize, but that
shouldn't prevent reuse (at least not until we have a more detailed compiler
model).
- [x] make sure compiler and variant consistency rules only apply to
built specs
- [x] don't validate concrete specs on input, either -- they're concrete
and we shouldn't apply today's rules to yesterday's build
In switching to hash facts for concrete specs, we lost the transitive facts
from dependencies. This was fine for solves, because they were implied by
the imposed constraints from every hash. However, for `spack diff`, we want
to see what the hashes mean, so we need another mode for `spec_clauses()` to
show that.
This adds a `expand_hashes` argument to `spec_clauses()` that allows us to
output *both* the hashes and their implications on dependencies. We use
this mode in `spack diff`.
- [x] Get rid of forgotten maximize directive.
- [x] Simplify variant handling
- [x] Fix bug in treatment of defaults on externals (don't count
non-default variants on externals against them)
Variants in concrete specs are "always" correct -- or at least we assume
them to be b/c they were concretized before. Their variants need not match
the current version of the package.
Multi-valued variants previously maximized default values to handle
cases where the default contained two values, e.g.:
variant("foo", default="bar,baz")
This is because previously we were minimizing non-default values, and
`foo=bar`, `foo=baz`, and `foo=bar,baz` all had the same score, as
none of them had any "non-default" values.
This commit changes the approach and considers a non-default value
to be either a value set to something not default *or* the absence
of a default value from the set value. This allows multi- and
single-valued variants to be handled the same way, with the same
minimization criterion. It also means that the "best" value for every
optimization criterion is now zero, which allows us to make useful
assumptions about the optimization criteria.
Minimizing builds is tricky. We want a minimizing criterion because
we want to reuse the available installs, but we also want things that
have to be built to stick to *default preferences* from the package
and from the user. We therefore treat built specs differently and
apply a different set of optimization criteria to them. Spack's *first*
priority is to reuse what it can, but if it builds something, the built
specs will respect defaults and preferences.
This is implemented by bumping the priority of optimization criteria
for built specs -- so that they take precedence over the otherwise
topmost-priority criterion to reuse what is installed.
The scheme relies on all of our optimization criteria being minimizations.
That is, we need the case where all specs are reused to be better than
any built spec could be. Basically, if nothing is built, all the build
criteria are zero (the best possible) and the number of built packages
dominates. If something *has* to be built, it must be strictly worse
than full reuse, because:
1. it increases the number of built specs
2. it must have either zero or some positive number for all criteria
Our optimization criteria effectively sum into two buckets at once to
accomplish this. We use a `build_priority()` number to shift the
priority of optimization criteria for built specs higher.
The constraints in the `spack diff` test were very specific and assumed
a lot about the structure of what was being diffed. Relax them a bit to
make them more resilient to changes.
Make the first minimization conditional on whether `--reuse` is enabled in the solve.
If `--reuse` is not enabled, there will be nothing in the set to minimize and the
objective function (for this criterion) will be 0 for every answer set.
Many of the integrity constraints in the concretizer are there to restrict how solves are done, but
they ignore that past solves may have had different initial conditions. For example, for things
we're building, we want the allowed variants to be restricted to those currently in Spack packages,
but if we are reusing a concrete spec, we need to be flexible about names that may have existed in
old packages.
Similarly, restrictions around compatibility of OS's, compiler versions, compiler OS support, etc.
are really only about what is supported by the *current* set of compilers/build tools known to
Spack, not about what we may get from concrete specs.
- [x] restrict certain integrity constraints to only apply to packages that we need to build, and
omit concrete specs from consideration.
The OS logic in the concretizer is still the way it was in the first version.
Defaults are implemented in a fairly inflexible way using straight logic. Most
of the other sections have been reworked to leave these kinds of decisions to
optimization. This commit does that for OS's as well.
As with targets, we optimize for target matches. We also try to optimize for
OS matches between nodes. Additionally, this commit adds the notion of
"OS compatibility" where we allow for builds to depend on binaries for certain
other OS's. e.g, for macos, a bigsur build can depend on an already installed
(concrete) catalina build. One cool thing about this is that we can declare
additional compatible OS's later, e.g. CentOS and RHEL.
If we don't rename, Spack will fail with:
```
ImportError: cannot bootstrap the "clingo" Python module from spec "clingo-bootstrap@spack+python %gcc target=x86_64" due to the following failures:
'spack-install' raised ValueError: Invalid config scope: 'bootstrap'. Must be one of odict_keys(['_builtin', 'defaults', 'defaults/cray', 'bootstrap/cray', 'disable_modules', 'overrides-0'])
Please run `spack -d spec zlib` for more verbose error messages
```
in case bootstrapping from binaries fails and we are
falling back to bootstrapping from sources.
ensure that none of ^intel-mkl, ^intel-mpi, and ^mkl are used, unless
the compiler is intel.
Fix bad logic in the src/src_xs/m_makespectrum.f90 file in the oxygen version.
Add the -fallow-argument-mismatch for gcc >= 10.
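A sketch of one common way to inject such a flag, via the package's `flag_handler`:
```python
def flag_handler(self, name, flags):
    # gcc@10: made Fortran argument-type mismatches a hard error,
    # so relax the check for the Fortran flags only.
    if name == 'fflags' and self.spec.satisfies('%gcc@10:'):
        flags.append('-fallow-argument-mismatch')
    return (flags, None, None)
```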
* scr: 3.0rc2 release, variants and deps updates
This adds 3.0rc2 release for end users to aid in testing scr for
upcoming 3.0 release.
Included in this change:
- Require most recent component versions for this release
- Add a variant for PDSH as it is now an optional dependency with
this release
- Add bbapi and datawarp (dw) variants
- bbapi_fallback variant now requires bbapi variant with latest
release
- Add variants to enable/disable examples and tests
- Add shared variant and current conflicts with ~shared
- Update cmake_args to account for added variants where needed
Additional updates:
- Add maintainers
- Use lists and for loops to clean up repetitive code involving all
components
- Use self.define and self.define_from_variant to clean up cmake_args
- Use consistent quoting throughout package
* Un-deprecate v2 and legacy
* Use new conditional variants
* trilinos: fix @13.0.1+tpetra^cuda@11
* Mark CUDA conflict with old versions and always define TPL
* trilinos: patch doesn't build so just mark as conflict
A common question from users has been how to model variants
that are new in new versions of a package, or variants that are
dependent on other variants. Our stock answer so far has been
an unsatisfying combination of "just have it do nothing in the old
version" and "tell Spack it conflicts".
This PR enables conditional variants, on any spec condition. The
syntax is straightforward, and matches that of previous features.
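For example (a sketch; the variant names are illustrative):
```python
# A variant that only exists on new enough versions:
variant('shared', default=True, description='Build shared libraries',
        when='@2.0:')
# A variant that only exists when another variant is enabled:
variant('cuda_arch', default='none', values=('none', '70', '80'),
        multi=True, description='CUDA architecture', when='+cuda')
```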
* GnuPG: allow bootstrapping from buildcache and sources
* Add a test to bootstrap GnuPG from binaries
* Disable bootstrapping in tests
* Add e2e test to bootstrap GnuPG from sources on Ubuntu
* Add e2e test to bootstrap GnuPG on macOS
* trilinos: add @13.2.0, and switch default to cxxstd=14
* trilinos: fix python dependency when using +ifpack or +ifpack2
* trilinos: add conflict for ~epetra +ml when @13.2.0:
* trilinos: keep 13.0.1 as the preferred version
* Update var/spack/repos/builtin/packages/trilinos/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* update
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
This PR adds error message sentinels to the clingo solve, attached to each of the rules that could fail a solve. The unsat core is then restricted to these messages, which makes the minimization problem tractable. Errors that can only be generated by a bug in the logic program or generating code are prefaced with "Internal error" to make clear to users that something has gone wrong on the Spack side of things.
* minimize unsat cores manually
* only errors messages are choices/assumptions for performance
* pre-check for unreachable nodes
* update tests for new error message
* make clingo concretization errors show up in cdash reports fully
* clingo: make import of clingo.ast parsing routines robust to clingo version
Older `clingo` has `parse_string`; newer `clingo` has `parse_files`. Make the
code work with both.
* make AST access functions backward-compatible with clingo 5.4.0
Clingo AST API has changed since 5.4.0; make some functions to help us
handle both versions of the AST.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This removes `-lpytrilinos` from Makefile.export.Trilinos so that C code
trying to link against a Trilinos built with PyTrilinos does not fail
due to undefined references to python routines (libpytrilinos is only
used when importing PyTrilinos in python, in which case those references
are already defined by Python).
There was already a bit of code to do something similar for C codes
importing Trilinos via a CMake mechanism; this extends that to a basic
Makefile mechanism as well. This patch also updates the comments to
remove a stale link discussing this issue, replacing it with links to
some Trilinos issue reports related to the matter.
After #26608 I got a report about missing rpaths when building a
downstream package independently using a spack-installed toolchain
(@tmdelellis). This occurred because the spack-installed libraries were
being linked into the downstream app, but the rpaths were not being
manually added. Prior to #26608 autotools-installed libs would retain
their hard-coded path and would thus propagate their link information
into the downstream library on mac.
We could solve this problem *if* the mac linker (ld) respected
`LD_RUN_PATH` like it does on GNU systems, i.e. adding `rpath` entries
to each item in the environment variable. However on mac we would have
to manually add rpaths either using spack's compiler wrapper scripts or
manually (e.g. using `CMAKE_BUILD_RPATH` and pointing to the libraries of
all the autotools-installed spack libraries).
The easier and safer thing to do for now is to simply stop changing the
dylib IDs.
gconf depends on gettext and libintl (dep: intltool)
glibmm, gtkmm, libcanberra and cups need pkgconfig
glibmm needs libsigc++ older than 2.9x (the 2.9x releases are 3.x pre-releases)
libsigc++@:2.9 depends on m4 for the build
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
- add version 6.0.5
- add patch to allow fsl to use newer gcc versions
- add patch to allow fsl to use newer cuda versions
- remove constraints on gcc and cuda versions
- add filters to prevent using system headers and libraries
- clean up the installed tree
The `--generic` argument allows printing the best generic target for the
current machine. This can be quite handy when wanting to find the
generic architecture to use when building a shared software stack for
multiple machines.
This PR adds a "spack tags" command to output package tags or
(available) packages with those tags. It also ensures each package
is listed in the tag cache ONLY ONCE per tag.
glib has a few tests which have external dependencies or
try to access the X server. We cannot run those.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* ci: Enable more packages in the DVSDK CI pipeline
* doxygen: Add conflicts for gcc bugs
* dray: Add version constraints for api breakage with newer deps
5.14.2 fails with %gcc@11 with Error: 'numeric_limits' is not a class template
5.8.0 has multiple compile failures as well: Extend the conflict to those too.
- Also fix the configure of @5.6.3 (tested with %gcc@11)
* New versions: py-gitdb 4.0.7, 4.0.8, 4.0.9
* Update var/spack/repos/builtin/packages/py-gitdb/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New versions of py-google-auth and py-google-auth-oauthlib
* Update var/spack/repos/builtin/packages/py-google-auth/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New version: py-html5lib 1.1
* Update var/spack/repos/builtin/packages/py-html5lib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-html5lib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update package.py
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/py-html5lib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-html5lib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New version: py-importlib-resources 5.3.0
* Update var/spack/repos/builtin/packages/py-importlib-resources/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-slurm-pipeline: Add 4.0.4, Fix base.py: import six for @:3
Before 4.0.0, slurm_pipeline/base.py has: `from six import string_types`
* Added depends_on('py-pytest@6.2.2:', type='build') as requested by Adam
* remove comment requested to be removed
- [x] Allow adding enumerated types and types whose default value is forbidden by the schema
- [x] Add a test for using enumerated types in the tests for `spack config add`
- [x] Make `config add` tests use the `mutable_config` fixture so they do not
affect other tests
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
The build-time testsuite which would be run when building
with tests needs docker. Check that it exists before
attempting to execute the tests.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
If you don't format `spack.yaml` correctly, `spack config edit` still fails and
you have to edit your `spack.yaml` manually.
- [x] Add some code to `_main()` to defer `ConfigFormatError` when loading the
environment, until we know what command is being run.
- [x] Make `spack config edit` use `SPACK_ENV` instead of the config scope
object to find `spack.yaml`, so it can work even if the environment is bad.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
`spack config get <section>` was erroneously returning just the `spack.yaml`
for the environment.
It should return the combined configuration for that section (including
anything from `spack.yaml`), even in an environment.
- [x] reorder conditions in `cmd/config.py` to fix
`spack --debug config edit` was not working properly -- it would not show a
stack trace for configuration errors.
- [x] Rework `_main()` and add some notes for maintainers on where things need
to go for configuration to work properly.
- [x] Move config setup to *after* command-line parsing is done.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
`main()` has grown, and in some cases code that can generate errors has gotten
outside the top-level try/catch in there. This means that simple errors like
config issues give you large stack traces, which shouldn't happen without
`--debug`.
- [x] Split `main()` into `main()` for the top-level error handling and
`_main()` with all logic.
There were some loose ends left in #26735 that cause errors when
using `SPACK_DISABLE_LOCAL_CONFIG`.
- [x] Fix hard-coded `~/.spack` references in `install_test.py` and `monitor.py`
Also, if `SPACK_DISABLE_LOCAL_CONFIG` is used, there is the issue that
`$user_config_path`, when used in configuration files, makes no sense,
because there is no user config scope.
Since we already have `$user_cache_path` in configuration files, and since there
really shouldn't be *any* data stored in a configuration scope (which is what
you'd configure in `config.yaml`/`bootstrap.yaml`/etc.), this just removes
`$user_config_path`.
There will *always* be a `$user_cache_path`, as Spack needs to write files, but
we shouldn't rely on the existence of a particular configuration scope in the
Spack code, as scopes are configurable, both in number and location.
- [x] Remove `$user_config_path` substitution.
- [x] Fix reference to `$user_config_path` in `etc/spack/defaults/bootstrap.yaml`
to refer to `$user_cache_path`, which is where it was intended to be.
* Deactivate previous env before activating new one
Currently on develop you can run `spack env activate` multiple times to switch
between environments, but they leave traces, even though Spack only supports
one active environment at a time.
Currently:
```console
$ spack env create a
$ spack env create b
$ spack env activate -p a
[a] $ spack env activate -p b
[b] [a] $ spack env activate -p a
[a] [b] [a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
/path/to/environments/b/.spack-env/view/share/man
/path/to/environments/b/.spack-env/view/man
```
This PR fixes that:
```console
$ spack env activate -p a
[a] $ spack env activate -p b
[b] $ spack env activate -p a
[a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
```
Currently spack is a bit of a bad actor as a zsh plugin, and it was my
fault. The autoload and compinit should really be handled by the user,
as was made abundantly clear when I found spack was doing completion
initialization for *all* of my plugins due to a deferred setup that was
getting messed up by it.
Making this conditional took spack load time from 1.5 seconds (with
module loading disabled) to 0.029 seconds. I can actually afford to load
spack by default with this change in.
Hopefully someday we'll do proper zsh completion support, but for now
this helps a lot.
* use zsh hist expansion in place of dirname
* only run (bash)compinit if compdef/complete missing
* add zsh compiled files to .gitignore
* move changes to .in file, because spack
* Drastically improve YamlFilesystemView file removal via batching
The `remove_file` routine has to check if the file is owned by multiple packages, so it doesn't
remove necessary files. This is done by the `get_all_specs` routine, which walks the entire
package tree. With large numbers of packages on shared file systems, this can take seconds
per file tree traversal, which adds up extremely quickly. For example, a single deactivate
of a largish python package in our software stack on GPFS took approximately 40 minutes.
This patch simply replaces `remove_file` with a batch `remove_files` routine. This routine
removes a list of files rather than a single file, requiring only one traversal per batch. In
practice this means a package can be removed in seconds, rather than potentially hours,
essentially a ~100x speedup (ignoring initial deactivation logic, which takes about 3 minutes
in our test setup).
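A simplified sketch of the batching idea (the helper names are assumptions, not the actual view code):
```python
import os


def remove_files(view, spec, files):
    """Remove ``spec``'s ``files``, walking the other specs only once."""
    # One expensive get_all_specs() traversal per batch, not per file.
    still_needed = set()
    for other in view.get_all_specs():
        if other != spec:
            still_needed.update(files_owned_by(other))  # hypothetical helper

    for path in files:
        if path not in still_needed and os.path.lexists(path):
            os.remove(path)
```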
* Fix sbang hook for non-writable files
PR #26793 seems to have broken the sbang hook for files with missing
write permissions. Installing perl now breaks with the following error:
```
==> [2021-10-28-12:09:26.832759] Error: PermissionError: [Errno 13] Permission denied: '$SPACK/opt/spack/linux-fedora34-zen2/gcc-11.2.1/perl-5.34.0-afuweplnhphcojcowsc2mb5ngncmczk4/bin/cpanm'
```
Temporarily add write permissions to the original file so it can be
overwritten with the patched one.
And test that file permissions are preserved in sbang even for non-writable files
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
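A sketch of the permission handling, assuming the patched content is written back in place:
```python
import os
import stat


def patch_sbang_preserving_mode(path, rewrite):
    """Temporarily add user-write to ``path``, patch it, restore the mode."""
    saved_mode = os.stat(path).st_mode
    if not saved_mode & stat.S_IWUSR:
        os.chmod(path, saved_mode | stat.S_IWUSR)
    try:
        rewrite(path)  # e.g. writes the sbang-prefixed file contents
    finally:
        os.chmod(path, saved_mode)  # original permissions are preserved
```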
When relocating a binary distribution, Spack only checks files to see
if they are a link that needs to be relocated. Directories can be
such links as well, however, and need to undergo the same checks
and potential relocation.
`spack list` tests are not using mock packages for some reason, and many
are marked as potentially slow. This isn't really necessary; we don't need
6,000 packages to test the command.
- [x] update tests to use `mock_packages` fixture
- [x] remove `maybeslow` annotations
Currently Spack reads full files containing shebangs to memory as
strings, meaning Spack would have to guess their encoding. Currently
Spack has a fixed guess of UTF-8.
This is unnecessary, since e.g. the Linux kernel does not assume an
encoding on paths at all, it's just bytes and some delimiters on the
byte level.
This commit does the following:
1. Shebangs are treated as bytes, so that e.g. latin1 encoded files do
not throw UnicodeEncoding errors, and adds a test for this.
2. No more bytes than necessary are read to memory; we only have to read
until the first newline, and from there on we can copy the file byte by
byte instead of decoding and re-encoding text.
3. We cap the number of bytes read to 4096, if no newline is found
before that, we don't attempt to patch it.
4. Add support for luajit too.
This should make Spack both more efficient and usable for non-UTF8
files.
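A sketch of the byte-level handling in points 1-3 (illustrative, not the actual Spack code):
```python
def read_shebang(path, cap=4096):
    """Return the shebang line as raw bytes, or None if none qualifies."""
    with open(path, "rb") as f:  # bytes in, bytes out: no encoding assumed
        head = f.read(cap)
    if not head.startswith(b"#!"):
        return None
    end = head.find(b"\n")
    if end == -1:
        return None  # no newline within the first 4096 bytes: leave as-is
    return head[:end]
```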
Spack's `system` and `user` scopes provide ways for administrators and
users to set global defaults for all Spack instances, but for use cases
where one wants a clean Spack installation, these scopes can be undesirable.
For example, users may want to opt out of global system configuration, or
they may want to ignore their own home directory settings when running in
a continuous integration environment.
Spack also, by default, keeps various caches and user data in `~/.spack`,
but users may want to override these locations.
Spack provides three environment variables that allow you to override or
opt out of configuration locations:
* `SPACK_USER_CONFIG_PATH`: Override the path to use for the
`user` (`~/.spack`) scope.
* `SPACK_SYSTEM_CONFIG_PATH`: Override the path to use for the
`system` (`/etc/spack`) scope.
* `SPACK_DISABLE_LOCAL_CONFIG`: set this environment variable to completely
disable *both* the system and user configuration directories. Spack will
only consider its own defaults and `site` configuration locations.
And one that allows you to move the default cache location:
* `SPACK_USER_CACHE_PATH`: Override the default path to use for user data
(misc_cache, tests, reports, etc.)
With these settings, if you want to isolate Spack in a CI environment, you can do this:
```
export SPACK_DISABLE_LOCAL_CONFIG=true
export SPACK_USER_CACHE_PATH=/tmp/spack
```
This is a stop-gap approach until we have figured out how to deal with
the system and user config scopes more generally, as there are plans to
potentially / eventually get rid of them.
**User config**
Spack is a bit of a pain when you have:
- a shared $HOME folder across different systems.
- multiple Spack versions on the same system.
**System config**
- On shared systems with a versioned programming environment / toolkit,
system administrators want to provide config for each version (e.g.
21.09, 21.10) of the programming environment, and the user Spack
instance should be able to pick this up without a steep learning
curve.
- On shared systems the user should be able to opt out of the
hard-coded config scope in /etc/spack, since it may be incompatible
with their particular instance. Currently Spack can only opt out of all
config scopes through overrides with `"config:":`, `"packages:":`, but that
also drops the defaults config, which would have to be repeated, which
is undesirable, especially the lengthy packages.yaml.
An example use case is: having config in this folder:
```
/path/to/programming/environment/{version}/{compilers,packages}.yaml
```
and have `module load spack-system-config` set the variable
```
SPACK_SYSTEM_CONFIG_PATH=/path/to/programming/environment/{version}
```
where the user no longer has to worry about what `{version}` they are
on.
**Continuous integration**
Finally, there is the use case of continuous integration, which may
clone an arbitrary Spack version, which optimally should not pick up
system or user config from the previous run (like may happen in
classical bare metal non-containerized filesystem side effect ridden
jenkins pipelines). In fact this is very similar to how spack itself
tries to avoid picking up system dependencies during builds...
**But environments solve this?**
- You could use `include`s in environment files to get similar behavior
  to the SPACK_SYSTEM_CONFIG_PATH example, but environments:
  1) require paths to individual config files, not directories, and
  2) fail if a listed config file does not exist.
- They allow you to override config scopes, but this is generally too
  rigid, as it requires you to repeat the default config, in
  particular packages.yaml, and defeats the point of layered config.
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
Co-authored-by: Steve Leak <sleak@lbl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Any spec satisfying a default will be symlinked to `default`
If multiple specs have modulefiles in the same directory and satisfy
configured module defaults, then whichever was written last will be
default.
Use of `-R` flag to CTest command causes "empty-14" test to run,
by matching "empty", before the empty-14 target is built.
Patch CTest command in buildscript to match name exactly.
* [geant4] depends_on vecgeom@1.1.8:1.1 range
While previous versions were unclear, the [geant4.10.7 release notes](https://geant4-data.web.cern.ch/ReleaseNotes/ReleaseNotes4.10.7.html) indicate that vecgeom@1.1.8 is a minimum required version, not an exact required version ("Set VecGeom-1.1.8 as minimum required version for optional build with VecGeom."). This will allow some more freedom on the concretizer solutions while allowing geant4 to take advantage of bugfixes and improvements in vecgeom.
* [vecgeom] new version 1.1.17
* For py-torch: Also update dependencies: many version constraints
with an upper bound of @1.9 are now open (e.g. `@1.8.0:1.9` is
converted to `@1.8.0:`).
* For py-torchvision: Also add 0.11.0 and update ^pil constraint
to avoid building with 8.3.0
This PR permits to specify the `url` and `ref` of the Spack instance used in a container recipe simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
images:
os: amazonlinux:2
spack:
ref: develop
resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name to a commit sha. When this new ability is leveraged an additional "bootstrap" stage is added, which builds an image with Spack setup and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.
Modifications:
- [x] Permit to pin the version of Spack, either by branch or tag or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit to print the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
* [pkg][new version] Provide eospac@6.5.0 and mark it as default.
* Merge in changes found in #21629
* Mark all alpha/beta versions as deprecated.
- Addresses @sethrj's recommendation
- Also add a note indicated why these versions are marked this way.
1. Currently it prints not just the spec name, but the dependencies +
their variants + their compilers + their architectures + ...
2. It's clear from the context what spec the message applies to, so,
let's not print the spec at all.
These three rules in `concretize.lp` are overly complex:
```prolog
:- not provider(Package, Virtual),
provides_virtual(Package, Virtual),
virtual_node(Virtual).
```
```prolog
:- provides_virtual(Package, V1), provides_virtual(Package, V2), V1 != V2,
provider(Package, V1), not provider(Package, V2),
virtual_node(V1), virtual_node(V2).
```
```prolog
provider(Package, Virtual) :- root(Package), provides_virtual(Package, Virtual).
```
and they can be simplified to just:
```prolog
provider(Package, Virtual) :- node(Package), provides_virtual(Package, Virtual).
```
- [x] simplify virtual rules to just one implication
- [x] rename `provides_virtual` to `virtual_condition_holds`
fixes #26866
This semantics fits with the way Spack currently treats providers of
virtual dependencies. It needs to be revisited when #15569 is reworked
with a new syntax.
* gcc: support runtime ability to not install spack rpaths
Fixes #26582.
* gcc: Fix malformed specs file and add docs
The updated docs point out that the spack-modified GCC does *not*
follow the usual behavior of LD_RUN_PATH!
* gcc: fix bad rpath on macOS
This bug has been around since the beginning of the GCC package file:
the rpath command it generates for macOS writes a single (invalid)
rpath entry.
* gcc: only write rpaths for directories with shared libraries
The original lib64+lib was just a hack for "in case either has one" but
it's easy to tell whether either has libraries that can be directly
referenced.
* py-vermin: add latest version 1.3.1
* Exclude a line from Vermin since the version is already being checked
Vermin 1.3.1 finds that `encoding` kwarg of builtin `open()` requires Python 3+.
* Update py-aiohttp to 3.7.4 and py-chardet to 4.0
* Changes from review
* Update package.py
* Update var/spack/repos/builtin/packages/py-chardet/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The OS should only interpret shebangs, if a file is executable.
Thus, there should be no need to modify files where no execute bit is set.
This solves issues that are e.g. encountered while packaging software as
COVISE (https://github.com/hlrs-vis/covise), which includes example data
in Tecplot format. The sbang post-install hook is applied to every installed
file that starts with the two characters #!, but this fails on the binary Tecplot
files, as they happen to start with #!TDV. Decoding them with UTF-8 fails
and an exception is thrown during post_install.
Co-authored-by: Martin Aumüller <aumuell@reserv.at>
This commit contains changes to support Google Cloud Storage
buckets as mirrors, meant for hosting Spack build-caches. This
feature is beneficial for folks that are running infrastructure on
Google Cloud Platform. On public cloud systems, resources are
ephemeral and in many cases, installing compilers, MPI flavors,
and user packages from scratch takes up considerable time.
Giving users the ability to host a Spack mirror that can store build
caches in GCS buckets offers a clean solution for reducing
application rebuilds for Google Cloud infrastructure.
Co-authored-by: Joe Schoonover <joe@fluidnumerics.com>
* scr/veloc: component releases
Update the ECP-VeloC component packages in preparation for an
upcoming scr@3.0rc2 release.
All
- Add new release versions
- Add new `shared` variant for all components
- Add zlib link dependency to packages that were missing it
- Add maintainers
- Use self.define and self.define_from_variant to clean up cmake_args
axl
- Add independent vendor async support variants
rankstr
- Update older version sha that fails checksum on install
* Fix scr build error
Lock dependencies for scr@3.0rc1 to the versions released at the same
time.
* py-neurokit2: add 0.1.4.1
* Update var/spack/repos/builtin/packages/py-neurokit2/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update cray architecture detection for milan
Update the cray architecture module table with x86-milan -> zen3
Make Cray architecture detection more robust by backing off from the
frontend architecture to a recent ancestor if necessary. This should
make future Cray updates less painful for users.
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Todd Gamblin <gamblin2@llnl.gov>
1. Don't use 16 digits of precision for the seconds; round to 2 digits after the decimal point
2. Don't print if we don't concretize (i.e. `spack concretize` without `-f` doesn't have to tell me it did nothing in `0.00` seconds)
* Speed-up environment concretization with a process pool
We can exploit the fact that the environment is concretized
separately and use a pool of processes to concretize it.
* Add module spack.util.parallel
Module includes `pool` and `parallel_map` abstractions,
along with implementation details for both.
* Add a new hash type to pass specs across processes
* Add tty msg with concretization time
We use POSIX `patch` to apply patches to files when building, but
`patch` by default prompts the user when it looks like a patch
has already been applied. This means that:
1. If a patch lands in upstream and we don't disable it
in a package, the build will start failing.
2. `spack develop` builds (which keep the stage around) will
fail the second time you try to use them.
To avoid that, we can run `patch` with `-N` (also called
`--forward`, but the long option is not in POSIX). `-N` causes
`patch` to just ignore patches that have already been applied.
This *almost* makes `patch` idempotent, except that it returns 1
when it detects already applied patches with `-N`, so we have to
look at the output of the command to see if it's safe to ignore
the error.
- [x] Remove non-POSIX `-s` option from `patch` call
- [x] Add `-N` option to `patch`
- [x] Ignore error status when `patch` returns 1 due to `-N`
- [x] Add tests for applying a patch twice and applying a bad patch
- [x] Tweak `spack.util.executable` so that it saves the error that
*would have been* raised with `fail_on_error=True`. This lets
us easily re-raise it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
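A sketch of the resulting logic with plain `subprocess` (the "already applied" message text varies between `patch` implementations, so treat the string below as illustrative):
```python
import subprocess


def apply_patch(patch_file, cwd):
    """Apply a patch idempotently: -N skips already-applied hunks."""
    result = subprocess.run(
        ["patch", "-N", "-i", patch_file],
        cwd=cwd, capture_output=True, text=True,
    )
    # patch exits 1 when -N detects an already-applied hunk; inspect the
    # output to tell that apart from a genuine failure.
    if result.returncode == 1 and "Skipping patch" in result.stdout:
        return
    result.check_returncode()  # re-raise genuine failures
```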
* py-magic: delete redundant package
This package is actually named py-python-magic (since the project itself
is "python-magic").
* New package: libmagic
* Py-python-magic: add required runtime dependency on libmagic and new version
* Py-filemagic: add required runtime dependency
* py-magic: restore and mark as redundant
This reverts commit 4cab7fb69e.
* file: add implicit dependencies and static variant
Replaces redundant libmagic that I added. Compression headers were previously
being picked up from the system.
* Fix py-python-magic dependency
* Update python version requirements
* relocate: call install_name_tool less
* zstd: fix race condition
Multiple times on my mac, trying to install in parallel led to failures
from multiple tasks trying to simultaneously create `$PREFIX/lib`.
* PackageMeta: simplify callback flush
* Relocate: use spack.platforms instead of platform
* Relocate: code improvements
* fix zstd
* Automatically fix rpaths for packages on macOS
* Only change library IDs when the path is already in the rpath
This restores the hardcoded library path for GCC.
* Delete nonexistent rpaths and add more testing
* Relocate: Allow @executable_path and @loader_path
* downgrade_docutils_version
* invalid version
* Update requirements.txt
* Improve spelling and shorten the reference link
* Update spack.yaml
* update version requirement
* update version to maximum of 0.16
Co-authored-by: bernhardkaindl <43588962+bernhardkaindl@users.noreply.github.com>
* py-jupytext: add new package
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* update jupytext dependencies
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-jupytext: remove py-jupyerlab dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-gevent: add version 1.5
* py-gevent: update dependencies for v1.5.0
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-gevent/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* phist: Prefer 1.9.5 (1.9.6 uses mpi_f08, but not available in CI)
* phist: remove dupe of 1.9.5, missing preferred=True
Also, for 1.9.6, patch the (most, one does not work) tests to use
Currently Spack keeps track of the origin in the code of any
modification to the environment variables. This is very slow
and enabled unconditionally even in code paths where the
origin of the modification is never queried.
The only place where we inspect the origins of environment
modifications is before we start a build, to check if there's an override
of the type `e.set(...)` after incremental changes like
`e.append_path(..)`, which is a "suspicious" change.
This is very rare though.
If an override like this ever happens, it might mean a package is
broken. If that leads to build errors, we can just ask the user to run
`spack -d install ...` and check the warnings issued by Spack to find
the origins of the problem.
UPP and ncep-post are the same package, so this PR
removes the duplication.
ncep-post was originally named after the upstream repo
that now changed its name to UPP.
It can be frustrating to successfully run `spack test run --alias <name>` only to find you cannot get the results because you already use `<name>` in some previous stand-alone test execution. This PR prevents that from happening.
Primary fix:
Due to a typo in a version range, overlapping PR merges resulted
in a build failure of the latest version:
Don't attempt to remove a non-existing file for version 1.9.6.
Secondary fixes:
update_tpetra_gotypes.patch was mentioned twice, and the version
range has to exclude @1.4.2, to which it cannot be applied.
Add depends_on() py-pytest, py-numpy and pkgconfig with type='test'
@:1.9.0 fails with 'Rank mismatch' with gfortran@10:; add a conflicts().
raise InstallError('~mpi not possible with kernel_lib=builtin!')
when applicable.
Fixes for spack install --test=root phist:
mpiexec -n12 puts a lot of stress on a pod and gets stuck in a loop
very often: Reduce the mpiexec procs and the number of threads.
Remove @run_after('build') @on_package_attributes(run_tests=True):
from 'def check()': this fixes it getting called twice
The build script of 'make test_install' for the installcheck expects
the examples to be copied to self.stage.path: Provide them.
* updating the recipe with improvements
* addressing the suggestions received from reviewers
* adding package helper macros
Co-authored-by: mohan002 <mohbabul@amd.com>
Using the Spec.constrain method doesn't work since it might
trigger a repository lookup which could break our directives
and triggers a circular import error.
To fix that we introduce a function to merge abstract anonymous
specs, based only on package names, which does not perform any
lookup in the repository.
Add missing pkgconfig to openslide and its dep perl-alien-libxml2.
Fix shared-mime-info to be a runtime dependency of gdk-pixbuf;
otherwise, configure cannot detect and use gdk-pixbuf without errors.
* SEACAS: add a Faodel variant
* Use safer CMake and variant packages instead of directly adding parameters
Add a "+faodel ~mpi" dependency to balance "+faodel +mpi"
The buildcache is now extracted in a temporary folder within the current store,
moved to its final place and relocated.
"spack clean -s" has been extended to also clean the temporary extraction directory.
Add hardlinks with absolute paths for libraries in the corge, garply and quux packages
to detect incorrect handling of hardlinks in tests.
Problem: Flux expects the `FLUX_PMI_LIBRARY_PATH` to point directly at
the `libpmi.so` installed by Flux. When the env var is unset,
prepending to it results in this behavior. In the rare case that the
env var is already set, then the spack `libpmi.so` gets prepended with a
`:`, which Flux then attempts to interpret as a single path.
Solution: don't prepend to the path, instead set the path to point to
the `libpmi.so` (which will be undone when Flux is unloaded).
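A sketch of the corrected run environment setup (the install layout of `libpmi.so` is assumed for the example):
```python
import os


def setup_run_environment(self, env):
    # FLUX_PMI_LIBRARY_PATH holds a single file path, not a search path,
    # so set it outright instead of prepending (which would add a ":").
    pmi = os.path.join(self.prefix.lib, "flux", "libpmi.so")  # assumed layout
    env.set("FLUX_PMI_LIBRARY_PATH", pmi)
```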
* flux-core: remove deprecated environment variables
The earliest checksummed version in this package is 0.15.0. As of
0.12.0, wreck (and its associated paths) no longer exist in Flux. As of
0.13.0, the `FLUX_RCX_PATH` variables are no longer used. So clean up
these env vars from the `setup_run_environment`.
gromacs@2018:2020.6 is fixed to build with gcc@11.2.0
by adding #include <limits> to a few header files.
Thanks to Maciej Wójcik <w8jcik@gmail.com> for testing versions.
The `find` command was missing from the examples forcing colorized output. Without this (or another suitable) command, spack produces output that does not use any color. Thus, without the `find` command one sees no difference between forced colorized and non-colorized output.
There was a bug in 2.36.* of missing Makefile dependencies. The
previous workaround was to require 2.36 to be built serially. This is
now fixed upstream in 2.37 and this PR adds the patch to restore
parallel make to 2.36.
* py-niworkflows: add new package
* Update var/spack/repos/builtin/packages/py-niworkflows/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove unnecessary comment
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-nistats: add new package
* Update var/spack/repos/builtin/packages/py-nistats/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove `conflicts`
* remove test dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When deployed on Kubernetes, the server sends back permanent redirect responses.
This is elegantly handled by the requests library, but not by urllib, which we
have to use here, so I have to handle it manually by parsing the exception to
get the Location header, and then retrying the request there.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* packages/phist, re #26002: force phist to use MPI compiler wrappers (copied from trilinos package)
* packages/phist re #26002, use cmake-provded FindMPI module only
* packages/phist source code formatting
* packages/phist: set MPI_HOME rather than MPI_BASE_DIR, thanks @sethri.
* phist: delete own FindMPI.cmake for older versions (rather than patching it away)
* packages/phist: remove blank line
* phist: adjust sorting of imports
* phist: change order of imports
The ASP-based solver maximizes the number of values in multi-valued
variants (if other higher order constraints are met), to avoid cases
where only a subset of the values that have been specified on the
command line or imposed by another constraint are selected.
Here we swap the priority of this optimization target with the
selection of the default providers, to avoid unexpected results
like the one in #26598
Seems like https://bugs.python.org/issue29699 is relevant. Better to
just ignore errors when removing the tmpdir. The OS will remove it
anyway.
Errors are happening randomly from tests that are using this fixture.
TL;DR: there are matching groups trying to match 1 or more occurrences of
something. We don't use the matching group. Therefore it's sufficient to test
for 1 occurrence. This reduce quadratic complexity to linear time.
---
When parsing logs of an mpich build, I'm getting a 4 minute (!!) wait
with 16 threads for regexes to run:
```
In [1]: %time p.parse("mpich.log")
Wall time: 4min 14s
```
That's really unacceptably slow...
After some digging, it seems a few regexes tend to have `O(n^2)` scaling
where `n` is the string / log line length. I don't think they *necessarily*
should scale like that, but it seems that way. The common pattern is this
```
([^:]+): error
```
which matches `: error` literally, and then one or more non-colons before that. So
for a log line like this:
```
abcdefghijklmnopqrstuvwxyz: error etc etc
```
Any of these are potential group matches when using `search` in Python:
```
abcdefghijklmnopqrstuvwxyz
bcdefghijklmnopqrstuvwxyz
cdefghijklmnopqrstuvwxyz
⋮
yz
z
```
but clearly the capture group should return the longest match.
My hypothesis is that Python has a very bad implementation of `search`
that somehow considers all of these, even though it can be implemented
in linear time by scanning for `: error` first, and then greedily expanding
the longest possible `[^:]+` match to the left. If Python indeed considers
all possible matches, then with `n` matches of length `1 .. n` you
see the `O(n^2)` slowness (I verified this by replacing `+` with `{1,k}`
and doubling `k`: the execution time doubles as well).
This PR fixes this by removing the `+`, so effectively changing the
O(n^2) into a O(n) worst case.
The reason we are fine with dropping `+` is that we don't use the
capture group anywhere, so, we just ensure `:: error` is not a match
but `x: error` is.
After going from O(n^2) to O(n), the 15MB mpich build log is parsed
in `1.288s`, so about 200x faster.
Just to be sure I've also updated `^CMake Error.*:` to `^CMake Error`,
so that it does not match with all the possible `:`'s in the line.
Another option is to use `.*?` there to make it quit scanning as soon as
possible, but what line that starts with `CMake Error` that does not have
a colon is really a false positive...
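The behavior is easy to reproduce outside the log parser; a small self-contained timing sketch (absolute numbers will vary):
```python
import re
import time

# A long line with a colon but no ": error" afterwards is the worst case:
# re.search retries the pattern from every starting offset.
line = "x" * 10000 + ":" + "y" * 10000

for pattern in (r"([^:]+): error", r"[^:]: error"):
    start = time.time()
    re.search(pattern, line)
    print(f"{pattern!r}: {time.time() - start:.3f}s")
# The unbounded group backtracks at every offset (O(n^2) overall); the
# single-character version does a plain linear scan.
```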
Installing packages with a lot of dependencies does not have an easy way
of judging the current progress (apart from running `spack spec -I pkg`
in another terminal). This change allows Spack to update the terminal's
title with status information, including its current progress as well as
information about the current and total number of packages.
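Terminal titles are set with a standard escape sequence; a minimal illustration of the mechanism (not the exact Spack code):
```python
import sys


def set_terminal_title(text):
    # OSC 0 ("ESC ] 0 ; title BEL") sets the window title in most
    # xterm-compatible terminals.
    if sys.stdout.isatty():
        sys.stdout.write(f"\033]0;{text}\007")
        sys.stdout.flush()


set_terminal_title("Installing 12/48: openmpi")  # illustrative status line
```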
* kahip: update to cmake for v3.11, retain scons for older versions
* kahip: update build system to cmake for v3.11, retain SCons for older versions
* address PR comments and add maintainer
* address PR comments - correct version to 2.10, add deprecated and url, and remove scons version
- Do not store the full list of environment variables in
<prefix>/.spack/spack-build-env.txt because it may contain user secrets.
- Only store environment variable modifications upon installation.
- Variables like PATH may still contain user and system paths to make
spack-build-env.txt sourceable. Variables containing paths are
modified through prepending/appending, and if we don't apply these
to the current environment variable, we end up with statements like
`export PATH=/path/to/spack/bin` with system paths missing, meaning
no system binaries are in the path, which is a bad user experience.
- Do write the full environment to spack-build-env.txt in the staging dir,
but ensure it is readonly for the current user, to make it a bit safer
on shared systems.
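The last point can be sketched like this (path and file format assumed for the example):
```python
import os
import stat


def dump_environment_readonly(path):
    """Write the full environment, readable only by the current user."""
    with open(path, "w") as f:
        for name, value in sorted(os.environ.items()):
            f.write(f"export {name}={value}\n")
    os.chmod(path, stat.S_IRUSR)  # 0o400: no group/other access
```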
Creates an environment in a temporary directory and activates it, which
is useful for a quick ephemeral environment:
```
$ spack env activate -p --temp
[spack-1a203lyg] $ spack add zlib
==> Adding zlib to environment /tmp/spack-1a203lyg
==> Updating view at /tmp/spack-1a203lyg/.spack-env/view
```
PR #25904 moved the `--with-tcl` option to only older versions. However,
without this option, the build breaks:
```
checking for Tcl configuration... configure: error: Can't find Tcl configuration definitions. Use --with-tcl to specify a directory containing tclConfig.sh
```
The DB should be what is trusted for certain operations.
If it is not present when read we should assume the
corresponding store is empty, rather than trying a
write operation during a read.
* Add a unit test
* Document what needs to be there in tests
* py-twisted,py-storm: dep on zope.interface, bump storm version
py-twisted and py-storm's import tests need zope.interface.
py-storm: Use pypi and add version 0.25. It didn't change reqs.
zope.interface@4.5.0 imports `Feature`, which newer setuptools removed: use setuptools@:45
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-storm: all deps updated with type=('build', 'run')
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-bcrypt, py-bleach, py-decorator, py-pygdal: fix python dependency
* Update var/spack/repos/builtin/packages/py-bleach/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-matplotlib: fix 3.4.3
* Update var/spack/repos/builtin/packages/py-matplotlib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-keras-preprocessing: Add missing deps: six@1.9.0: and numpy@1.9.1:
Add deps: pip download --no-binary :all: keras-preprocessing==1.1.2
Collecting numpy>=1.9.1
Installing build dependencies: started
Collecting six>=1.9.0
* Update var/spack/repos/builtin/packages/py-keras-preprocessing/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When a symlink to a license file exists but is broken, the license symlink post-install hook fails
because os.path.exists() checks the existence of the target not the symlink itself.
os.path.lexists() is the proper function to use.
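The distinction in a few lines: `os.path.lexists` is true for a dangling symlink, while `os.path.exists` follows the link and reports on the (missing) target:
```python
import os

os.symlink("/nonexistent/target", "license.lnk")  # create a broken symlink
print(os.path.exists("license.lnk"))   # False: the target is missing
print(os.path.lexists("license.lnk"))  # True: the symlink itself exists
```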
* visit: add an external find function (determine_version)
* visit: correct too long comment line
* visit: forgot to set executables
* visit: external find uses single-dash version
* visit: when found as external, ask visit for its version
* fish: adding version 3.3.1
* adding maintainer
* Update var/spack/repos/builtin/packages/fish/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The most recent release of numactl (2.0.14) fails to build on riscv64
because of a missing "-latomic". This patch from upstream resolves this
issue. It can be dropped once the next version of numactl is released.
Environments push/pop scopes upon activation. If some lazily
evaluated value depending on the current configuration was
computed and cached before the scopes are pushed / popped
there will be an inconsistency in the current state.
This PR fixes the issue for stores, but it would be better
to move away from global state.
The `spack.architecture` module contains an `Arch` class that is very similar to `spack.spec.ArchSpec` but points to platform, operating system and target objects rather than "names". There's a TODO in the class since 2016:
abb0f6e27c/lib/spack/spack/architecture.py (L70-L75)
and this PR basically addresses that. Since there are just a few places where the `Arch` class was used, here we query the relevant platform objects where they are needed directly from `spack.platforms`. This permits to clean the code from vestigial logic.
Modifications:
- [x] Remove the `spack.architecture` module and replace its use by `spack.platforms`
- [x] Remove unneeded tests
* Use gnuconfig package for config file replacement for RISC-V.
This extends the changes in #26035 to handle RISC-V. Before this change,
many packages fail to configure on riscv64 due to config.guess being too
old to know about RISC-V. This is seen out of the box when clingo fails
to build from source due to pkgconfig failing to configure, throwing
error: "configure: error: cannot guess build type; you must specify one".
* Add riscv64 architecture
* Update vendored archspec from upstream project.
These archspec updates include changes needed to support riscv64.
* Update archspec's __init__.py to reflect the commit hash of archspec being used.
Cherry-picked from #25564 so this is standalone.
With this PR we can activate an environment in Spack itself, without computing changes to environment variables only necessary for "shell aware" env activation.
1. Activating an environment:
```python
spack.environment.activate(Environment(xyz)) -> None
```
this basically just sets `_active_environment` and modifies some config scopes.
2. Activating an environment **and** getting environment variable modifications for the shell:
```python
spack.environment.shell.activate(Environment(xyz)) -> EnvironmentModifications
```
This should make it easier/faster to do unit tests and scripting with spack, without the cli interface.
* Isolate bootstrap configuration from user configuration
* Search for build dependencies automatically if bootstrapping from sources
The bootstrapping logic will search for build dependencies
automatically if bootstrapping anything from sources. Any
external spec, if found, is written in a scope that is specific
to bootstrapping.
* Don't clean the bootstrap store with "spack clean -a"
* Copy bootstrap.yaml and config.yaml in the bootstrap area
Reverting from CMake to Make install caused
`-install_path=/usr/local/lib/libzstd.1.dylib` to be hardcoded into the
zstd library. Now we explicitly pass the PREFIX into the build command so that
the correct spack install path is saved.
Fixes #26438 and also the ROOT install issue I had :)
- [x] Our wrapper error messages are sometimes hard to differentiate from other build
output, so prefix all errors from `die()` with '[spack cc] ERROR:'
- [x] The error we raise when running, say, `fc` without a Fortran compiler was not
clear enough. Clarify the message and the comment.
This converts everything in cc to POSIX sh, except for the parts currently
handled with bash arrays. Tests are still passing.
This version tries to be as straightforward as possible. Specifically, most conversions
are kept simple -- convert ifs to ifs, handle indirect expansion the way we do in
`setup-env.sh`, only mess with the logic in `cc`, and don't mess with the python code at
all.
The big refactor is for arrays. We can't rely on bash's nice arrays and be ignorant of
separators anymore. So:
1. To avoid complicated separator logic, there are three types of lists. They are:
* `$lsep`-separated lists, which end with `_list`. `lsep` is customizable, but we
picked `^G` (alarm bell) for `$lsep` because it's ASCII and it's unlikely that it
would actually appear in any arguments. If we need to get fancier (and I will lose
faith in the world if we do) then we could consider XON or XOFF.
* `:`-separated directory lists, which end with `_dirs`, `_DIRS`, `PATH`, or `PATHS`
* Whitespace-separated lists (like flags), which can have any other name.
Whitespace and colon-separated lists come with the territory with PATHs from env
vars and lists of flags. `^G` separated lists are what we use for most internal
variables, b/c it's more likely to work.
2. To avoid subshells, use a bunch of functions that do dirty `eval` stuff instead. This
adds 3 functions to deal with lists:
* `append LISTNAME ELEMENT [SEP]` will put `ELEMENT` at the end of the list called
`LISTNAME`. You can optionally say what separator you expect to use. Note that we
are taking advantage of everything being global and passing lists by name.
* `prepend LISTNAME ELEMENT [SEP]` like append, but puts `ELEMENT` at the start of
`LISTNAME`
* `extend LISTNAME1 LISTNAME2 [PREFIX]` appends everything in LISTNAME2 to
LISTNAME1, and optionally prepends `PREFIX` to every element (this is useful
for things like `-I`, `-isystem `, etc.).
* `preextend LISTNAME1 LISTNAME2 [PREFIX]` prepends everything in LISTNAME2 to
LISTNAME1 in order, and optionally prepends `PREFIX` to every element.
The routines determine the separator for each argument by its name, so we don't have to
pass around separators everywhere. Amazingly, as long as you do not expand variables'
values within an `eval` environment, you can do all this and still preserve quoting.
When iterating over lists, the user of this API still has to set and unset `IFS`
properly.
We ended up having to ignore shellcheck SC2034 (unused variable), because using evals
all over the place means that shellcheck doesn't notice that our list variables are
actually used.
So far this is looking pretty good. I took the most complex unit test I could find
(which runs a sample link line) and ran the same command line 200 times in a shell
script. Times are roughly as follows:
For this invocation:
```console
$ bash -c 'time (for i in `seq 1 200`; do ~/test_cc.sh > /dev/null; done)'
```
I get the following performance numbers (the listed shells are what I put in `cc`'s
shebang):
**Original**
* Old version of `cc` with arrays and `bash v3.2.57` (macOS builtin): `4.462s` (`.022s` / call)
* Old version of `cc` with arrays and `bash v5.1.8` (Homebrew): `3.267s` (`.016s` / call)
**Using many subshells (#26408)**
* with `bash v3.2.57`: `25.302s` (`.127s` / call)
* with `bash v5.1.8`: `27.801s` (`.139s` / call)
* with `dash`: `15.302s` (`.077s` / call)
This version didn't seem to work with zsh.
**This PR (no subshells)**
* with `bash v3.2.57`: `4.973s` (`.025s` / call)
* with `bash v5.1.8`: `4.984s` (`.025s` / call)
* with `zsh`: `2.995s` (`.015s` / call)
* with `dash`: `1.890s` (`.0095s` / call)
Dash, with the new posix design, is easily the winner.
So there are several interesting things to note here:
1. Running the posix version in `bash` is slower than using `bash` arrays. That is to be
expected because it's doing a bunch of string processing where it likely did not have
to before, at least in `bash`.
2. `zsh`, at least on macOS, is significantly faster than the ancient `bash` they ship
with the system. Using `zsh` with the new version also makes the posix wrappers
faster than `develop`. So it's worth preferring `zsh` if we have it. I suppose we
should also try this with newer `bash` on Linux.
3. `bash v5.1.8` seems to be significantly faster than the old system `bash v3.2.57` for
arrays. For straight POSIX stuff, it's a little slower. It did not seem to matter
whether `--posix` was used.
4. `dash` is way faster than `bash` or `zsh`, so the real payoff just comes from being
able to use it. I am not sure if that is mostly startup time, but it's significant.
`dash` is ~2.4x faster than the original `bash` with arrays.
So, doing a lot of string stuff is slower than arrays, but converting to posix seems
worth it to be able to exploit `dash`.
- [x] Convert all but array-related portions to sh
- [x] Fix basic shellcheck issues.
- [x] Convert arrays to use a few convenience functions: `append` and `extend`
- [x] Get `cc` tests passing.
- [x] Add `cc` tests where needed passing.
- [x] Benchmarking.
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
* Add version 0.12.1
* Add variant to build with C++11 standard
build with c++11 standard requires boost threads, and needs explicit setting of
CMAKE_CXX_STANDARD
* intel-tbb: install pkgconfig file
* intel-tbb: install pkgconfig file when @:2021.2.0
* intel-tbb: add blank line
* intel-tbb: fix library name to refer
* intel-tbb: fix library name to refer again
* intel-tbb: use self.prefix.lib.pkgconfig
From the gnupg.org website:
> GnuPG 1.4 is the old, single binary version which still support the
> unsafe PGP-2 keys. However, it lacks many modern features and will
> receive only important updates.
I'm starting to appreciate gpg1 more, because it is relocatable (gpg2
has hard-coded paths to gpg-agent and other tools) and it does not
require gpg-agent at all.
* graph500: added option -fcommon for gcc@10.2:, otherwise failed to build with "multiple definition of `column'"
* graph500: moved setting cflag to flag_handler
Work around issues in older hdf5 build and overzealous build flags:
```
>> 1420 /var/folders/j4/fznvdyhx4875h6fhkqjn2kdr4jvyqd/T/9te/spack-stage/spack-stage-hdf5-1.10.4-feyl6tz6hpx5kl7m33avpuacwje2ubul/spack-src/src/H5Odeprec.c:141:8: error: implicit declaration of function 'H5CX_set_apl' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
```
The older patch does not apply so the build ends up failing:
```
1539 In file included from /private/var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/s3j/spack-stage/spack-stage-python-3.8.11-6jyb6sxztfs6fw26xdbc3ktmbtut3ypr/spack-src/Modules/_tkinter.c:48:
>> 1540 /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/tk.h:86:11: fatal error: 'X11/Xlib.h' file not found
1541 # include <X11/Xlib.h>
1542 ^~~~~~~~~~~~
1543 1 error generated.
```
* Explicitly set path to Kokkos for ArborX testing
* Improve formatting
* Update var/spack/repos/builtin/packages/arborx/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* Remove blank line
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* -fallow-argument-mismatch flag added when compiling with GCC to avoid a compilation error when using a GCC version > 10.0.
Co-authored-by: Haz99 <jsalamerosanz@gmail.com>
* Filtered every occurrence of "!$OMP SIMD SAFELEN(LVEC2)" when compiling with nvhpc to avoid a compilation error.
Co-authored-by: Haz99 <jsalamerosanz@gmail.com>
* Line with more than 80 characters split into multiple lines.
Co-authored-by: Haz99 <jsalamerosanz@gmail.com>
When using modules for compiler (and/or external package), if a
package's `setup_[dependent_]build_environment` sets `PYTHONHOME`, it
can influence the python subprocess executed to gather module
information.
The error seen was:
```
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
But the actual hidden error happened in the `python -c 'import
json...'` subprocess, which made it return an empty string as json:
```
ModuleNotFoundError: No module named 'encodings'
```
This fix uses `python -E` to ignore `PYTHONHOME` and
`PYTHONPATH`. Should be safe here because the python subprocess code
only use packages built-in python.
The python subprocess in `environment.py` was also patched to be safe
and consistent.
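A sketch of the hardened subprocess call (the introspection payload here is illustrative):
```python
import json
import subprocess
import sys

# -E makes the interpreter ignore PYTHONHOME and PYTHONPATH, so values
# set by a compiler or package module cannot break stdlib imports.
output = subprocess.check_output(
    [sys.executable, "-E", "-c",
     "import json, sys; print(json.dumps(sys.path))"]
)
print(json.loads(output))
```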
* Remove redundant preserve environment code in build environment
* Remove fix for a bug in a module
See https://github.com/spack/spack/issues/3153#issuecomment-280460041,
this shouldn't be part of core spack.
* Don't module unload cray-libsci on all platforms
Spack has logic to preserve an installation prefix when it is being
overwritten: if the new install fails, the old files are restored.
This PR adds error handling for when this backup restoration fails
(i.e. the new install fails, and then some unexpected error prevents
restoration from the backup).
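The shape of that error handling, as a rough sketch (function and directory names are assumptions for the example):
```python
import shutil


def overwrite_install(spec, installer, backup_dir, prefix):
    """Reinstall ``spec``; restore the backed-up prefix if that fails."""
    try:
        installer(spec)
    except Exception:
        try:
            shutil.rmtree(prefix, ignore_errors=True)
            shutil.copytree(backup_dir, prefix, symlinks=True)
        except Exception as restore_error:
            # Both the new install and the restoration failed: make the
            # second, unexpected failure visible instead of swallowing it.
            raise RuntimeError(
                f"could not restore backup of {prefix}") from restore_error
        raise  # restoration succeeded; re-raise the original install error
```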
* Remove vestigial code to be compatible with Spack v0.9.X
* ArchSpec: reworked __repr__ to be more adherent to common Python idioms
* ArchSpec: simplified __init__.py and copy()
closes #26354 and #26358
Previously we did not pass paths for GDB or GMP and ./configure would
get confused about which one to pull from. Be more specific.
Built with all variants enabled and fixed the fixable versions and variants:
@:8.1 were fixable by limiting python versions for these to @:3.6.
7.10.1 and 7.11(.1) were fixable to build with glibc-2.25 and newer
using two long patches.
gdb 7.8 and 7.9 weren't fixable as there is no backport of the fix
to build these with glibc-2.25 and newer:
http://lists.busybox.net/pipermail/buildroot/2017-March/188055.html
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Modifications:
- Modify the workflow to build container images without pushing when the workflow file itself is modified
- Strip the leading ghcr.io/spack/ from env.container env.versioned to prepare pushing to multiple registries
- Fixed CentOS 7 and Amazon Linux builds
- Login and push to Docker Hub as well as Github Action
- Add a badge to README.md with the status of docker images
- Specify CMake minimum version more precisely
- Ensure rocBLAS is available at build time
- Limit workaround for missing rocblas include path
to the only affected version (4.1.0)
- Make hip a build and link dependency
- Remove hip's link dependencies
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
CMake 3.21.3 disables the broken hipcc-as-rocmclang detection again.
From the release notes:
> The AMD ROCm Platform hipcc compiler was identified by CMake 3.21.0
> through 3.21.2 as a distinct compiler with id ROCMClang. This has been
> removed because it caused regressions. Instead:
> * hipcc may no longer be used as a HIP compiler because it interferes
> with flags CMake needs to pass to Clang. Use Clang directly.
> * hipcc may once again be used as a CXX compiler, and is treated as
> whatever compiler it selects underneath, as CMake 3.20 and below
> did.
* py-snappy: add patch to fix dependencies
* Update var/spack/repos/builtin/packages/py-snappy/req.patch
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-jupyter-packaging: add 0.7.12 and 0.10.6
* Update var/spack/repos/builtin/packages/py-jupyter-packaging/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-jupyter-packaging/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The format of the HPE/Cray supplied module for cray-mvapich2 on HPE apollo systems is
very different from the cray-mpich module supplied on Cray EX and XE
systems.
Recent changes to the cray-mpich package -
https://github.com/spack/spack/pull/23470
broke support for cray-mvapich2 and relies now on the structure of the
cray-mpich module to work properly.
Rather than try to support two very different vendor mpich modules
using the same spack package, just add another one specialized for
the cray-mvapich2 module.
Signed-off-by: Howard Pritchard <hppritcha@gmail.com>
1. Changes the variant of openssl to `certs=mozilla/system/none` so that
users can pick whether they want Spack or system certs, or if they
don't want certs at all.
2. Keeps the default behavior of openssl to use certs=system.
3. Changes the curl configuration to not guess the ca path during
config, but rather fall back to whatever the tls provider is
configured with. If we don't do this, curl will still pick up system
certs if it finds them.
As a minor fix, it also adds the build dep `pkgconfig` to curl, since
that's being used during the configure phase to get openssl compilation
flags.
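For illustration, the certificate policy could then be pinned in `packages.yaml` (a usage sketch of the variant described above):
```yaml
packages:
  openssl:
    variants: certs=system   # or certs=mozilla / certs=none
```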
* py-mock: fix depends of `@:2.0.0` and bump version
fixes the build of `py-gsutil`, it depends on `'py-mock@:2.0.0'`.
* Update var/spack/repos/builtin/packages/py-mock/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-mock/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Apply the other requested changes
* Add requested change: Add the python@3.6 for newer versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* seacas: new release and fixes for metis/parmetis
* Update to add sha256 checksum for latest seacas release
* Updated the documentation strings with new applications
* Fixed the metis/parmetis variants and logic depending on whether mpi
is enabled/disabled. (There is still a zoltan issue I need to fix,
but this will at least allow seacas to be built without
metis/parmetis or with +mpi+parmetis. The ~mpi+metis still needs
work elsewhere.)
* Enable cpup, slice, zellij in +applications
NetCDF-4.8.1 has been released.
As discussed in https://github.com/Unidata/netcdf-c/issues/2110
(netcdf-c-4.8.1.tar.gz not on ftp site... #2110), the canonical
download site for netCDF releases has been changed and the previous
ftp site is no longer available.
This PR updates the `url` to point to the new recommended download
site and updates the sha256 checksums for the new tar files.
* py-anuga: add git main version to support build with python@3.5:
py-anuga's main branch has been converted to Python-3 recently.
* py-triangle: use pypi, py-anuga: Fixed depends and test suite works now
* py-numba: add 0.54.0 and restrict old dependencies
* Update var/spack/repos/builtin/packages/py-numba/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* OpenSSL 3.0.0
* Remove openssl constraint in e4s to test 3.0.0
* Restrict openssl
* Restrict openssl to @:1 in unifyfs
* Revert "Remove openssl constraint in e4s to test 3.0.0"
This reverts commit 0f0355609771764280ab1b6a523c80843a4f85d6.
* Prefer 1.x
The logic to perform detection of already installed
packages has been extracted from cmd/external.py
and put into the spack.detection package.
In this way it can be reused programmatically for
other purposes, like bootstrapping.
The new implementation accounts for cases where the
executables are placed in a subdirectory within <prefix>/bin
The build needs `pkgconfig` and `openssl`; `m4` is already added by `autoconf`.
Also add the current version of `libp11` to the list of versions.
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@ait.ac.at>
* Use gnuconfig package for config file replacement
Currently the autotools build system tries to pick up config.sub and
config.guess files from the system (in /usr/share) on arm and power.
This introduces an implicit system dependency which we can avoid by
distributing config.guess and config.sub files in a separate package,
such as the new `gnuconfig` package, which is very lightweight/text only
(unlike automake, from which we previously pulled these files as a
backup). This PR adds `gnuconfig` as an unconditional build dependency
for arm and power archs.
In case the user needs a system version of config.sub and config.guess,
they are free to mark `gnuconfig` as an external package with the prefix
pointing to the directory containing the config files:
```yaml
gnuconfig:
  externals:
  - spec: gnuconfig@master
    prefix: /tmp/tmp.ooBlkyAKdw/lol
  buildable: false
```
Apart from that, this PR gives some better instructions for users when
replacing config files goes wrong.
* Mock needs this package too now, because autotools adds a depends_on
* Add documentation
* Make patch_config_files a prop, fix the docs, add integrations tests
* Make macOS happy
* py-datalad: move datalad wtf test over from py-datalad-metalad
* Update var/spack/repos/builtin/packages/py-datalad/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [vbfnlo] Add doc variant to toggle building of docs
* [openloops] Add scons to dependencies
Make sure that the build process does not accidentally pick up an
unsuitable scons version from the underlying system
* [openloops] Set OLPYTHON to make sure the right scons is picked
* [openloops] Fix Flake8 style complaints
Work around this compile error from gcc by adding -Wno-narrowing for it:
spindle_logd.cc:65:76: error: narrowing conversion of '255' from 'int' to 'char'
spindle_logd.cc:65:76: error: narrowing conversion of '223' from 'int' to 'char'
spindle_logd.cc:65:76: error: narrowing conversion of '191' from 'int' to 'char'
spindle 0.8.1 wants to compile its tests against mpi.h, and newer versions need mpicc;
thus add depends_on("mpi"). Spindle supports --no-mpi to disable MPI.
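Such a workaround is typically applied through a package's `flag_handler`; a minimal sketch under assumed names (not necessarily the actual spindle package code):
```python
# Hedged sketch: inject -Wno-narrowing into C++ compile flags for GCC builds.
def flag_handler(self, name, flags):
    if name == "cxxflags" and self.spec.satisfies("%gcc"):
        flags.append("-Wno-narrowing")
    # (injected flags, environment flags, build-system flags)
    return (flags, None, None)
```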
Fix the perl test case bug Perl/perl5#15544:
a PATH variable longer than 1000 characters (as is usual with spack) fails a perl test case.
The fix: don't test PATH in the perlbug.t test case.
This fixes `spack install --test=all` for specs that trigger a build and test of perl!
"Long long" is the default type when building trilinos on its own, and
many downstream packages (both in and out of spack) rely on it. E4S
already sets this explicitly to long_long.
When using the ONNX package inside an environment that specifies a
python3 executable, it will attempt to use a system-installed
version. This can lead to a failure where the system python and the
environment python don't agree, and the system python ends up with an
invalid environment. This forces ONNX to use the same version of python as
the rest of the spec.
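A hedged sketch of the idea (assumed method body, not necessarily the exact ONNX package code): point the build at the spec's python rather than whatever python3 is on PATH.
```python
def cmake_args(self):
    # Hedged sketch: pin CMake to the python interpreter from the spec.
    return [self.define("PYTHON_EXECUTABLE", self.spec["python"].command.path)]
```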
Co-authored-by: Greg Becker <becker33@llnl.gov>
Fix solving/concretizing candle-benchmarks:
py-theano: the requested variant +gpu is now named +cuda
opencv: the requested variants +python and +zlib are now fixed deps
- Match failed autotest tests, which show the word "FAILED" near the end
- Match "FAIL: ", "FATAL: ", "failed ", "Failed test" from other suites
- In autotest, " ok"$ means the test passed, independent of the text before it.
- autoconf messages showing missing tools are fatal later; show them.
dropwatch is a network packet drop checker, and its `make check` starts
a daemon which does not terminate.
- Skip this test so it does not block builds.
- Add depends_on('pkgconfig', type='build'):
  it is needed in case the host does not have pkg-config installed.
- Remove the depends_on('m4', type='build'):
  the depends_on('autoconf', type='build') pulls in m4 as it needs it.
When using Ubuntu's gcc-8.4.0 on Ubuntu 18.04 to compile rivet-3.1.3,
compilation errors related to UnstableParticles() and "UFS" show up.
Compilation with this compiler is fixed in rivet-3.1.4, so add it.
Adding type='link' to the 'hepmc' dependency fixes
the tests to find libHepMC.so.4 in `spack install --tests=all`
Co-authored-by: Valentin Volkl <valentin.volkl@cern.ch>
The cairo test suite is huge, has many backends, and the README states
that running and passing it is not a goal for normal users. It has
so many dependencies on the system, including fonts, that passing it
is not realistically within reach soon:
skip it, as it takes far too long to be practical.
Despite the patch disabling installation of rules, meson's setup
stage looks up the udev package to get `/lib/udev/rules.d`; but as
spack has no `systemd/udev` package, it would fail to build.
Fix such builds by passing `-Dudevrulesdir`, and bump the version to 3.10.5.
* autotoolspackage.rst: No depends_on('m4') with depends_on('autoconf')
- Remove `m4` from the example depends_on() lines for the autoreconf phase.
- Change the branch used as example from develop to master as it is
far more common in the packages of spack's builtin repo.
- Fix the wrong info that libtoolize and aclocal are run explicitly
  in the autoreconf phase by default: autoreconf calls these internally
  as needed, so autotools.py also does not call them directly.
- Add that autoreconf() also adds -I<aclocal-prefix>/share/aclocal.
- Add an example how to set autoreconf_extra_args.
- Add an example of a custom autoreconf phase for running autogen.sh.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Based on the original script by R. Mijakovic, further improvements to the
GPI-2 installation: in particular, different official versions,
configuration setups, and even testing. Importantly, the non-autotools
installation of older versions is also handled, which is
relevant for some packages using GPI-2.
Co-authored-by: Arcesio Castaneda Medina <arcesio.castaneda.medina@itwm.fraunhofer.de>
xload failed with unresolved references to libintl functions:
disabled its use of gettext calls and added the last "new" version.
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@ait.ac.at>
The hand-written `configure` script of this package does not handle
--without-<feature> at all. The source wants to use `lhapdf` headers
even if lhapdf support is not requested via `--with-lhapdf`.
Enable the `lhapdf` variant by default: it fixes the build of the
current package and provides the TauSpinner feature as well.
Fix the build for normal non-root/non-system-user builds, as we cannot
know that we'd have to uninstall these files even when installed as root.
Also add `pkgconfig` and remove `depends_on('m4')`, which is not explicitly needed.
* update the Tau package to use the correct ROCm dependencies and prefixes
First: when the rocm variant is selected, tau defaults to looking for ROCm
in /opt/rocm, which is not guaranteed to be the correct location. This has
been fixed to provide the prefix of hsa-rocr-dev (which is now a dependency
when +rocm is selected).
Second: the rocprofiler dependency package was not specified correctly; it
should be called rocprofiler-dev, and rocprofiler-dev is a dependency when
+rocprofiler is selected.
Also added roctracer support.
w3m's build fails with ``undefined reference to `RAND_egd'`` (a deprecated,
insecure feature) and from building Japanese messages.
Disabling both makes the build of `w3m` work.
This commit shows a template to cut and paste into the package to fix it:
```console
==> fast-global-file-status: Executing phase: 'autoreconf'
==> Error: RuntimeError: Cannot generate configure: missing dependencies autoconf, automake, libtool.
Please add the following lines to the package:
depends_on('autoconf', type='build', when='@master')
depends_on('automake', type='build', when='@master')
depends_on('libtool', type='build', when='@master')
Update the version (when='@master') as needed.
```
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
The build needs pkgconfig and lzma; m4 is already added by autoconf.
Disable generation of kmod manpages, as spack does not have xsltproc yet.
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@ait.ac.at>
Assimp searches for zlib (or builds its own version). When it searches, it can find a system install that is not provided by spack. Ref: d286aadbdf/CMakeLists.txt (L451)
Tumbleweed has been broken for a couple of days. The attempt
to fix it in #26170 didn't really work. Let's try to move to
a more stable release series for openSUSE.
* Make libunwind optional
* Add support for sized_delete and debugalloc
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Fix the build of pango and its 20 dependents: only provide the versions which
support building with autotools (conversion to MesonPackage didn't progress).
This only restores the list of versions from August 10, before the build broke.
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.
The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.
Because of this, we need to track open lock files so that we only close them when
a process no longer needs them. We do this by tracking each lockfile by its
inode and process id. This has several nice properties:
1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
number of times necessary for the locks we have.
Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.
Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
needed (this avoids inadvertently releasing locks that should not be released).
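A minimal sketch of the reference-counting idea (illustration only, not Spack's actual `OpenFileTracker` implementation):
```python
import os

class OpenFileTracker:
    """Illustrative sketch: one open file descriptor per (pid, inode)."""

    def __init__(self):
        self._entries = {}  # (pid, inode) -> [file object, refcount]

    def get_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._entries.get(key)
        if entry is None:
            entry = [open(path, "r+"), 0]
            self._entries[key] = entry
        entry[1] += 1  # another lock on this inode reuses the same fd
        return entry[0]

    def release_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._entries[key]
        entry[1] -= 1
        if entry[1] == 0:
            entry[0].close()  # safe: no remaining locks need this inode
            del self._entries[key]
```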
Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.
CloudFront is 16x faster (or more) than the old bucket.
- [x] change mirror to https://mirror.spack.io
This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:
```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```
This caused clang's version to be misdetected as:
```
clang@11.0.0
Target:
```
This resulted in errors when trying to actually use it as a compiler.
When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:
```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```
Changing the regex to only match until the end of the line fixes these
problems.
Fixes: #19473
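To see why anchoring the match at the end of the line matters, a small illustration (not Spack's exact regex):
```python
import re

output = "clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu"

# A match that can run across newlines swallows the "Target:" line:
bad = re.search(r"clang version (.*)", output, re.DOTALL).group(1)
assert bad == "11.0.0\nTarget: x86_64-unknown-linux-gnu"

# Matching only until the end of the line yields the version alone:
good = re.search(r"clang version (.*)", output).group(1)
assert good == "11.0.0"
```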
clean_environment(): Unset three more environment variables:
MAKEFLAGS: affects make; can, e.g., indirectly inhibit enabling parallel builds
DISPLAY: tests of GUI widget libraries might try to connect to an X server
TERM: could make test suites attempt to color their output
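The underlying operation is simply removing those variables from the build environment; a minimal sketch (Spack's clean_environment() manages many more variables than these three):
```python
import os

# Drop the three variables listed above; absent keys are ignored.
for var in ("MAKEFLAGS", "DISPLAY", "TERM"):
    os.environ.pop(var, None)
```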
* Switch Umpire to CachedCMakePackage
* Fix missing import
* Correct tests option in Umpire
* Switch RAJA to CachedCMakePackage
* Convert CHAI to CachedCMakePackage
* Corrections in RAJA
* Patches in Umpire & RAJA for BLT target export
* Fixup style
* Fixup incorrect use of cmake_cache_string
fixes #25992
Currently the bootstrapping process may need a compiler.
When bootstrapping from sources the need is obvious, while
when bootstrapping from binaries it's currently needed in
case patchelf is not on the system (since it will then be
bootstrapped from sources).
Before this PR we were searching for compilers as the
first operation, in case they were not declared in
the configuration. This fails in case we start
bootstrapping from within an environment.
The fix is to defer the search until we have swapped
configuration.
While debugging #24508, I noticed that we call `basename` in `cc`. The
same can be achieved by using Bash's parameter expansion, saving one
external process per call.
Parameter expansion cannot replace basename for directories in some
cases, but is guaranteed to work for executables.
Git 2.24 introduced a feature flag for repositories with many files, see:
https://github.blog/2019-11-03-highlights-from-git-2-24/#feature-macros
Since Spack's Git repository contains roughly 8,500 files, it can be
worthwhile to enable this, especially on slow file systems such as NFS:
```
$ hyperfine --warmup 3 'cd spack-default; git status' 'cd spack-manyfiles; git status'
Benchmark #1: cd spack-default; git status
Time (mean ± σ): 3.388 s ± 0.095 s [User: 256.2 ms, System: 625.8 ms]
Range (min … max): 3.168 s … 3.535 s 10 runs
Benchmark #2: cd spack-manyfiles; git status
Time (mean ± σ): 168.7 ms ± 10.9 ms [User: 98.6 ms, System: 126.1 ms]
Range (min … max): 144.8 ms … 188.0 ms 19 runs
Summary
'cd spack-manyfiles; git status' ran
20.09 ± 1.42 times faster than 'cd spack-default; git status'
```
* Add support for C++20 to HPX package
* Enable unity builds in HPX package when available
* Add support for HIP/ROCm to HPX package
* Rearrange and update required versions for HPX package
* Add C++20 option to asio package
Modifications:
- [x] Change `defaults/config.yaml`
- [x] Add a fix for bootstrapping patchelf from sources if `compilers.yaml` is empty
- [x] Make `SPACK_TEST_SOLVER=clingo` the default for unit-tests
- [x] Fix package failures in the e4s pipeline
Caveats:
1. CentOS 6 still uses the original concretizer as it can't connect to the buildcache due to issues with `ssl` (bootstrapping from sources requires a C++14 capable compiler)
1. I had to update the image tag for GitlabCI in e699f14.
1. libtool v2.4.2 has been deprecated and other packages received some update
This will allow a user to (from anywhere a Spec is parsed including both name and version) refer to a git commit in lieu of
a package version, and be able to make comparisons with releases in the history based on commits (or with other commits). We do this by way of:
- Adding a property, is_commit, to a version, meaning I can always check if a version is a commit and then change some action.
- Adding an attribute to the Version object which can lookup commits from a git repo and find the last known version before that commit, and the distance
- Construct new Version comparators, which are tuples. For normal versions, they are unchanged. For commits with a previous version x.y.z, d commits away, the comparator is (x, y, z, '', d). For commits with no previous version, the comparator is ('', d) where d is the distance from the first commit in the repo.
- Metadata on git commits is cached in the misc_cache, for quick lookup later.
- Git repos are cached as bare repos in `~/.spack/git_repos`
- In both caches, git repo urls are turned into file paths within the cache
If a commit cannot be found in the cached git repo, we fetch from the repo. If a commit is found in the cached metadata, we do not recompare to newly downloaded tags (assuming repo structure does not change). The cached metadata may be thrown out by using the `spack clean -m` option if you know the repo structure has changed in a way that invalidates existing entries. Future work will include automatic updates.
# Finding previous versions
Spack will search the repo for any tags that match the string of a version given by the `version` directive. Spack will also search for any tags that match `v + string` for any version string. Beyond that, Spack will search for tags that match a SEMVER regex (i.e., tags of the form x.y.z) and interpret those tags as valid versions as well. Future work will increase the breadth of tags understood by Spack
For each tag, Spack queries git to determine whether the tag is an ancestor of the commit in question or not. Spack then sorts the tags that are ancestors of the commit by commit-distance in the repo, and takes the nearest ancestor. The version represented by that tag is listed as the previous version for the commit.
Not all commits will find a previous version, depending on the package workflow. Future work may enable more tangential relationships between commits and versions to be discovered, but many commits in real world git repos require human knowledge to associate with a most recent previous version. Future work will also allow packages to specify commit/tag/version relationships manually for such situations.
# Version comparisons.
The empty string is a valid component of a Spack version tuple, and is in fact the lowest-valued component. It cannot be generated as part of any valid version. These two characteristics make it perfect for delineating previous versions from distances. For any version x.y.z, (x, y, z, '', _) will be less than any "real" version beginning x.y.z. This ensures that no distance from a release will cause the commit to be interpreted as "greater than" a version which is not an ancestor of it.
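An illustration of that ordering with plain Python tuples (Spack's real comparators use custom component types; the values here are made up):
```python
release = (1, 2, 0)            # version 1.2.0
commit = (1, 2, 0, "", 3)      # a commit 3 commits after the 1.2.0 tag
next_release = (1, 2, 1)       # version 1.2.1

# The commit sorts after its ancestor release, but before any later release:
assert release < commit < next_release
```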
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR coincides with tiny changes to spack to support spack monitor using the new spec.
The corresponding spack monitor PR is at https://github.com/spack/spack-monitor/pull/31.
Since there are no changes to the database we can actually update the current server
fairly easily, so either someone can test locally or we can just update and then
test from that (and update as needed).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* ESMF and NEMSIO changes.
- Updating ESMF to set the COMM correctly when using Intel oneapi.
- Explicitly setting the CMake MPI Fortran compiler for NEMSIO.
* Update UFS utils CMake to use MPI_<lang>_COMPILER.
#22845 revealed a long-standing bug that had never been triggered before, because the
hashing algorithm had been stable for multiple years while the bug was in production. The
bug was that when reading a concretized environment, Spack did not properly read in the
build hashes associated with the specs in the environment. Those hashes were recomputed
(and as long as we didn't change the algorithm, were recomputed identically). Spack's
policy, though, is never to recompute a hash. Once something is installed, we respect its
metadata hash forever -- even if internally Spack changes the hashing method. Put
differently, once something is concretized, it has a concrete hash, and that's it -- forever.
When we changed the hashing algorithm for performance in #22845 we exposed the bug.
This PR fixes the bug at its source, by properly reading in the cached build hash attributes
associated with the specs. I've also renamed some variables in the Environment class
methods to make a mistake of this sort more difficult to make in the future.
* ensure environment build hashes are never recomputed
* add comment clarifying reattachment of env build hashes
* bump lockfile version and include specfile version in env meta
* Fix unit-test for v1 to v2 conversion
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Refactor platform etc. to avoid circular dependencies
All the base classes in spack.architecture have been
moved to the corresponding specialized subpackages,
e.g. Platform is now defined within spack.platforms.
This resolves a circular dependency where spack.architecture
was both:
- Defining the base classes for spack.platforms, etc.
- Collecting derived classes from spack.platforms, etc.
Now it does only the latter.
* Move a few platform related functions to "spack.platforms"
* Removed spack.architecture.sys_type()
* Fixup for docs
* Rename Python modules according to review
* dvsdk: Turn off variants by default
This allows an install to more easily be explicit about which pieces to
turn on as more variants are added
* dvsdk: effectively disable the broken variants
* Switch http to https where the latter exists
* Hopefully restore original permissions
* Add URL updates after including the -L curl option
* Manual corrections to select URL format strings
* Tell gtk-doc where the XML catalog is
The gtk-doc configure script has an option for specifying the path to
the XML catalog. If this is not set the configure script will search
a defined set of directories for a catalog file and will set
`with_xml_catalog` based on that. Only if no system catalog is found will
the XML_CATALOG_FILES be looked at. In order to make sure that the spack
provided catalog is used, pass the `--with-xml-catalog` option.
* Use the property from docbook-xml
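A hedged sketch of what passing the option might look like in the package (the `catalog` property name is an assumption):
```python
def configure_args(self):
    # Point configure at the Spack-provided catalog instead of letting it
    # probe system directories.
    catalog = self.spec["docbook-xml"].package.catalog  # assumed property
    return ["--with-xml-catalog={0}".format(catalog)]
```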
Currently as part of installing a package, we lock a prefix, check if
it exists, and create it if not; the logic for creating the prefix
included a check for the existence of that prefix (and raised an
exception if it did), which was redundant.
This also includes removal of tests which were not verifying
anything (they pass with or without the modifications in this PR).
- Parallel install was failing to generate a config file.
- OpenSSH has an extensive test suite, run it if requested.
- 'executables' wrongly had 'rsh'; replaced it with the openssh tools.
There are two ways to build SQLite: With the Autotools setup or the
so-called "amalgamation" which is a single large C file containing the
SQLite implementation. The amalgamation build is controlled by
pre-processor flags and the Spack setup was using an amalgamation
pre-processor flag for a feature that is controlled by an option of the
configure script. As a consequence, until now Spack has always built
SQLite with the rtree feature enabled.
Knowing that spack has patched the code and organized the build is potentially valuable information for GROMACS users and developers troubleshooting their builds.
PLUMED applies further patches to GROMACS, so that is expressed directly as well.
Modifications:
- Export platforms from spack.platforms directly, so that client modules don't have to import submodules
- Use only plain imports in test/architecture.py
- Parametrized test in test/architecture.py and put most of the setup/teardown in fixtures
This is a major rework of Spack's core `spec.yaml` metadata format. It moves from `spec.yaml` to `spec.json` for speed, and it changes the format in several ways. Specifically:
1. The spec format now has a `_meta` section with a version (now set to version `2`). This will simplify major changes like this one in the future.
2. The node list in spec dictionaries is no longer keyed by name. Instead, it is a list of records with no required key. The name, hash, etc. are fields in the dictionary records like any other.
3. Dependencies can be keyed by any hash (`hash`, `full_hash`, `build_hash`).
4. `build_spec` provenance from #20262 is included in the spec format. This means that, for spliced specs, we preserve the *full* provenance of how to build, and we can reproduce a spliced spec from the original builds that produced it.
**NOTE**: Because we have switched the spec format, this PR changes Spack's hashing algorithm. This means that after this commit, Spack will think a lot of things need rebuilds.
There are two major benefits this PR provides:
* The switch to JSON format speeds up Spack significantly, as Python's builtin JSON implementation is orders of magnitude faster than YAML.
* The new Spec format will soon allow us to represent DAGs with potentially multiple versions of the same dependency -- e.g., for build dependencies or for compilers-as-dependencies. This PR lays the necessary groundwork for those features.
The old `spec.yaml` format continues to be supported, but is now considered a legacy format, and Spack will opportunistically convert these to the new `spec.json` format.
* Added spackage to build Sina (https://github.com/LLNL/Sina).
* Improvements to sina/package.py
Made numerous simplifications and improvements to sina/package.py
based on PR feedback.
* Added licence info
* Added maintainers
* Changed maintainers to be Github IDs.
Added a libunwind dependency for mpip@3.5: when the libunwind variant is true
(the default) and setjmp is false (also the default), to avoid a configure-time
error from not finding libunwind.
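As a directive, the described constraint might look like this (a sketch, not necessarily the exact line added):
```python
# Hedged sketch: libunwind is only needed for the default variant combination.
depends_on("libunwind", when="@3.5: +libunwind ~setjmp")
```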
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This modification accounts for:
1. Bootstrapping from sources using system, non-standard Python
2. Using later an ABI compatible standard Python interpreter
* tests: make `spack url [stats|summary]` work on mock packages
Mock packages have historically had mock hashes, but this means they're also invalid
as far as Spack's hash detection is concerned.
- [x] convert all hashes in mock package to md5 or sha256
- [x] ensure that all mock packages have a URL
- [x] ignore some special cases with multiple VCS fetchers
* url stats: add `--show-issues` option
`spack url stats` tells us how many URLs are using what protocol, type of checksum,
etc., but it previously did not tell us which packages and URLs had the issues. This
adds a `--show-issues` option to show URLs with insecure (`http`) URLs or `md5` hashes
(which are now deprecated by NIST).
This fixes the compilation of gcc versions below 11.1.0, caused by the
removal of cyclades from libsanitizer, as described in
the patch:
The Linux kernel has removed the interface to cyclades from the latest
kernel headers due to them being orphaned for the past 13
years. libsanitizer uses this header when compiling against glibc, but
glibc itself doesn't seem to have any references to cyclades. Furthermore,
it seems that the driver is broken in the kernel and the firmware
doesn't seem to be available anymore. As such, since this is breaking
the build of libsanitizer (and so the GCC bootstrap), it is proposed to
remove this.
Co-authored-by: Arcesio Castaneda Medina <arcesio.castaneda.medina@itwm.fraunhofer.de>
By changing return values from C #defines to enums, gdbm-1.20 breaks a kludge:
#ifndef GDBM_ITEM_NOT_FOUND
# define GDBM_ITEM_NOT_FOUND GDBM_NO_ERROR
#endif
The absence of the #define causes perl to #define GDBM_ITEM_NOT_FOUND
as GDBM_NO_ERROR, which is incorrect for gdbm@1.20.
* Optionally enable ccmake in cmake
Renames ncurses variant to `ccmake` since that's how users know it, and
explicitly enable/disable `BUILD_CursesDialog`.
* Make cmake locate its dependencies with CMAKE_PREFIX_PATH, and set rpath flags too
* Undo variant name & defaults change
Fixes removal of SPACK_ENV_PATH from PATH in the presence of trailing
slashes in the elements of PATH:
The compiler wrapper has to ensure that it is not called nested like
it would happen when gcc's collect2 uses PATH to call the linker ld,
or else the compilation fails.
To prevent nested calls, the compiler wrapper removes the elements
of SPACK_ENV_PATH from PATH.
Sadly, the autotest framework appends a slash to each element
of PATH when adding AUTOTEST_PATH to the PATH for the tests,
and some tests like those of GNU bison run cc inside the test.
Thus, ensure that PATH cleanup works even with trailing slashes.
This fixes the autotest suite of bison, compiling hundreds of
bison-generated test cases in an autotest-generated testsuite.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
netlib-lapack: Version 3.9.0 and above no longer builds with the IBM XL
compiler (#25447). Ported some fixes from the old ibm-xl.patch and added
logic for detection of XL's -qrecur flag.
Apply stable-release fixes from 2017 to older autoconf releases:
- Fix the scripts autoheader and autoscan to pass the test suite
- Fix a test case to pass when libtool 2.4.3+ is in use
autoconf-2.13 dates back to 1999. The build hasn't been possible for
4 years: since 2017, we patch autom4te, which didn't exist in 2.13,
failing its build. 4 years of not being able to build 2.13
is a crystal-clear indication that we can remove it safely.
* amrex: support sundials variant in newer amrex versions
* propagate cuda_arch to sundials
* change to old string formatting
* require sundials+rocm when amrex+rocm
Ensure that the testsuite has py-anytree and py-parameterized
and finds gtk-doc's gtkdocize.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR will add a new audit, specifically for spack package homepage urls (and eventually
other kinds I suspect) to see if there is an http address that can be changed to https.
Usage is as follows:
```bash
$ spack audit packages-https <package>
```
And in list view:
```bash
$ spack audit list
generic:
  Generic checks relying on global variables
configs:
  Sanity checks on compilers.yaml
  Sanity checks on packages.yaml
packages:
  Sanity checks on specs used in directives
packages-https:
  Sanity checks on https checks of package urls, etc.
```
I think it would be unwise to include this with the packages audit, because it makes network requests and therefore takes a long time when run for all packages. I also like the idea of more well-scoped checks -- likely there will be other addresses for http/https within a package that we eventually check. For now, there are two error cases. One is when an https url is tried but there is some SSL error (or other error that means we cannot update to https):
```bash
$ spack audit packages-https zoltan
PKG-HTTPS-DIRECTIVES: 1 issue found
1. Error with attempting https for "zoltan":
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'www.cs.sandia.gov'. (_ssl.c:1125)>
```
This is either not fixable, or could be fixed with a change to the url or (better) contacting the site owners to ask about some certificate or similar.
The second case is when there is an http that needs to be https, which is a huge issue now, but hopefully not after this spack PR.
```bash
$ spack audit packages-https xman
Package "xman" uses http but has a valid https endpoint.
```
And then when a package is fixed:
```bash
$ spack audit packages-https zlib
PKG-HTTPS-DIRECTIVES: 0 issues found.
```
And that's mostly it. :)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* py-jupyterhub: add version: 1.4.1
* don't need mako for latest release
* sort dependencies
* notebook isn't used for 1.4.1+
* add dependency on py-jupyter-telemetry; create new package py-jupyter-telemetry
* py-jupyter-telemetry: declare missing dependencies
* py-jupyterhub: need more specific depends_on before less specific
* add py-json-logger; py-jupyter-telemetry: add depends_on for py-json-logger
* Update var/spack/repos/builtin/packages/py-jupyter-telemetry/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove py-json-logger, which was erroneously added as a duplicate
* Update var/spack/repos/builtin/packages/py-jupyterhub/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* need py-alembic@1.4: for newest py-jupyterhub
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add a __reduce__ method to Spec
fixes #23892
The recursion limit seems to be due to the default
way in which a Spec is serialized, following all
the attributes. It's still not clear to me why this
is related to being in an environment, but in any
case we already have methods to serialize Specs to
disk in JSON and YAML format. Here we use them to
pickle a Spec instance too.
* Downgrade to build-hash
Hopefully nothing will change the package in
between serializing the spec and sending it
to the child process.
* Add support for Python 2
* Make sure PackageInstaller does not remove the just-restored
install dir after failure in spack install --overwrite
* Remove cryptic error message and rethrow actual error
By default, figlet looks for fonts in `/usr/local/share/figlet`, and if
it doesn't exist you get `figlet: standard: Unable to open font file`.
This fix changes the default font dir to the one installed in the
install prefix.
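One plausible way to implement such a fix in a Spack package is to rewrite the hard-coded path during the edit phase; a hypothetical sketch (file name and build layout assumed):
```python
from llnl.util.filesystem import filter_file

def edit(self, spec, prefix):
    # Hedged sketch: point the default font directory into the prefix.
    filter_file(
        "/usr/local/share/figlet", prefix.share.figlet, "Makefile", string=True
    )
```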
The gcc compiler can be configured to use `ld.gold` by default. It will
then call `ld.gold` explicitly when linking. When it does, spack needs to have
an ld.gold wrapper in PATH to inject rpaths, link flags, etc.
Also, I wouldn't be surprised to see some package calling `ld.gold`
directly.
As with ld.gold, the argument could be made that we want to support any
package that could call ld.lld.
* Add a __reduce__ method to SpecBuildInterface
This class was confusing pickle when being serialized,
due to its scary nature of being an object that disguises
itself as another type.
* Add more MacOS tests, switch them to clingo
* Fix condition syntax
* Remove Python v3.6 and v3.9 with macOS
some of these are not resolvable in that there is only an http page
available, or a page reported as broken is actually ok, or a page has
an SSL error that does not prevent one from visiting (and no good replacement)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* Add intel-tbb-oneapi package that does the cmake configure and build.
Compare to the intel-oneapi-tbb package, which only downloads a script that contains prebuilt binaries.
* Rename package intel-tbb-cmake
* Incorporate intel-tbb-cmake into intel-tbb package
* Conditionally remove 'context' from kwargs in _urlopen
Previously, 'context' was purged from kwargs in _urlopen to
conform to varying support for 'context' in different versions
of urllib. This fix tries to use 'context', and then removes
it and tries again if an exception is thrown.
* Specify error type in try statement in _urlopen
Specify TypeError when checking if 'context' is in kwargs
for _urlopen. Also, if the try fails, check that 'context' is
in the error message before removing it from kwargs.
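A hedged sketch of the try-and-retry logic described above (simplified; not the exact Spack code):
```python
import urllib.request

def _urlopen(req, **kwargs):
    try:
        return urllib.request.urlopen(req, **kwargs)
    except TypeError as err:
        # Older urllib versions reject the 'context' keyword; retry without it.
        if "context" in str(err) and "context" in kwargs:
            kwargs.pop("context")
            return urllib.request.urlopen(req, **kwargs)
        raise
```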
This is a direct follow-up to #13557, which caches additional attributes added in #24095 that are expensive to compute. I had to reopen #25556 in another PR to invalidate the GitLab CI cache, but see #25556 for prior discussion.
### Before
```console
$ time spack env activate .
real 2m13.037s
user 1m25.584s
sys 0m43.654s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 16m3.541s
user 10m28.892s
sys 4m57.816s
$ time spack env deactivate
real 2m30.974s
user 1m38.090s
sys 0m49.781s
```
### After
```console
$ time spack env activate .
real 0m8.937s
user 0m7.323s
sys 0m1.074s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 2m22.024s
user 1m44.739s
sys 0m30.717s
$ time spack env deactivate
real 0m10.398s
user 0m8.414s
sys 0m1.630s
```
Fixes #25555
Fixes #25541
* Speedup environment activation, part 2
* Only query distutils a single time
* Fix KeyError bug
* Make vermin happy
* Manual memoize
* Add comment on cross-compiling
* Use platform-specific include directory
* Fix multiple bugs
* Fix python_inc discrepancy
* Fix import tests
Most of these are perl packages that need to point to the meta docs site,
plus a fair number of http addresses that need to be https; the rest
are usually documentation sites that no longer exist or were
otherwise changed.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* Set pubkey trust to ultimate during `gpg trust`
Tries to solve the same problem as #24760 without suppressing stderr
from gpg commands.
This PR makes every imported key trusted in the gpg database.
Note: I've outlined
[here](https://github.com/spack/spack/pull/24760#issuecomment-883183175)
that gpg's trust model makes sense, since how can we trust a random
public key we download from a binary cache?
* Fix test
Fixes #25603
This commit adds a new context manager to temporarily
deactivate active environments. This context manager
is used when setting up bootstrapping configuration to
make sure that the current environment is not affected
by operations on the bootstrap store.
* Preserve exit code 1 if nothing is found
* Use context manager for the environment
- remove unneeded dependency on blas
- create external-lapack variant
- patch makefile to not build lapack if `+external-lapack`
Also:
- fix homepage link
- set parallel = False
- make references to `spec` consistent
- remove unneeded `build` method
This commit adds a regression test for version selection
with preferences in `packages.yaml`. Before PR 25585 we
used negative weights in a minimization to select the
optimal version. This may lead to situations where a
dependency may make the version score of dependents
"better" if it is preferred in packages.yaml.
PackageInstaller and Package.installed disagree over what it means
for a package to be installed: PackageInstaller believes it should be
enough for a database entry to exist, whereas Package.installed
requires a database entry & a prefix directory.
This leads to the following niche issue:
* a develop spec in an environment is successfully installed
* then somehow its install prefix is removed (e.g. through a bug fixed
in #25583)
* you modify the sources and reinstall the environment
1. spack checks pkg.installed and realizes the develop spec is NOT
installed, therefore it doesn't need to have 'overwrite: true'
2. the installer gets the build task and checks the database and
realizes the spec IS installed, hence it doesn't have to install it.
3. the develop spec is not rebuilt.
The solution is to make PackageInstaller and pkg.installed agree over
what it means to be installed, and this PR does that by dropping the
prefix directory check from pkg.installed, so that it only checks the
database.
As a result, spack will create a build task with overwrite: true for
the develop spec, and the installer in fact handles overwrite requests
fine even if the install prefix doesn't exist (it just does a normal
install).
By default, the number of parallel compiler processes launched by
py-grpcio equals the number of threads. This commit limits it to
the spack config build_jobs value.
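A sketch of how that limit can be applied (grpcio reads the GRPC_PYTHON_BUILD_EXT_COMPILER_JOBS environment variable; treat the exact mechanism as an assumption here):
```python
import spack.config

def setup_build_environment(self, env):
    # Hedged sketch: cap grpcio's compiler processes at Spack's build_jobs.
    jobs = spack.config.get("config:build_jobs")
    env.set("GRPC_PYTHON_BUILD_EXT_COMPILER_JOBS", str(jobs))
```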
* Provide new version of eospac.
+ Provide version 6.5.0beta.
+ Make version 6.4.2 the default
+ Also increment
* volunteer to be the maintainer (for now).
see #25563
When we have a concrete environment and we ask to install a
concrete spec from a file, currently Spack returns a list of
all the specs that match the argument's DAG hash.
Instead we want to compare build hashes, which also account
for build-only dependencies.
* Added py-meshio package
* Added setuptools dependency to py-meshio package
* Update var/spack/repos/builtin/packages/py-meshio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-meshio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-meshio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-meshio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-meshio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added missing py-importlib-metadata dependency in py-meshio
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
#25303 filtered padding from build output, but it's still there in binary install/relocate output,
so our CI logs are still quite long and frequently hit the limit.
- [x] add context handler from #25303 to buildcache installation as well
This allows you to run `spack graph --installed` from within an environment and get a dot graph of
its concrete specs.
- [x] make `spack graph -i` environment-aware
- [x] add code to the generated dot graph to ensure roots have min rank (i.e., they're all at the
top or left of the DAG)
As of cray-mpich version 8.1.7, conventional MPI compiler wrappers are included in cray-mpich.
Co-authored-by: Luke Roskop <lroskop@cedar.head.cm.us.cray.com>
Bootstrapping clingo on macOS on `develop` gives errors like this:
```
==> Error: RuntimeError: Unable to locate python command in /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/bin
/Users/gamblin2/Workspace/spack/var/spack/repos/builtin/packages/python/package.py:662, in command:
659 return Executable(path)
660 else:
661 msg = 'Unable to locate {0} command in {1}'
>> 662 raise RuntimeError(msg.format(self.name, self.prefix.bin))
```
On macOS, `python` is laid out differently. In particular, `sys.executable` is here:
```console
Python 2.7.16 (default, May 8 2021, 11:48:02)
[GCC Apple LLVM 12.0.5 (clang-1205.0.19.59.6) [+internal-os, ptrauth-isa=deploy on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python'
```
Based on that, you'd think that
`/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents` would be
where you'd look for a `bin` directory, but you (and Spack) would be wrong:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/
Info.plist MacOS/ PkgInfo Resources/ _CodeSignature/ version.plist
```
You need to look in `sys.exec_prefix`
```
>>> sys.exec_prefix
'/System/Library/Frameworks/Python.framework/Versions/2.7'
```
Which looks much more like a standard prefix, with understandable `bin`, `lib`, and `include`
directories:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7
Extras/ Mac/ Resources/ bin/ lib/
Headers@ Python* _CodeSignature/ include/
$ ls -l /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
lrwxr-xr-x 1 root wheel 7B Jan 1 2020 /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python@ -> python2
```
- [x] change `bootstrap.py` to use the `sys.exec_prefix` as the external prefix, instead of just
getting the parent directory of the executable.
This commit reworks version facts so that:
1. All the information on versions is collected
before emitting the facts
2. The same kind of atom is emitted for versions
stemming from different origins (package.py
vs. packages.yaml)
In the end all the possible versions for a given
package are totally ordered, and they are given
different and increasing weights starting from zero.
This refactor allows us to avoid using negative
weights, which in some configurations may make a
parent node's score "better" and lead to unexpected
"optimal" results.
Add HPDDM, MMG, ParMMG and Tetgen to PETSc.
Add mmg version 5.5.2 (compatible with PETSc).
Add parmmg, depending on mmg.
Add pic variant to tetgen for PETSc.
* Adding a heap of NOAA packages for UFS.
Adding the Unified Forecast System (UFS) and all of the packages
it depends on.
* Fixing style tests.
* Removing the package CMAKE_BUILD_TYPE override.
* Removing compiler specs from `cmake_args()`.
- provides the site packages fix
- excludes the hdf5 linking changes (which are fixed in conduit@develop's build system)
- relaxes constraints to allow building static ascent against shared python
Once PR binary graduation is deployed, the shared PR mirror will
contain binaries just built by a merged PR, before the subsequent
develop pipeline has had time to finish. Using the shared PR mirror
as a source of binaries will reduce the number of times we have to
rebuild the same full hash.
* Refactor active environment getters
- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for
commands that require an environment (rather than abusing
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation
* Python 2: toplevel import errors only with 'as ev'
In two files, `import spack.environment as ev` leads to errors.
These errors are not well understood ("'module' object has no attribute
'environment'"). All other files standardize on the above syntax.
* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray requires, at the moment, swapping
the platform when looking for binaries, due
to #22800.
* Add SHA256 verification for bootstrapped software
Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450 we may happen to need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
"sources" which lists the methods available for
bootstrapping software
"trusted" which records if a source is trusted or not
A subcommand has been added to "spack bootstrap" to list
the sources currently available.
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Permit to trust/untrust bootstrapping methods
* Add binary tests for MacOS, Ubuntu
* Add documentation
* Add a note on bash
Spack is internally using a patched version of `argparse` mainly to backport Python 3 functionality
into Python 2. This PR makes it such that for the supported Python 3 versions we use `argparse`
from the standard Python library. This PR has been extracted from #25371 where it was needed
to be able to use recent versions of `pytest`.
* Fixed formatting issues when using a pristine argparse.py
* Fix error message for Python 3.X when missing positional arguments
* Account for the change of API in Python 3.7
* Layout multi-valued args into columns in error messages
* Seamless transition in develop if argparse.pyc is in external
* Be more defensive in case we can't remove the file.
Add link type to spack.yaml format
Add tests to verify link behavior is correct for installed files
for all three view types
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* ampl: Add missing ampl_lic install and improve look of resources
* ampl: Add myself as maintainer
* ampl: Remove unused variable and delete extra lines
Co-authored-by: Rob Groner <rug262@psu.edu>
* rct: new packages (core packages and some dependencies)
* rct: new packages (core packages and some dependencies)
* radical-entk: updated dependencies (according to comments)
* radical-gtod: updated version name
* radical-pilot: updated dependencies (according to comments)
* radical-saga: updated dependencies (according to comments)
* radical-utils: updated dependencies and set old versions deprecated
* saga-python: removed due to absence of packages (in PyPI, GitHub), this project was replaced by `radical-saga` and corresponding package `py-radical-saga` should be used
* saga-python: rolled back, but with deprecation status
* ntplib: removed maintainer
* pika: removed maintainer
The commands have been deprecated in #7098, and have
been failing with an error message since then.
Cleaning the code since it is unlikely that somebody
is still using them.
* Fix for building shared lib when enabling ROCm, for STRUMPACK 5.1.1.
* Update patch for shared lib with STRUMPACK 5.1.1 and ROCm, also update FindHIP.cmake
* update patch for shared libs with ROCm
Preferred providers had a non-zero weight because in an earlier formulation of the logic program that was needed to prefer external providers over default providers. With the current formulation for externals this is not needed anymore, so we can give a weight of zero to both default choices and providers that are externals. _Using zero ensures that we don't introduce any drift towards having fewer providers, which was happening when minimizing positive weights_.
Modifications:
- [x] Default weight for providers starts at 0 (instead of 10, needed before to prefer externals)
- [x] Rules to compute the `provider_weight` have been refactored. There are multiple possible weights for a given `Virtual`. Only one gets selected by the solver (the one that minimizes the objective function).
- [x] `provider_weight` are now accounting for each different `Virtual`. Before there was a single weight per provider, even if the package was providing multiple virtuals.
* Give preferred providers a weight of zero
Preferred providers had a non-zero weight because in an earlier
formulation of the logic program that was needed to prefer
external providers over default providers.
With the current formulation for externals this is not needed anymore,
so we can give a weight of zero to default choices. Using zero
ensures that we don't introduce any drift towards having
fewer providers, which was happening when minimizing positive weights.
* Simplify how we compute weights for providers
Rewrite rules so that specific events (i.e. being
an external) unlock the possibility to use certain
weights. The weight being considered is then selected
by the minimization process to be the one that gives
the best score.
* Allow providers to have different weights for different virtuals
Before this change we didn't differentiate providers based on
the virtual they provide, which meant that packages providing
more than one virtual had nonetheless a single weight.
With this change there will be a weight per virtual.
* elk package updated to handle the 3 latest versions; support for older
versions is dropped
* fixed typos
* openmp dependency handling added
* and for blis too
* Retain support for elk 3, deprecate
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cp2k: fix build with GCC-10+ and MPICH
* cp2k: update SIRIUS and ELPA dependencies
* elpa: add version 2021.05.001, add ROCm support, include SVE flags
This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:
```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```
Additionally, plenty of builds output padded paths in other places -- not just command
arguments, but also other `tty` messages via `llnl.util.filesystem` and elsewhere. `Executable`
isn't really the right place for this.
This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.
- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
These versions can cause weird concretizations, and it looks like the
old version of xsdk may not even work because of xsdktrilinos being
disabled. The hypre version tagged for xsdk@0.2 no longer exists at the
described location.
With the previous naming scheme, `trilinos@:10` concretizes to
`trilinos@xsdk-0.2.0`. Now, it's clear what the xsdk version is closest
to. Changed from tag to the corresponding commit SHA for safety.
* Do not allow cray build system patch for later version of otf2
* Modify flag_handler logic in the trilinos package
Modify flag_handler logic in the trilinos package to work better with compilers
other than CCE
Run CTest at build time with:
```
spack install --test=root openpmd-api@<version>
```
and run smoke-tests after install and loading of the package via
```
spack load -r /<spec>
spack test run /<spec>
```
This pull request adds a new workflow to build and deploy Spack Docker containers
from GitHub Actions. In comparison with our current system where we use Dockerhub's
CI to build our Docker containers, this workflow will allow us to now build for multiple
architectures and deploy to multiple registries. (At the moment x86_64 and Arm64 because
ppc64le is throwing an error within archspec.)
As currently set up, the PR will build all of the current containers (minus Centos6 because
those yum repositories are no longer available?) as both x86_64 and Arm64 variants. The
workflow is currently setup to build and deploy containers nightly from develop as well as
on tagged releases. The workflow will also build, but NOT deploy containers on a pull request
for the purposes of testing this PR. At the moment it is setup to deploy the built containers to
GitHub's Container Registry although, support for also uploading to Dockerhub/Quay can be
included easily if we decide to keep releasing on Dockerhub/want to begin releasing on Quay.
This is an attempt to fix "Missing base commit" messages in the codecov UI. Because we do not run
full tests on package PRs, package PRs' merge commits on `develop` don't have coverage info. It
appears that codecov will give you an error if the pseudo-base's coverage data doesn't all apply
properly to the real PR base, unless the `allow_coverage_offsets` option is set.
* See here for docs:
https://docs.codecov.com/docs/comparing-commits#pseudo-comparison
* See here for another potential solution:
https://community.codecov.com/t/2480/15
`compare_specs()` had a `colorful` keyword argument, but everything else in
spack uses `color` for this.
- [x] rename the argument
- [x] make the default follow spack's `--color=always/never/auto` setting
* Bump py-boto3, add python constraints, bump deps
* Update var/spack/repos/builtin/packages/py-boto3/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-boto3/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-boto3/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-os-service-types
* Update var/spack/repos/builtin/packages/py-os-service-types/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-oslo-i18n
* Update var/spack/repos/builtin/packages/py-oslo-i18n/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Bump py-botocore and add python constraints
* Update var/spack/repos/builtin/packages/py-botocore/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Add a workflow to test bootstrapping clingo on
different platforms so that we can detect changes
that break it.
Compute `site_packages_dir` in `bootstrap.py` as it was
before #24095, until we figure out a better way to override
that attribute.
These are the versions tested (and successfully patched) against
intel-tbb.nvhpc-remove-flags.2017.patch: @2017, @2017.8, @2018, @2018.6
intel-tbb.nvhpc-remove-flags.2019.patch: @2019
intel-tbb.nvhpc-remove-flags.2019.1.patch: @2019.[1-6]
intel-tbb.nvhpc-remove-flags.2019.7.patch: @2019.[7-8]
intel-tbb.nvhpc-remove-flags.2019.9.patch: @2019.9, 2020.[0-3]
The intel-tbb.nvhpc-version-script-fix.2017.patch was tested and
applied successfully against all of the versions above.
Padded install paths can get to be very long in the verbose install
output. This has to be filtered out by the Executable class, as it
generates these debug messages.
- [x] add ability to filter paths from Executable output.
- [x] add a context manager that can enable path filtering
- [x] make `build_process` in `installer.py` enable path filtering
This should hopefully allow us to see most of the build output in
Gitlab pipeline builds again.
`build_process` has been around a long time but it's become a very large,
unwieldy method. It's hard to work with because it has a lot of local
variables that need to persist across all of the code.
- [x] To address this, convert it to its own `BuildInfoProcess` class.
- [x] Start breaking the method apart by factoring out the main
installation logic into its own function.
When context managers are used to save and restore values, we need to remember
to use try/finally around the yield in case an exception is thrown. Otherwise,
the cleanup will be skipped.
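A minimal generic sketch of the pattern (not the specific Spack code):
```python
from contextlib import contextmanager

@contextmanager
def preserve_attr(obj, name, temporary_value):
    saved = getattr(obj, name)
    setattr(obj, name, temporary_value)
    try:
        yield
    finally:
        # Without try/finally, an exception inside the with-block would skip
        # this line and leak the temporary value.
        setattr(obj, name, saved)
```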
* rnpletal: New package
RNPL is an old package that is still used today by my collaborators, but doesn't see any development any more. I'm creating a Spack package merely to make it easier to install it on various systems. The code is not modern (C without prototypes – yes, that used to be a thing), and a large diff modernizes the code to make it palatable to modern C and Fortran compilers.
RNPL contains several sub-packages. The current Spack package builds only the main one.
* rnpletal: Remove unused import
* Convert into AutotoolsPackage
* Don't check for "shared" variant
* rnpletal: Change "version" to `develop`
* rnpletal: Use existing `configure` function
* adjust for erroneous detection of nvc as gcc
adjust for erroneous detection of nvc as gcc when it is built with gcc
* add missing parenthesis :/
* fix trailing whitespace
* re-work hdf5 patch for nvc to make it more general
* flake8 fixes
* Render as comment
Render intended note as a comment rather than logical constraint
Co-authored-by: Frank Willmore <willmore@anl.gov>
- Change config from the undocumented `use_curl: true/false` to `url_fetch_method: urllib/curl`.
- Documentation of `url_fetch_method` in `defaults/config.yaml`
- Default fetch option explicitly set to `urllib` for users who may not have curl on their system
To upgrade from `use_curl` to `url_fetch_method`, run `spack config update config`
* kadath: New package
* Update var/spack/repos/builtin/packages/kadath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/kadath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/kadath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/kadath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* kadath: Add description to MPI variant
* kadath: Add empty line
* kadath: Add variant "codes=none" to avoid empty default
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add variant with compiler optimization
Update package.py to include a variant with compiler optimization, benchmarked at the A-HUG hackathon to improve major kernel time by roughly 3%.
* fix style
* Update var/spack/repos/builtin/packages/laghos/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The output order for `spack diff` is nondeterministic for larger diffs -- if you
run it several times it will not put the fields in the spec in the same order on
successive invocations.
This makes a few fixes to `spack diff`:
- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`
- [x] Sort the lists of diff properties so that the output is always in the same order.
- [x] Make the output for different fields the same as what we use in the solver. Previously, we
would use `Type(value)` for non-string values and `value` for strings. Now we just use
the value. So the output looks a little cleaner:
```
== Old ========================== == New ====================
@@ node_target @@ @@ node_target @@
- gdbm Target(x86_64) - gdbm x86_64
+ zlib Target(skylake) + zlib skylake
@@ variant_value @@ @@ variant_value @@
- ncurses symlinks bool(False) - ncurses symlinks False
+ zlib optimize bool(True) + zlib optimize True
@@ version @@ @@ version @@
- gdbm Version(1.18.1) - gdbm 1.18.1
+ zlib Version(1.2.11) + zlib 1.2.11
@@ node_os @@ @@ node_os @@
- gdbm catalina - gdbm catalina
+ zlib catalina + zlib catalina
```
I suppose we could use `repr()` in the output for consistency, but we don't
do that elsewhere -- the types of things in Specs are all stringifiable, so
the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.
* lorene: Install only executables, not unrelated files in the same directory
* lorene: Don't determine compile dependencies
The current way doesn't work (cpp misses C++ include paths), and we don't need dependencies anyway.
* lorene: Correct BLAS library names
* lorene: Remove comment
Gitlab truncates job trace output (even the complete raw output) at 4MB,
so this change captures it to a file under "user_data" artifacts as well,
to make sure we can debug output from the end of the rebuild job.
When a spec fails to build on `develop`, instead of storing an empty file as the entry in the broken specs list, this change stores the full spec yaml as well as links to the failing pipeline and job.
A `spack diff` will take two specs, and then use the spack.solver.asp.SpackSolverSetup to generate
lists of facts about each (e.g., nodes, variants, etc.) and then take a set difference between the
two to show the user the differences.
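Conceptually, the diff is a set difference over fact tuples, roughly like this toy sketch (`facts_for` is a hypothetical stand-in for the real fact generation):
```python
def facts_for(spec):
    # Stand-in for SpackSolverSetup: one tuple per fact about the spec.
    return {("node", spec["name"]), ("version", spec["name"], spec["version"])}

a = facts_for({"name": "python", "version": "2.7.8"})
b = facts_for({"name": "python", "version": "3.8.11"})
print(sorted(a - b))  # facts only in the first spec
print(sorted(b - a))  # facts only in the second spec
```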
Example output:
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
- python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+ python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
- openssl Version(1.0.2u)
+ openssl Version(1.1.1k)
- python Version(2.7.8)
+ python Version(3.8.11)
Currently this uses diff-like output but we will attempt to improve on this in the future.
One use case for `spack diff` is whenever a user needs to disambiguate two installs
and cannot remember how they differ. The command can also output `--json` for a more
analysis-oriented use case where we want to save complete data with all the diffs and
the intersection. However, the command is really intended for command line use, and we
will likely have an analyzer better suited to saving data.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Catch ConnectionError from CDash reporter
Catch ConnectionError when attempting to upload the results of `spack install`
to CDash. This follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
* Catch HTTP Error 400 (Bad Request) in relate_cdash_builds()
* sst-elements: add optional support for flashdimmsim, dramsim3 and
add new packages for each
* sst-dumpi: add version 7.1.0
* sst-core: autotools dependencies are required for all versions
* new package: dtc
* add error message redirect for +dumpi, otf, and otf2: these are not
currently supported
Modifications:
- Remove the "build tests" workflow from GitHub Actions
- Setup a similar e2e test on Gitlab
In this way we'll reduce load on GitHub Actions workflows, and the e2e tests will
benefit from the buildcache reuse granted by pipelines.
ENABLE_SPLASH configuration has been removed entirely after 21.06 so
patch is no longer necessary after #24931. (Versions between 0.90.1 and
21.06 will likely still need a patch, and while it's not clear if this
patch is the right one, seems better to leave something in.)
- add version 9.1.2
- set a license file
- set the license environment variable
- remove the download and license information from the description so
it does not show up in environment modules
- extend python and set python version constraints
- build gurobipy to be used in any compatible python, used for more
extensive computations than the gurobi shell
- remove preexisting PYTHONPATH from gurobi.sh as the shell uses a
built-in python, which will likely be different from "system" python
- add maintainer
`spack style` previously used a Travis CI variable to figure out
what the base branch of a PR was, and this was apparently also set
on `develop`. We switched to `GITHUB_BASE_REF` to support GitHub
Actions, but it looks like this is set to `""` in pushes to develop,
so `spack style` breaks there.
This PR does two things:
- [x] Remove `GITHUB_BASE_REF` knowledge from `spack style` entirely
- [x] Handle `GITHUB_BASE_REF` in style scripts instead, and explicitly
pass the base ref if it is present, but omit it otherwise.
This makes `spack style` *not* dependent on the environment and fixes
handling of the base branch in the right place.
This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.
We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) options are used. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.
- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance
* [py-lmfit] fixed py-asteval dependency requirements
* [py-lmfit] added version 1.0.2
* [py-lmfit] flake8
* [py-lmfit] 1.0.2 requires python 3.6
* [py-lmfit] removed newer dependency requirements to be in line with setup.py not requirements.txt
* pbs: new virtual package
Some of our clusters have an older installation of
libtorque and tm.h that are *not* from OpenPBS. Using the current
openpbs dependency for openmpi causes concretization errors due to
restrictions on older python and hwloc requirements that don't apply,
even with an external non-buildable installation.
The new 'torque' bundle package allows users to point to that external
installation without problems.
Detailed description of torque by Sergey Kosukhin <skosukhin@gmail.com>
Intel oneAPI installs maintain a lock file in XDG_RUNTIME_DIR,
which by default exists in /tmp (and is shared by all component
installs). This prevented multiple oneAPI components from being
installed in parallel. This commit sets XDG_RUNTIME_DIR to exist
within Spack's installation Stage, so allows multiple components
to be installed at the same time.
* aws-parallelcluster: update maintainers list
Signed-off-by: Tim Lane <tilne@amazon.com>
* aws-parallelcluster: add v2.11.1
Signed-off-by: Tim Lane <tilne@amazon.com>
This PR fixes the tesseract package
- add missing dependencies
- build documentation
- build and install java component
- build and install training component
This uses our bootstrapping logic to automatically install dependencies for
`spack style`. Users should no longer have to pre-install all of the tools
(`isort`, `mypy`, `black`, `flake8`). The command will do it for them.
- [x] add logic to bootstrap specs with specific version requirements in `spack style`
- [x] remove style tools from CI requirements (to ensure we test bootstrapping)
- [x] rework dependencies for `mypy` and `py-typed-ast`
- `py-typed-ast` needs to be a link dependency
- it needs to be at 1.4.1 or higher to work with python 3.9
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* Adding package for omegaconf
* Update var/spack/repos/builtin/packages/py-omegaconf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Changing py-omegaconf to use github source URL instead of pypi
* Style fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Worked with flecsi developers to tighten, relax, and clarify
constraints and better understand how the flecsi project uses
legion. In the process, discovered that flecsi@1.4 cannot be
built with legion without heavy changes/reverts to the legion
and gasnet spackages.
Also, most importantly, fixed branding as to how flecsi is spelled
We add compilation flags when using %nvhpc to suppress warnings
(which due to global -Werror flag in the build get promoted to
errors) for the following:
Diagnostic 111: statement is unreachable
Diagnostic 177: variable "foo" was declared but never referenced
Diagnostic 188: enumerated type mixed with another type
Diagnostic 550: variable "foo" was set but never used
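A sketch of how such flags can be injected from a package's `flag_handler` (the hook and return convention are standard Spack; the exact `--diag_suppress` spelling should be checked against the nvhpc docs):
```python
# Inside a Spack package class; sketch only.
def flag_handler(self, name, flags):
    if self.spec.satisfies("%nvhpc") and name in ("cflags", "cxxflags"):
        for diag_id in (111, 177, 188, 550):
            flags.append("--diag_suppress={0}".format(diag_id))
    return (flags, None, None)
```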
#24095 introduced a couple of bugs, which are fixed here:
1. The module path is computed incorrectly for bootstrapped clingo
2. We remove too many paths from `sys.path` in case of failures
z3 is a dependency of llvm and llvm-amdgpu, and when z3 python bindings
are enabled it depends on py-setuptools as a run dependency. That's
fine, except that py-setuptools now influences the hash of
llvm/llvm-amdgpu, which can be very annoying when another package
restricts the py-setuptools version -- you'll end up recompiling llvm
for no good reason :(.
* Updated the lbann package to not enable OpenMP in the BLAS package when
working on Darwin systems.
* Add the Sphinx RTD theme as an explicit dependency when building documentation
In some cases FindHDF5.cmake returned a wrong value for the HDF5 library names and path. For example, it returned hdf5-shared as library name without a search path, and without checking whether this is really an existing shared library. By adding HDF5_NO_FIND_PACKAGE_CONFIG_FILE=ON to the cmake options, the FindHDF5 module does not rely on a properly installed hdf5-config.cmake and thus searches for the library and its paths itself. This results in a usable return value and fenics works afterwards.
* Updates for dependencies in main branch
* Add more depends
* Make CMake available at runtime for fenics-dolfinx
* Add maintainer
Co-authored-by: Garth N. Wells <gnw20@cam.ac.uk>
Although `cpio` is present in many environments, it may not always be
available.
The failure to build this package can be reproduced in a fresh Docker
image `debian:10`.
* trilinos: rename basker variant
The Basker solver is part of amesos2 but is clearer without the extra
scoping.
* trilinos: automatically enable teuchos and remove variant
Basically everything in trilinos needs teuchos
* trilinos: group top-level dependencies
* trilinos: update dependencies, removing unused
- GLM, X11 are unused (x11 lacks dependency specs too)
- Python variant is more like a TPL so rearrange that
- Gtest internal package shouldn't be compiled or exported
- Add MPI4PY requirement for pytrilinos
* trilinos: remove package meta-options
- XSDK settings and "all opt packages" are not used anywhere
- all optional packages are dangerous
* trilinos: Use hwloc iff kokkos
See #19119, also the HWLOC tpl name was misspelled so this was being ignored before.
* Flake
* Fix trilinos +netcdf~mpi
* trilinos: default to disabling external dependencies
* Remove teuchos from downstream dependencies
* fixup! trilinos: Use hwloc iff kokkos
* Add netcdf requirements to packages with ^trilinos+exodus
* trilinos: disable exodus by default
* fixup! Add netcdf requirements to packages with ^trilinos+exodus
* trilinos: only enable hwloc when @13: +kokkos
* xyce: propagate trilinos dependencies more simply
* dtk: fix missing boost dependency
* trilinos: remove explicit metis dependency
* trilinos: require metis/parmetis for zoltan
Disable zoltan by default to minimize default dependencies
* trilinos: mark mesquite disabled and fix kokkos arch
* xsdk: fix trilinos to also list zoltan [with zoltan2]
* ci: remove nonexistent variant from trilinos
* trilinos: add missing boost dependency
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
Third-party Python libraries may be installed in one of several directories:
1. `lib/pythonX.Y/site-packages` for Spack-installed Python
2. `lib64/pythonX.Y/site-packages` for system Python on RHEL/CentOS/Fedora
3. `lib/pythonX/dist-packages` for system Python on Debian/Ubuntu
Previously, Spack packages were hard-coded to use (1). Now, we query the Python installation itself and ask it which to use. Ever since #21446 this is how we've been determining where to install Python libraries anyway.
Note: there are still many packages that are hard-coded to use (1). I can change them in this PR, but I don't have the bandwidth to test all of them.
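For reference, the query amounts to asking the interpreter itself, roughly (standard library only; the exact call Spack uses may differ):
```python
import sysconfig

# "purelib" is the platform-independent third-party directory: this prints
# .../dist-packages on Debian system Python, .../site-packages elsewhere.
print(sysconfig.get_path("purelib"))
```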
* Python: handle dist-packages and site-packages
* Query Python to find site-packages directory
* Add try-except statements for when distutils isn't installed
* Catch more errors
* Fix root directory used in import tests
* Rely on site_packages_dir property
* Change url and checksums for libpng to official sourceforge archives
* Update url scheme from http to https
* switch to .xz archives
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add py-h5py version 3.3.0
The mpi4py dependency was bumped to 3.0.2 in setup.py. I'm not sure if that's actually required or not, but nothing lower is still tested.
* Use environment variable to stop h5py using setuptools setup_requires feature
* Add myself as a maintainer for py-h5py
* [py-transformers] can now use newer versions of tokenizers
* [py-transformers] Added version 4.6.1
* [py-transformers] removing old patch
* [py-transformers] boto3 no longer needed
* first build of py-torchmeta
* updated versions for torchvision and torch
* [py-torchmeta] using pil provider
Co-authored-by: Sid Pendelberry <sid@rit.edu>
The Makefile for the MAGMA smoke tests uses pkg-config to find
the MAGMA compile flags, but the test() routine in the spack
package was not configured to provide the location of the
pkg-config file. This modification sets PKG_CONFIG_PATH correctly
to allow the smoketests to successfully compile. It also removes
the *_dir variables which were unused by the magma
examples/Makefile.
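Conceptually the fix amounts to something like this sketch (assumed layout, not the package's literal code):
```python
import os

def add_magma_pkgconfig(prefix):
    # Let pkg-config resolve magma.pc from the install prefix when the
    # smoke-test Makefile runs.
    pc_dir = os.path.join(prefix, "lib", "pkgconfig")
    os.environ["PKG_CONFIG_PATH"] = (
        pc_dir + os.pathsep + os.environ.get("PKG_CONFIG_PATH", "")
    )
```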
Using the original concretizer, trying to concretize py-jupyterlab fails
with
```
==> Error: Invalid Version range: 6.1.0:6.1
```
because py-tornado does not have a 6.1.0 version but only a 6.1 one.
Makefiles for libtirpc have hardcoded the -pipe flag to the compiler
nvhpc compilers do not recognize that flag.
This PR provides a patch to remove the -pipe flag from the Makefile.
Patch should work with libtirpc@1.2.6 and @1.1.4
jupyterlab was looking for its application directory inside the python
prefix instead of its own. This was fixed by setting the corresponding
environment variable.
* Permit to enable/disable bootstrapping and customize store location
This PR adds configuration handles to allow enabling
and disabling bootstrapping, and to customize the store
location.
* Move bootstrap related configuration into its own YAML file
* Add a bootstrap command to manage configuration
Spack allows users to set `padded_length` to pad out the installation path in
build farms so that any binaries created are more easily relocatable. The issue
with this is that the padding dominates installation output and makes it
difficult to see what is going on. The padding also causes logs to easily
exceed size limits for things like GitLab artifacts.
This PR fixes this by adding a filter in the logger daemon. If you use a
setting like this:
config:
install_tree:
padded_length: 512
Then lines like this in the output:
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_pla/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
will be replaced with the much more readable:
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/[padded-to-512-chars]/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
You can see that the padding has been replaced with `[padded-to-512-chars]` to
indicate the total number of characters in the padded prefix. Over a long log
file, this should save a lot of space and allow us to see error messages in
GitHub/GitLab log output.
The *actual* build logs still have full paths in them. Also lines that are
output by Spack and not by a package build are not filtered and will still
display the fully padded path. There aren't that many of these, so the change
should still help reduce file size and improve readability quite a bit.
015e29efe1 that introduced this section to the
documentation said “two” here instead of the actual count, three.
9f54cea5c5 then added a fourth, BLAS/LAPACK.
Rather than trying to keep this leading count in sync, this change just replaces
the wording with something more generic/stable.
Getting rid of another top-level file.
`coverage.py` has supported `pyproject.toml` since version 5.0, and
all versions of coverage so far work with python 2.7. We just need to
ensure that a version of coverage with the `toml` extra is installed
in the test environment.
I tested this with `coverage run`, `coverage report`, and `coverage html`.
* openPMD-api: rename develop
Rename to match known Spack version comparison schemes:
```
develop>main>master>head>trunk>9999>0>z>a
```
Currently, the hdf5 patch for versions before 0.14.0 is also applied to
`dev`, where it naturally fails (already applied).
* fix dev in warpx
* py-markupsafe: add 2.0.1
* Update var/spack/repos/builtin/packages/py-markupsafe/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This moves our `mypy` configuration from `.mypy.ini` to `.pyproject.toml`
and increases the minimum `mypy` version in the tests.
- [x] move `mypy` configuration to `pyproject.toml`
- [x] remove `.mypy.ini`
- [x] ensure that `mypy` version 0.900 or higher is used in tests
Ideally a test-only dependency won't be in the build, but until then,
cap the gtest requirement at 1.10.
See e4s job failure at https://gitlab.spack.io/spack/spack/-/jobs/349959 .
Looks like 1.11 introduces some breaking incompatibilities, so perhaps
we should transition later.
* fix remaining flake8 errors
* imports: sort imports everywhere in Spack
We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.
This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
* Fix compiler test
Use `self.spec.satisfies` on compiler to determine if a flag should be
applied or not. This approach avoids issues with the strings `gcc`
or `clang` appearing in the full path to the compiler executables, as
happens with spack-installed compilers (e.g. `nvhpc%gcc`).
* Limit compiler name search to last path component
@skosukhin pointed out that the cflag modification should happen for any
clang or gcc compiler, regardless of what compiler spec provides them.
This commit reverts to searching for a compiler name containing "gcc"
or "clang", but limits the search to the last path component, which
avoids matching spack-installed compilers built with gcc (e.g.
`nvhpc%gcc`), which will have "gcc" in the compiler path.
* Use `os.path` rather than `pathlib`
Co-authored-by: Paul Henning <phenning@lanl.gov>
`dateutil.parser` was an optional dependency for CVS tests. It was failing on macOS
because the dateutil types were not being installed, and mypy was failing *even when the
CVS tests were skipped*. This seems like it was an oversight on macOS --
`types-python-dateutil` was not installed there, though it was on Linux unit tests.
It takes 6 lines of YAML and some weird test-skipping logic to get `python-dateutil` and
`types-python-dateutil` installed in all the tests where we need them, but it only takes
4 lines of code to write the date parser we need for CVS, so I just did that instead.
Note that CVS date format can vary from system to system, but it seems like it's always
pretty similar for the parts we care about.
- [x] Replace dateutil.parser with a simpler date regex
- [x] Lose the dependency on `dateutil.parser`
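A minimal sketch of such a regex-based parser (the exact date format handled here is an assumption):
```python
import re
import time

def parse_cvs_date(line):
    # Matches e.g. "2021-07-01 12:34:56"; CVS date output varies by system,
    # but the parts we care about look like this almost everywhere.
    match = re.search(r"(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})", line)
    if not match:
        return None
    return time.struct_time([int(g) for g in match.groups()] + [0, 0, -1])
```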
Previous tests of `spack style` didn't really run the tools --
they just ensure that the commands worked enough to get coverage.
This adds several real tests and ensures that we hit the corner
cases in `spack style`. This also tests success as well as failure
cases.
This consolidates code across tools in `spack style` so that each
`run_<tool>` function can be called indirectly through a dictionary
of handlers, and so that checks like finding the executable for the
tool can be shared across commands.
- [x] rework `spack style` to use decorators to register tools
- [x] define tool order in one place in `spack style`
- [x] fix python 2/3 issues to get `isort` checks working
- [x] make isort error regex more robust across versions
- [x] remove unused output option
- [x] change vestigial `TRAVIS_BRANCH` to `GITHUB_BASE_REF`
- [x] update completion
This PR configures the spack docbook packages
- docbook-xsl
- docbook-xml
The public entities are now mapped to the locally installed files of the
respective packages. The example catalogs are left in place and
XML_CATALOG_FILES points to the newly created catalogs.
Perl keeps copies of the bzip2 and zlib source code in its own source
tree and by default uses them in favor of outside libraries. Instead,
put these dependencies under control of spack and tell perl to use the
spack-built versions.
* py-keyring: fix installation on linux
* Update var/spack/repos/builtin/packages/py-keyring/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-keyring/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We should not fail the generate stage simply due to the presence of
a broken-spec somewhere in the DAG. Only fail if the known broken
spec needs to be rebuilt.
This PR adds a context manager that permit to group the common part of a `when=` argument and add that to the context:
```python
class Gcc(AutotoolsPackage):
with when('+nvptx'):
depends_on('cuda')
conflicts('@:6', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada')
conflicts('languages=brig')
conflicts('languages=go')
```
The above snippet is equivalent to:
```python
class Gcc(AutotoolsPackage):
depends_on('cuda', when='+nvptx')
conflicts('@:6', when='+nvptx', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada', when='+nvptx')
conflicts('languages=brig', when='+nvptx')
conflicts('languages=go', when='+nvptx')
```
which needs a repetition of the `when='+nvptx'` argument. The context manager might help improve readability and permits grouping together directives related to the same semantic aspect (e.g. all the directives needed to model the behavior of `gcc` when `+nvptx` is active).
Modifications:
- [x] Added a `when` context manager to be used with package directives
- [x] Add unit tests and documentation for the new feature
- [x] Modified `cp2k` and `gcc` to show the use of the context manager
I installed curl on my mac and it picked up a homebrew (I think?)
installation of gsasl. A later system update broke git because of the
implicitly added dependency. Explicitly disabling libraries that *might*
exist on the system is the safe approach here.
```
dyld: Library not loaded: /usr/local/opt/gsasl/lib/libgsasl.7.dylib
Referenced from: /rnsdhpc/code/spack/opt/spack/apple-clang/curl/gag5v3c/lib/libcurl.4.dylib
Reason: image not found
error: git-remote-https died of signal 6
```
* Added Perl workaround for CUDA <= 8
* Re-wrapped comment
* Proofreading corrections
* Added a reference
* Do not override Perl include path
* Retrieve shell once
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* trilinos: add teko conflict
* trilinos: improve gotype variant
Instead of 'none' and 'long' typically being the same (but not for older
trilinos versions), add an explicit 'all' variant that only works for
older trilinos which supports multiple simultaneous tpetra
instantiations.
* trilinos: add self as maintainer
* trilinos: disable vendored gtest by default
This changes several conflicting variants to a single
multi-value variant, and uses conflicts instead of raising InstallError.
(With clingo, requesting +gui automatically selects features=huge!)
I have also rearranged the dependencies for clarity and simplified the
configure args.
ci: only write to broken-specs list on SpackError
Only write to the broken-specs list when `spack install` raises a SpackError,
instead of writing to this list unnecessarily when infrastructure-related problems
prevent a develop job from completing successfully.
If two Specs have the same hash (and prefix) but are not equal, Spack
originally had logic to detect this and raise an error (since both
cannot be installed in the same place). Recently this has eroded and
the check no longer works; moreover, when defining projections (which
may truncate the hash or other distinguishing properties from the
prefix) Spack was also failing to detect collisions (in both of these
cases, Spack would overwrite the old prefix with the new Spec).
This PR maintains a list of all "taken" prefixes: if a hash is not
registered (i.e. recorded as installed in the database) but the prefix
is occupied, that is a collision. This can detect collisions created
by defining projections (specifically when they omit the hash).
The PR does not detect collisions where specs have the same hash
(and prefix) but are not equal.
Fix the syntax of the conflict between numpy 1.21.0 and gcc11 so that the clingo
concretizer recognizes it.
In addition the upstream master branch was renamed to main.
* Switch hdf5 package from autotools to cmake.
* Add variant for building with zlib, default to ON.
* Update for format requirements.
* Format change.
* Fix breakage from last merge from develop.
Switch szip to use libaec (unrestricted encryption).
Remove 'static' variant: static libs will only be installed when
~shared.
* Improve args based on suggestions from pull request.
* Update code URL to github.com
Add/modify 4 depends_on lines to fix running "spack graph --deptype=link hdf5".
* Remove trailing whitespace.
* Remove dependencies added solely to make "spack graph --deptype=link" work.
* Add new version HDF5 1.8.22.
* Remove unnecessary java_check.
* Fix whitespace for style checks.
* Reverted zlib version dependency to 1.1.2:.
zlib variant removed.
api version default renamed "default".
* Remove blank line.
* Whitespace corrections.
* Removed unnecessary 'debug' variant.
* Fix typo in version number in conflict for '+szip'.
* Set default for tools variant to True.
Remove patch functions dependent on 'libtool' file that cmake doesn't
produce.
* Remove line to set ONLY_SHARED_LIBS to true.
Add post_install code to install only one version of tools with shared
linkage and original tool names.
* Remove trailing white space and import of glob package not used.
* Leave BUILD_TESTING set to default which is ON.
* Remove post_install code to install only one version of tools because
some dependent packages running tests in e4s testing are using
h5diff-shared. Keep both tools versions for now.
* No longer need to import os.
Instead of refusing to build +mpi with gcc10, add what I guess is now
the standard workaround, i.e., `-fallow-argument-mismatch`.
Getting this into pfunit's cmake-based but kinda non-standard build is
a bit ugly, but you gotta do what you gotta do...
Version 1.17 of DD4hep was renamed from "01-17-00" to "01-17", in line
with the naming conventions of previous releases. Since release archives
contain a subdirectory with the version string in it, this changes the contents
of the tarball ever so slightly, so the SHA-256 checksum must change as well.
Fix url to find newer versions, add newest version 4.0.2 and add
variants for
- cxxstd: To use a specific c++ standard
- static: Enable or disable build of static libraries
- boost: Boost support
- sqlite: SQLite support
- postgresql: PostgreSQL support
When having a few packages loaded, installing go-bootstrap will fail
because the `PATH` variable is truncated at 4096 bytes. Increase the
limit to 128 KiB to make longer paths fit.
1. "+simplex" conflicts with "dealii@:9.2" [The interface to simplex is supported from version 9.3.0 onwards. Please explicitly disable this variant via ~simplex]
2. "+arborx" conflicts with "dealii@:9.2" [The interface to arborx is supported from version 9.3.0 onwards. Please explicitly disable this variant via ~arborx]
Prior to any Spack build, Spack modifies PATH etc. to help the build
find the dependencies it needs. It also allows any package to define
custom environment modifications (and furthermore a package can
specify environment modifications to apply when it is used as a
dependency). If an external package defines custom environment
modifications that alter PATH, and the external package is in a merged
or system prefix, then that prefix could "override" the Spack-built
packages.
This commit reorders environment modifications so that PrependPath
actions which expose Spack-built packages override PrependPath actions
for custom environment modifications of external packages.
In more detail, the original order of environment modifications is:
* Modules
* Compiler flag variables
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH for dependencies
* Custom package.py modifications in the following order:
* dependencies
* root
This commit changes the order:
* Modules
* Compiler flag variables
* For each external dependency
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
* Custom modifications
* For each Spack-built dependency
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
* Custom modifications
Spack pipelines need to take specific actions internally that depend
on whether the pipeline is being run on a PR to spack or a merge to
the develop branch. Pipelines can also run in other repositories,
which represents other possible use cases than just the two mentioned
above. This PR creates a "SPACK_PIPELINE_TYPE" gitlab variable which
is propagated to rebuild jobs, and is also used internally to determine
which pipeline-specific tasks to run.
One goal of the PR is to fix an issue where rebuild jobs which failed on
develop pipelines did not properly report the broken full hash to the
"broken-specs-url".
* Add Externally Findable section to info command
* Use comma delimited detection attributes in addition to boolean value
* Unit test externally detectable part of spack info
yes I know this name isn't popular but that's the way it is right now.
master and the upcoming v5.0.x release branch use git submodules.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* [py-xxhash] created template
* [py-xxhash] working on dependencies
* [py-xxhash] set version for xxhash
* [py-xxhash] Final cleanup
- added homepage
- added description
- removed fixmes
* Force the Python interpreter with an env variable
This commit forces the Python interpreter with an
environment variable, to ensure that the Python set
by the "setup-python" action is the one being used.
Due to the policy adopted by Spack to prefer python3
over python we may end up picking a Python 3.X
interpreter where Python 2.7 was meant to be used.
* Revert "Update conftest.py (#24473)"
This reverts commit 477c8ce820.
* Make python-dateutil a soft dependency for unit tests
Before #23212 people could clone spack and run
```
spack unit-tests
```
while now this is not possible, since python-dateutil is
a required but not vendored dependency. This change makes
it not a hard requirement, i.e. it will be used if found
in the current interpreter.
* Workaround mypy complaint
This commit fixes a subtle bug that may occur when
a package is a "possible_provider" of a virtual but
no "provides_virtual" can be deduced. In that case
the cardinality constraint on "provides_virtual"
may arbitrarily assign a package the role of provider
even if the constraints for it to be one are not fulfilled.
The fix reworks the logic around three concepts:
- "possible_provider": a package may provide a virtual if some constraints are met
- "provides_virtual": a package meet the constraints to provide a virtual
- "provider": a package selected to provide a virtual
Spack packages can now fetch versions from CVS repositories. Note
this fetch mechanism is unsafe unless using :extssh:. Most public
CVS repositories use an insecure protocol implemented as part of CVS.
Here we are adding an install_times.json into the spack install metadata folder.
We record a total, global time, along with the times for each phase. The type
of phase or install start / end is included (e.g., build or fail)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
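For illustration, the recorded data might look roughly like this (field names here are assumed, not taken from the actual file):
```python
# Hypothetical shape of install_times.json, as a Python literal:
install_times = {
    "total": {"seconds": 84.3, "status": "success"},
    "phases": [
        {"name": "configure", "seconds": 31.7},
        {"name": "build", "seconds": 44.1},
        {"name": "install", "seconds": 8.5},
    ],
}
```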
The original implementation of `flag_handler` searched the
`self.compiler.cc` string for `clang` or `gcc` in order to add a flag
for those compilers. This approach fails when using a spack-installed
compiler that was itself built with gcc or clang, as those strings will
appear in the fully-qualified compiler executable paths. This commit
switches to searching for `%gcc` or `%clang` in `self.spec`.
Co-authored-by: Paul Henning <phenning@lanl.gov>
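A sketch of the spec-based check (the real package's flag logic differs in detail; the appended flag here is just for illustration):
```python
# Inside a Spack package class; sketch only.
def flag_handler(self, name, flags):
    # Detect the compiler family from the spec, not the executable path,
    # so e.g. nvhpc%gcc is not mistaken for gcc.
    if name == "cflags" and (
        self.spec.satisfies("%gcc") or self.spec.satisfies("%clang")
    ):
        flags.append("-fcommon")  # hypothetical flag
    return (flags, None, None)
```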
the 4.1.1 release has fixes for problems that kept 4.1.0 from
being the default open mpi version to build using spack.
related to #24396
Signed-off-by: Howard Pritchard <hppritcha@gmail.com>
* remove blueos check on cuda variant, fix typo
* restore necessary compiler guard
* remove axom+cuda from testing because it only partially works outside ppc systems
This PR does the following:
- adds version corresponding to commit at 08/03/2020
- adds missing get_DE_events.py script
- adds dependencies needed by get_DE_events.py
- removes REDItoolDenovo.py.patch and python2to3.patch in favor of
running 2to3 and reindent pre-build
- add batch_sort.patch to handle differences in string/char handling
between python2 and python3
- adds a variant for the Nature Protocol
- adds dependencies for the nature_protocol variant
- added myself as maintainer
This PR adds a new version of reditools from git.
This PR fixes a couple of issues with the opencv package, mostly in
relation to cuda. This is only focused on cuda, not any of the other
variants.
- Added versions to the contrib_vers list. Added for all that can be
retrieved from github. The one for the latest version was missing.
- Added a cmake patch for v3.2.0.
- Deprecated versions 3.1.0 and 3.2.0 as neither of those could be
built, with or without cuda.
- Adjusted constraints on applying initial cmake patch.
- Added cudnn dependency when +cuda.
- Set constraints for cudnn and cuda for older versions of opencv.
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build.
In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).
Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
Building magma has been failing consistently and is currently
blocking PRs from being merged. Disable that spec while we
investigate the failure and work on a fix.
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:
### Docker
Docker works by way of BUILDKIT and being able to specify --secret. What this means is that you can prefix a line with a mount of type secret as follows:
```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Where the id for one or more secrets corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined I would export them on my host and do:
```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
```
And when we add `env` to the secret definition that tells the build to look for the secret with id "st" in the environment variable `SPACKMON_TOKEN` for example.
If the user is building locally with a local spack monitor, we also need to set the `--network` to be the host, otherwise you can't connect to it (a la isolation of course.)
### Singularity
Singularity doesn't have as nice an ability to clearly specify secrets, so (hoping this eventually gets implemented) what I'm doing now is providing the user instructions to write the credentials to a file, add it to the container to source, and remove when done.
### Tags
Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):
```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update the PR here to not check if the attribute is there (it will be) or open another one in the case this PR is already merged.
Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing so I had to add it here).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Inline codecov annotations make the code hard to read, and they add annotations
in files that seemingly have nothing to do with the PR. Sadly, they add a whole
lot of noise and not a lot of benefit over looking at the PR on codecov. We
should just have people look at the coverage on codecov itself.
* New package: py-pyusb
Change-Id: I606127858b961b5841c60befc5a8353df0f9f38c
* fixup dependencies
Change-Id: I0c9b0ccee693d2c4e847717950d4ce64cb319794
* fixup 2
Change-Id: Ibaccbdafd865e363564f491054e4e4ceb778727b
* Update var/spack/repos/builtin/packages/py-pyusb/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
A patch no longer applies cleanly as it's fixed in v4.0.6 - fix it here
==> Installing openmpi-4.0.6-in47f6rxspbnyibkdx6x4ekg6piujobd
==> No binary for openmpi-4.0.6-in47f6rxspbnyibkdx6x4ekg6piujobd found: installing from source
==> Fetching https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.6.tar.bz2
Reversed (or previously applied) patch detected! Assume -R? [n]
Apply anyway? [n]
2 out of 2 hunks ignored -- saving rejects to file opal/include/opal/sys/gcc_builtin/atomic.h.rej
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
When running executables from build dependencies, we want to avoid having
`LD_PRELOAD` and `DYLD_INSERT_LIBRARIES` mix any of their shared libs built
by spack with system libraries.
The Z3 solver provides a Z3Config.cmake file when built using the CMake build
system. This submission changes the package build system to inherit the
CMakePackage type. In addition to changing the build system, this submission:
- Adds the GMP variant
- Removes v4.4.0 and v4.4.1 as CMake was implemented starting with v4.5.0
This adds a package for `irep`, a tool for reading `lua` input decks from
Fortran, C, and C++.
`irep` can be built with either `lua` or `luajit`. To address this, we also add
a virtual package for lua called `lua-lang`. `luajit` isn't, by default, a drop-in
replacement for `lua`, but we add a `+lualinks` variant to it that adds symlinks
that make it behave like `lua@5.1`. With this variant enabled, it provides the
`lua-lang` virtual. `lua` always provides `lua-lang`.
- [x] add `irep` package
- [x] add `+lualinks` variant to `lua-luajit`
- [x] create `lua-lang` virtual, provided by `lua` and `luajit+lualinks`
Co-authored-by: Kayla Richarda Butler <butler59@quartz1148.llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
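The virtual wiring, sketched in package terms (the directives are standard Spack; the class body is abbreviated):
```python
# Abbreviated sketch of the luajit side of the lua-lang virtual.
class LuaLuajit(Package):
    variant("lualinks", default=False,
            description="add symlinks to make luajit a drop-in lua replacement")

    # luajit only satisfies the virtual when the compatibility links exist.
    provides("lua-lang", when="+lualinks")
```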
* libdrm: fix one configure error and require libpciaccess
Failure with `LIBS`: the linker can't find `-lrt` so configure fails on
darwin-bigsur %apple-clang@12.0.5
```
>> 22 configure: error: in `/private/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-s
tage/spack-stage-libdrm-2.4.100-ofhk6m25n2pi427ihnxmvjkfmgyzlrqc/spack-src':
>> 23 configure: error: C compiler cannot create executables
24 See `config.log' for more details
See build log for details:
/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-stage/spack-stage-libdrm-2.4.100-ofhk6m25n2pi427ihnxmvjkfmgyzlrqc/spack-build-out.txt
```
* libpciaccess: Mark conflict with darwin
```
make[2]: *** [common_init.lo] Error 1
make[2]: *** Waiting for unfinished jobs....
common_interface.c:75:10: fatal error: 'sys/endian.h' file not found
^~~~~~~~~~~~~~
```
and
```
common_init.c:73:3: error: "Unsupported OS"
```
and others
* extending example for buildcaches
I was attempting to create a local build cache from a directory, and I found the
docs for both buildcaches and mirrors, but did not gather from the docs that the
url variable could be a local filesystem path. I am extending the docs for
buildcaches with an example of creating and interacting with one on the filesystem
because I suspect other users will run into this need and possibly not find what
they are looking for.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* adding as follows to spack mirror list
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* update url, add all new versions and fix installation
* add wxparaver package and set the old paraver package as deprecated
* remove update of deprecated package
* remove old version from new wxparaver
* Update url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
It is currently kind of confusing to the reader to distinguish spack buildcache install
and spack install, and it is not clear how to use a build cache once a mirror is added.
Hopefully this little bit of description can help (and I hope I got it right!)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Use the 'version_yearlike' attribute instead of 'version' to
check if the SPACK_COMPILER_EXTRA_RPATHS should be set to include
the built-in 'libfabrics'.
When using the bare 'version', the comparison is wrong when
building with 'intel-parallel-studio', which has the version
format '<edition>.YYYY.Nupdate', due to the leading '<edition>'.
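A sketch of the assumed behavior (not the package's literal code):
```python
def yearlike(version_string):
    # Drop a leading edition token such as "cluster" so comparisons start
    # at the year.
    parts = version_string.split(".")
    if parts and not parts[0].isdigit():
        parts = parts[1:]
    return ".".join(parts)

print(yearlike("cluster.2019.5"))  # -> "2019.5"
print(yearlike("2021.2"))          # -> "2021.2"
```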
xfsprogs currently does not install with error message:
FATAL ERROR: could not find a valid ini.h header.
Adding the libinih package, and including it as
a dependency for xfsprogs seems to fix the issue. It could be
that we only need to add it for newer versions (if it worked before)
and maybe a maintainer can comment on that.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Pagination on GitHub prevents spack from easily parsing all available
versions. Also, due to the recent migration to GitHub, tarballs for
versions up to 3.12.13 have been regenerated, changing the hash.
The current URL will apparently be supported, so we keep it, and give
the alternative one as a comment.
This should fix #24278
$INSTALLDIR/lib/python3.7/site-packages/IPython/core/events.py contains an
import from backcall even in @7.3.0, so dependency on py-backcall needs
to start earlier.
Restrict poppler version for texlive to poppler@:0.84
Should fix #19946
See also https://github.com/NixOS/nixpkgs/issues/79170
Looks like poppler@0.84 upgraded their header files to use the C++ cstdio
instead of the C stdio.h. Since TeX is using C, not C++, this causes problems.
* zfp: several package improvements
- add variants for build targets, language bindings, backends
- ensure selected variants are compatible with zfp version
- point to GitHub (not LLNL) tar balls
- add dependencies
- update link to homepage
- add maintainers
* zfp: address suggestions by Spack team
- use conflicts() instead of raising exceptions
- use define() and define_from_variant() where applicable
* Apply suggestions from code review
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Fix ZFP OpenMP build.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* [py-keyboard] created template
* [py-keyboard]
- updated homepage
- added dependency for OSX
- added description
- removed fixmes
* [py-keyboard] Until py-pyobjc can be created, specifying conflict with platform=darwin
* [py-keyboard] is verb
* Update of Flecsi Spackage
Update of flecsi spackage to reconcile differences between flecsi@1:1.9
and flecsi@2: for future support purposes
* Removing Unnecessary Conditional
Removing unused conditional. Initially the plan was to switch based on
version in `cmake_args` but this was not necessary as build system
variable names remained mostly the same and conflicts prevent the rest.
For the most part, if a variant is there it does not need to check
against what version of the code is being built.
* Updated CI To Reconcile Flecsi Changes
Updated CI to target flecsi@1.4.2 which best matches the previous
release version and reconciled change in variant name
The common.inc script in TBB uses the environment variable 'OS' to determine
the platform it's on. On Linux, this is normally empty and TBB falls
back to uname. But some systems set this to 'CentOS Linux 8' which is
descriptive, but not exactly what common.inc is looking for.
Instead, take the value from python and explicitly set OS to what TBB
expects to avoid this problem.
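Roughly, as a sketch (the exact value common.inc expects is an assumption here):
```python
import platform

# Inside the intel-tbb package class; sketch only.
def setup_build_environment(self, env):
    # Hand common.inc a plain uname-style name (e.g. "Linux") instead of
    # whatever descriptive string the host put in $OS.
    env.set("OS", platform.system())
```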
Since the two packages share a common history, the installation
procedure has been factored into a common base class.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* Tcl: fix TCLLIBPATH
* Fix TCL|TK|TIX_LIBRARY paths
* Fix TCL_LIBRARY, no tcl8.6 subdir
* Don't rely on os.listdir sorting
For tcl and tk, we also install the source directory, so there are
two init.tcl and tk.tcl locations. We want the one in lib/lib64,
which should come before the one in share.
* Add more patches
* Fix dylib on macOS
* Tk: add smoke tests
* Tix: add smoke test
Extracting specs for the result of a solve has been factored
as a method into the asp.Result class. The method accounts for
virtual specs being passed as initial requests.
Minimizing compiler mismatches in the DAG and preferring newer
versions of packages are now higher priority than trying to use as
many default values as possible in multi-valued variants.
According to the docs, r is needed for plotting, but plotting is
untested. In addition, the specific version requirement of java for gatk
could lead to multiple installations of r being triggered in an
environment. That might cause people to have to be deliberate about
java in a deployment. All in all, it seems that r is better as a
variant for gatk.
* Set job_id for SGE in darshan-runtime package
* Use a multi value variant for scheduler
Only one scheduler can be selected at a time, so make the variant
multi-valued and set multi=False.
* hdf-eos5: Fix issue when linking against hdf5+szip (#23411)
Should fix issue #23411 when linking against hdf5+szip
Also fix bug if hdf5 does not depend on zlib
Reluctantly added payerle as a maintainer
Added version 1.1.13
Fixed versions for dependencies based on README.md for package
In particular:
* versions 1.1.x require python@3, at least 3.4 and for 1.1.13 at least 3.6
* py-osqp had been pinned to version 0.4.1, but README.md either shows
no version restriction, or 0.4.1 and higher
* @1.1.13 requires at least 1.1.6 of py-scs
* I am assuming since 1.1.x is python@3 only, py-six no longer required
(it was not explicitly showing up in README.md for these versions)
Since the module roots were removed from the config file,
`--print-shell-vars` cannot find the module roots anymore. Fix it by
using the new `root_path` function. Moreover, the roots for lmod and
modules seems to have been flipped by accident.
* add versions 2.2.0.2 and 2.2.1.1
* Add maintainer
Added Ishaan as an additional maintainer, as he is also a maintainer of the Python bindings
* add new major precice version as dependency
The VALID_VERSION regex didn't check that the version string was
completely valid, only that a prefix of it was. This change ensures
the entire string represents a valid version.
This makes a few related changes.
1. Make the SEGMENT_REGEX identify *which* arm it matches by what groups
are populated, including whether it's a string or int component or a
separator all at once.
2. Use the updated regex to parse the input once with a findall rather
than twice, once with findall and once with split, since the version
components and separators can be distinguished by their group status.
3. Rather than "convert to int, on exception stay string," if the int
group is set then convert to int; if not, construct an instance of
the VersionStrComponent class (an illustrative sketch follows this list).
4. VersionStrComponent now implements all of the special string
comparison logic as part of its __lt__ and __eq__ methods to deal
with infinity versions and also overloads comparison with integers.
5. Version now uses direct tuple comparison since it has no per-element
special logic outside the VersionStrComponent class.
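An illustrative sketch of the comparison behavior described above (not Spack's actual implementation; the set of "infinity" names is assumed):
```python
INFINITY_VERSIONS = ("develop", "main", "master", "head", "trunk")  # assumed set

class VersionStrComponent:
    def __init__(self, string):
        self.data = string

    def __eq__(self, other):
        return isinstance(other, VersionStrComponent) and self.data == other.data

    def __lt__(self, other):
        if isinstance(other, int):
            # infinity names sort above any numeric component; ordinary
            # string components sort below numbers
            return self.data not in INFINITY_VERSIONS
        if isinstance(other, VersionStrComponent):
            self_inf = self.data in INFINITY_VERSIONS
            other_inf = other.data in INFINITY_VERSIONS
            if self_inf != other_inf:
                return other_inf  # finite string < infinity name
            return self.data < other.data
        return NotImplemented
```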
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* New Package:py-haphpipe@1.0.3
* removed llvm restriction & changed freebayes
* Style fix
* Removed pip, wheel, added url for deps list
* used proper gsutil naming
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* url src for deps, samtools fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* petsc: add hip variant
* libceed: add 0.8, disable occa by default, and let autodetect AVX
Disabling OCCA because backend updates did not make this release and
there are some known bugs so most users won't have reason to use OCCA.
https://github.com/CEED/libCEED/pull/688
* WIP: ceed: 4.0 release
* MFEM package updates (#19748)
* MFEM package updates
* mfem: flake8
* [mfem] Various fixes and tweaks.
[arpack-ng] Add a patch to fix building with IBM XL Fortran.
[libceed] Fix building with IBM XL C/C++.
[pumi] Add C++11 flag for version 2.2.3.
* [mfem] Fix the shared CUDA build.
Reported by: @MPhysXDev
* [mfem] Fix a TODO item
* [mfem] Tweak the AmgX dependencies
* [suite-sparse] Fix the version of the mpfr dependency
* MFEM: add initial HIP support using the ROCmPackage.
* MFEM: add 'slepc' variant.
* MFEM: update the patch for v4.2 for SLEPc.
* mfem: apply 'mfem-4.2-slepc.patch' just to v4.2.
* ceed: apply 'spack style'
* [mfem] Add a patch for mfem v4.2 to work with petsc v3.15.0.
[laghos] Add laghos version 3.1 based on the latest commit in
the repository; this version works with mfem v4.2.
[ceed] For ceed v4.0 use laghos v3.1.
* [libceed] Explicitly set 'CC_VENDOR=icc' when using 'intel'
compiler.
* [mfem] Allow pumi >= 2.2.3 with mfem >= 4.2.0.
[ceed] Use pumi v2.2.5 with ceed v4.0.0.
* [ceed] Explicitly use occa v1.1.0 with ceed v4.0.0.
Use mfem@4.2.0+rocm with ceed@4.0.0+mfem+hip.
* [ceed] Add NekRS v21 as a dependency for ceed v4.0.0.
* [ceed] Fix NekRS version: 21 --> 21.0
* [ceed] Propagate +cuda variant to petsc for ceed v4.0.
* [mfem] Propagate '+rocm' variant to some other packages.
* [ceed] Use +rocm variant of nekrs instead of +hip.
* [ceed] Do not enable magma with ceed@4.0.0+hip.
* [libceed] Fix hip build with libceed@0.8.
* [laghos] For v3.1, use the release .tar.gz file instead of commit.
* Remove cuda & hip variants as they are inherited
* [ceed] Remove comments and FIXMEs about 'magma+hip'.
* [ceed] [libceed] Remove TODOs about occa + hip.
* libceed: use ROCmPackage and +rocm
* petsc: use ROCmPackage for HIP
* libceed, petsc: use CudaPackage
* ceed: forward cuda_arch and amdgpu_target
* [mfem] Use Spack's CudaPackage as a base class; as a result,
'cuda_arch' values should not include the 'sm_' prefix.
Also, propagate 'cuda_arch' and 'amdgpu_target' variants
to enabled dependencies.
* petsc: variant is +rocm, package name is hip
Co-authored-by: Jed Brown <jed@jedbrown.org>
Co-authored-by: Thilina Rathnayake <thilinarmtb@gmail.com>
Passing absolute paths from pipeline generate job to downstream rebuild jobs
causes problems when the CI_PROJECT_DIR is not the same for the generate and
rebuild jobs. This has happened, for example, when gitlab checks out the
project into a runner-specific directory and different runners are chosen
for the generate and rebuild jobs.
* ensure that the stage root exists for `spack stage -p <PATH>`
* add test to verify `spack stage -p <PATH>` works!
* move out shared tmp staging path setup to a fixture to fix the test
* Simplified the spack.util.gpg implementation
All the classes defined in this Python module,
which were previously used to construct singleton
instances, have been removed in favor of four
global variables. These variables are initialized
lazily, like before.
The API of the module has been unchanged for the
most part. A few tests have been modified to use
the new global names.
1. add version 2021.05.15.
2. add patch to build old revs with gcc 11.x; version 2021.05.15
already has the patch integrated. Fixes #23667.
3. add variant +debug to build unoptimized, debug version.
4. add variant +viewer to include hpcviewer and add viewer path to
hpctoolkit module.
5. add dependency on memkind to workaround a glibc problem found on
some Cray platforms.
The buildcache force overwrite option does not work for me. It tries to
delete a file, but errors with a KeyError, apparently because the
leading / has to be removed.
* util.tty.log: read up to 100 lines if ready
Rework to read up to 100 lines from the captured stdin as long as data
is ready to be read immediately. Adds a helper function to poll with
`select` for ready data; a minimal sketch of the loop follows this list.
This showed a roughly 5-10x perf improvement for high-rate writes
through the logger with relatively short lines.
* util.tty.log: Defer flushes to end of ready reads
Rather than flush per line, flush per set of reads. Since this is a
non-blocking loop, the total perceived wait is short.
* util.tty.log: only scan each line once, usually
Rather than always find all control characters then substitute them all,
use `subn` to count the number of control characters replaced. Only if
control characters exist find out what they are. This could be made
truly single pass with sub with a function, but it's a more intrusive
change and this got 99%ish of the performance improvement (roughly
another 2x in some cases).
* util.tty.log: remove check for `readable`
Python < 3 does not support a readable check on streams, should not be
necessary here since we control the only use and it's explicitly a
stream to be read.
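A minimal sketch of the select-based "read while ready" loop from the first item above; helper names are illustrative, not Spack's actual ones:
```python
import select

def _ready_to_read(stream, timeout=0):
    """Poll with select(); True if data is available right now."""
    readable, _, _ = select.select([stream], [], [], timeout)
    return bool(readable)

def read_ready_lines(stream, max_lines=100):
    """Read up to max_lines as long as data is immediately available."""
    lines = []
    while len(lines) < max_lines and _ready_to_read(stream):
        line = stream.readline()
        if not line:  # EOF
            break
        lines.append(line)
    return lines  # caller flushes once per batch, not once per line
```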
* e4s ci: enable full e4s
* add llvm-amdgpu to list of specs needing an xlarge tagged runner
* comment out qt and qwt because of intermittent build failures
* remove +rocm specs because rocblas job consistently fails due to infrastructure
* qt: skip multimedia when ~opengl
On 5.9 on macOS the multimedia option causes build errors; on other
platforms and versions it should probably be assumed inoperative anyway.
* qt: Omit flags when disabling multimedia
```
ERROR: Unknown command line option '-no-pulseaudio'.
```
* Work around another qt@5.9 error
* qt: Fix build error on darwin
This PR allows users to `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated that include a warning that this usually does not
need to be done.
This addresses an issue brought up in slack, and also represented in #14721.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* New Package:py-ucsf-pyem
* Dep additions, run env deletion
* extraction step change
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
### Overview
The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes here which aim to improve reproducibility are:
1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts. This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.
To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:
- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build
#### Some highlights
One consequence of this change will be much smaller pipeline yaml files. By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.
Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace. With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows. If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.
There are some changes to be aware of in how pipelines should be set up after this PR:
#### Pipeline generation
Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile. This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts. If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:
```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
+ --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+ - "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags: ["spack", "public", "medium", "x86_64"]
interruptible: true
```
Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`. This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.
#### Rebuild jobs
Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts. When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs. The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`. The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.
When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment. If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate. No other changes to existing custom rebuild scripts should be required as a result of this PR.
As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI. The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`. If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:
```
To reproduce this build locally, run:
spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```
When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you can copy-paste to run a local reproducer of the CI job.
This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.
This PR relies on
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
* embree: allow for compiling with gcc 7.3
strip out unsupported -mprefer-vector-width=256
* embree: fix build on AMD CPUs
The ISAs that embree is compiled for have to match the CPU
features enabled by the compiler, as embree derives the ISA
that it compiles for from the latter.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.
CloudFront is 16x faster (or more) than the old bucket.
- [x] change mirror to https://mirror.spack.io
* New package:py-coveralls
* dep fixes
* added python constraint
* pyyaml version constraint
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- [x] add `in_buildcache` field to DB records to indicate what parts of an index,
which includes roots and dependencies, are in the buildcache.
- [x] add `mark()` method to DB for setting values on single nodes of the DAG.
This also fixes the build with %gcc@11:. According to upstream, the
proper solution is to disable -Werror=array-bounds since the stable
branch will not receive a patch for newer compilers.
* Update py-pint and fix runtime dependency on setuptools
Without the runtime dependency on setuptools, importing pint yields:
0.11: `ModuleNotFoundError: No module named 'pkg_resources'`
0.17: `ModuleNotFoundError: No module named 'packaging'`
* Fix
* Address comments
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a spack blame --json argument,
along with developer docs for it
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
This work will come in two phases. The first here is to allow saving of a local result
with spack monitor, and the second will add a spack monitor command so the user can
do spack monitor upload.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently if one package does `depends_on('pkg default_library=shared')`
and another does `depends_on('pkg default_library=both')`, you'd get a
concretization error.
With this PR one package can do `depends_on('pkg default_library=shared')`
and another does `depends_on('pkg default_library=static')`, and it would concretize to
`pkg default_library=shared,static`
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Package update to version 1.0.2
* switched submodule boolean to string
* switched from string to bools
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- Changed to cmake package with backward compatibility with older
makefile
- Removed unused cmake variable 'blas_blas_libs'
- Added new version 5.2.2, which changes to an external blas variable
- Remove unused tcsh dependency
- Change URL to use git repository for current and future versions
- Add older 4.2 version
- Add conflict for older versions with apple-clang
This adds RHEL8's `/usr/libexec/platform-python` to Spack's list of preferred
pythons. It will only be used if no other `python` is available in the `PATH`.
We have been testing with this python for a while now, and it seems to do all
that we need. If Spack one day isn't able to work with it, we'll take it out,
but for now it is useful to allow Spack to be used on RHEL8 without a dedicated
`python` installation.
Spack doesn't require users to manually index their repos; it reindexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.
When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`). That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations. It's a win now to make `Repo.exists()` check files directly (a sketch follows the checklist below).
**Fix:**
This PR does a number of things to speed up `spack load`, `spack info`, and other commands:
- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
(avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
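A sketch of the direct check, assuming a repo laid out as `<root>/packages/<name>/package.py` (attribute names are illustrative, not Spack's actual internals):
```python
import os

def exists(self, pkg_name):
    # Stat one well-known path instead of consulting the full index
    # checker, which stats every package file in the repository.
    filename = os.path.join(self.packages_path, pkg_name, "package.py")
    return os.path.exists(filename)
```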
The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).
Modifications:
- [x] When concretising together an environment the ASP-based solver calls directly its `solve` method rather than constructing a temporary fake root package.
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks though it is unnecessary to use such functions, since
we don't care to bind a custom name to a module nor we have to load
it from an unknown location.
This PR thus modifies spack.hook in the following ways:
- Use __import__ instead of spack.util.imp.load_source (this
addresses #20005; a sketch of the import-based loading follows this list)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Marked with a leading underscore all the names that are supposed
to stay local
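A sketch of the `__import__`-based loading, assuming hooks live in the `spack.hooks` package (the helper name is hypothetical):
```python
import sys

def _load_hook_module(name):
    # Import by real dotted name so Python's own machinery handles
    # sys.modules registration and local imports correctly.
    full_name = "spack.hooks." + name
    __import__(full_name)
    return sys.modules[full_name]
```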
fixes #22786
Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.
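A sketch of suppressing that output during fact generation; the shape of the `optimization_flags` call is illustrative:
```python
import contextlib
import io

def optimization_flags_quietly(target, compiler_name, compiler_version):
    # Swallow whatever the flag-detection machinery prints to stderr;
    # those warnings are not actionable while grounding solver facts.
    with contextlib.redirect_stderr(io.StringIO()):
        return target.optimization_flags(compiler_name, compiler_version)
```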
We remove system paths from search variables like PATH and
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages
may be installed to prefixes that are not actually system paths
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
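A sketch of the reordering, with hypothetical names:
```python
def externals_last(search_paths, external_prefixes):
    # Stable partition: Spack-built prefixes keep their order up front,
    # external prefixes are appended so they cannot shadow them.
    externals = set(external_prefixes)
    builtin = [p for p in search_paths if p not in externals]
    external = [p for p in search_paths if p in externals]
    return builtin + external
```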
If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.
This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.
This makes a similar change to spack cd.
If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).
Co-authored-by: Gregory Becker <becker33@llnl.gov>
fixes #22294
A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
custom install tree not being taken into account if clingo
had to be bootstrapped.
This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.
This led to unexpected results like preferring to build a
new version of an external if the external version was
older.
* clingo: modify recipe for bootstrapping
Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping
* Remove option that breaks on linux
* Give more hints for the current Python
* Disable CLINGO_BUILD_PY_SHARED for bootstrapping
* bootstrapping: try to detect the current python from std library
This is much faster than calling external executables
* Fix compatibility with Python 2.6
* Give hints on which compiler and OS to use when bootstrapping
This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.
* Use spec_for_current_python to constrain module requirement
(cherry picked from commit d5fa509b07)
* ASP-based solver: avoid adding values to variants when they're set
fixes #22533, fixes #21911
Added a rule that prevents any value from slipping into a variant when the
variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.
* Ensure disjoint sets have a clear semantics for external packages
fixes #22547
SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.
In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.
This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.
We add one special case for this: dependencies of externals.
We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.
- [x] change asp.py so that this can't happen -- we now only generate
dependency types for possible dependencies.
This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.
Given some directives in a package like:
```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```
We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:
```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
attr(Name, Arg1) : required_dependency_condition(ID, Name, Arg1);
attr(Name, Arg1, Arg2) : required_dependency_condition(ID, Name, Arg1, Arg2);
attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
dependency_condition(ID, Parent, Dependency);
node(Parent).
```
And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:
```prolog
attr(Name, Arg1, Arg2) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2).
attr(Name, Arg1, Arg2, Arg3) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```
These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.
Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:
```prolog
condition_holds(ID) :-
condition(ID);
attr(Name, A1) : condition_requirement(ID, Name, A1);
attr(Name, A1, A2) : condition_requirement(ID, Name, A1, A2);
attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).
attr(Name, A1) :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```
this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:
```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```
The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:
```prolog
dependency_holds(Package, Dependency, Type) :-
dependency_condition(ID, Package, Dependency),
dependency_type(ID, Type),
condition_holds(ID).
```
Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:
```python
def package_dependencies_rules(self, pkg, tests):
"""Translate 'depends_on' directives into ASP logic."""
for _, conditions in sorted(pkg.dependencies.items()):
for cond, dep in sorted(conditions.items()):
condition_id = self.condition(cond, dep.spec, pkg.name) # create a condition and get its id
self.gen.fact(fn.dependency_condition( # associate specifics about the dependency w/the id
condition_id, pkg.name, dep.spec.name
))
# etc.
```
- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals
This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.
(cherry picked from commit 413c422e53)
* Allow the bootstrapping of clingo from sources
Allow python builds with system python as external
for MacOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users from setting the install tree root to the bootstrap store
* clingo: documented how to bootstrap from sources
Co-authored-by: Gregory Becker <becker33@llnl.gov>
(cherry picked from commit 10e9e142b7)
Bash has a builtin `fc` that will override the compiler if you use "fc",
so it's better to use the full spack-supplied compiler path.
Additionally, the filter regex in the docs was wrong: it replaced the
entire assignment operation with the RHS.
* py-kubernetes: add new package
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-kubernetes: remove alpha/beta versions, fix dependency types
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR updates the abinit package. The underlying build system has
several changes from previous versions, which are reflected in the
package recipe.
- added version 9.4.2
- removed commented out code
- add new libxml2 variant, with dependency and conflicts
- add dependency on atompaw
- depend on fftw-api when ~openmp
This allows other fftw implementations to be used. This PR adds MKL.
- depend on netcdf explicitly
- remove hdf5 variant as hdf5 is required
- only use wannier90 if +mpi as the wannier90 spack package is MPI only
- allow newer versions of libxc for abinit 9
- split configure options for versions before and after abinit 9
- always use MPI compiler wrappers
- add patch to remove march settings for version 9
- Set conflict for fftw~openmp if abinit+openmp
This allows the virtual fftw-api to be used for the dependency. If fftw
is the fftw-api provider then bail if fftw~openmp is set when
abinit+openmp is used.
- Set conflicts for +openmp and mkl
- Be explicit about +mkl for intel-parallel-studio
- Add TODO entry for switching conflicts/depends_on logic
* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo
package builds in Release mode
uses GCC options to link libstdc++ and libgcc statically
* clingo-bootstrap: apple-clang options to bootstrap statically on darwin
* clingo: fix the path of the Python interpreter
In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.
This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.
* clingo: the commit for "spack" version has been updated.
Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.
- [x] make `master` the default so we don't have to keep telling people
to install `clingo@master`. We'll update the preferred version when
there's a new release.
* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
the specs to be fetched, even when in an environment
* now: if no specs are provided to `spack fetch`, we check
whether an environment is active and, if so, fetch all of its
uninstalled specs.
* Update pylint to 2.8.2
* Update var/spack/repos/builtin/packages/py-pylint/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Address comments
* Update
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.
This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated
(cherry picked from commit 4558dc06e2)
The context manager can be used to swap the current
configuration temporarily, for any use case that may need it.
(cherry picked from commit 553d37a6d6)
The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.
Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.
Make a few adjustments to MockPackageMultiRepo, since it was
stating in the docstring that it was supposed to mock
spack.repo.Repo and was instead mocking spack.repo.RepoPath.
(cherry picked from commit 1a8963b0f4)
The clingo-cffi job has two issues to be solved:
1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/
The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".
The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).
For the time being run the tests in a container. Switch back to
PyPI once a new official version of clingo is released.
* Support clingo when used with cffi
Clingo recently merged in a new Python module option based on cffi.
Compatibility with this module requires a few changes to Spack: it does not automatically convert strings/ints/etc. to `Symbol`, and `clingo.Symbol.string` throws on failure. Changes (a sketch of the conversion follows):
- manually convert str/int to clingo.Symbol types
- catch stringify exceptions
- add job for clingo-cffi to Spack CI
- switch to potassco-vendored wheel for clingo-cffi CI
- on_unsat argument when cffi
(cherry picked from commit 93ed1a410c)
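A sketch of the explicit conversion using the public `clingo.Number`/`clingo.String` constructors (the helper name is hypothetical):
```python
import clingo

def to_symbol(value):
    # The cffi-based module does not coerce plain Python values itself,
    # so wrap ints and strings explicitly before passing them to clingo.
    if isinstance(value, bool):
        return clingo.String(str(value))  # keep bools out of Number()
    if isinstance(value, int):
        return clingo.Number(value)
    return clingo.String(str(value))
```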
* Improve error message for inconsistencies in package.py
Sometimes directives refer to variants that do not exist.
Make it such that:
1. The name of the variant
2. The name of the package which is supposed to have
such a variant
3. The name of the package making this assumption
are all printed in the error message for easier debugging.
* Add unit tests
(cherry picked from commit 7226bd64dc)
The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.
Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.
(cherry picked from commit ba42c36f00)
* Modification to R environment
This PR modifies how the R environment is presented, and fixes
installing the standalone Rmath library.
- The Rmath build and install methods are combined into one
- Set parallel=False when installing Rmath
- remove the run environment that set up variables for libraries and
headers that are not really needed, and pollute the environment.
* Add setup_run_environment back
- Add back the setup_run_environment with LD_LIBRARY_PATH and
PKG_CONFIG_PATH.
- Adjust documentation to reflect the current code.
The previous `gasnet` spack package was not vetted/approved by the GASNet library maintainers. This one is.
Notably adds build-time testing and smoke-testing.
Converting the network variants into a multi-valued `conduits` variant has the minor advantage of enabling a concise `conduits=none` spec, but the major drawback that it degrades the `spack info gasnet` output.
* py-lazyarray: add new version 0.3.2
Change-Id: Ie8a40f3ff1fe7477e27f6085b9ad6673395258b2
* fixup dependencies
Change-Id: I4b2fb7a0abb462f8df74c383c67517065cd95b67
* Update var/spack/repos/builtin/packages/py-lazyarray/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-batchspawner
Change-Id: I508bad7ba7f1fc32c2f6c0bfccf35d864cf47ced
* fixup
Change-Id: If183933ce40a8d12214ea24acc683cb046fcfbcb
* fix broken version
Change-Id: Ie4dd8d18465877cd8f9cb862112af37d85b1c30f
* fixup license
Change-Id: I51d92a6d229f6a6b56eea6e53c65ed31fe59f6af
* Update var/spack/repos/builtin/packages/py-batchspawner/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Example replacement:
```
'-D(\w+)(:BOOL)?=\{0\}'\.\s*format\s*\(\s*'(ON|YES|true|TRUE)' if '\+(\w+)' in (self\.)?spec else '(OFF|NO|false|FALSE)'\)
```
with
```
self.define_from_variant('\1', '\4')
```
This will cause failures if any variants were misspelled: I have already caught two packages with nonexistent variants.
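For instance, with a hypothetical `foo` variant controlling an `ENABLE_FOO` CMake option, the rewrite looks like:
```python
# Before: hand-rolled flag formatting
args.append("-DENABLE_FOO:BOOL={0}".format(
    "ON" if "+foo" in self.spec else "OFF"))

# After: derive the flag from the variant
args.append(self.define_from_variant("ENABLE_FOO", "foo"))
```
`define_from_variant` also fails loudly when the named variant does not exist, which is how the misspellings surface.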
Spack uses curl to fetch URL resources. For locally-stored resources
it uses curl's file protocol; when using this protocol, curl expects
that the URL encoding conforms to RFC 3986 (which reserves characters
like '?' and '=' for special use).
We were not performing this encoding, and found a resource where
curl was interpreting this in an unfavorable way (succeeding, but
producing an empty file). This commit properly encodes URLs when
using curl's file protocol.
This error did not likely come up before because in most contexts
Spack was either fetching via http or it was using URLs without
offending characters (for example, the sha-based URLs in mirrors
never contain these characters).
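A sketch of the encoding step using only the standard library; the helper name is hypothetical:
```python
import urllib.parse

def file_url(path):
    # Percent-encode RFC 3986 reserved characters ('?', '=', ...) so
    # curl's file:// protocol does not give them special meaning.
    return "file://" + urllib.parse.quote(path)

# file_url("/mirror/pkg?x=1.tar.gz") -> "file:///mirror/pkg%3Fx%3D1.tar.gz"
```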
* Add versions 1.9.4 and 1.9.4.1 for cbtf-* packages
* Add versions 2.4.2 and 2.4.2.1 for openspeedshop packages
* Remove older versions
* Switch from generic dependency on elf to a dependency on the
elfutils implementation for cbtf-* and openspeedshop packages
* For llvm-openmp-ompt, relax dependency on libelf to elf (cbtf-krell
now depends on elfutils and on llvm-openmp-ompt, so unless this
dependency is relaxed there would be a conflict)
* Update CMake build_type to support Debug, Release, RelWithDebInfo
in cbtf-* and openspeedshop packages
* Update libmonitor patches when building as a dependency of
cbtf-krell
Pass -ef to the cce fortran compiler, fix the build system to use the correct openmp flag for CCE
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
This also changes the checksum for 1.22.1 because I switched the package
to use the proper upstream tarballs to get rid of the autotools
dependencies. Moreover, a few dependencies were missing. netdata also
requires a few directories to be created in its prefix to actually work.
- [x] `analyze` isn't commonly used; move it to long help
(`spack -H` vs `spack -h`). Give it its own section.
- [x] make it clear from `spack -h` that `spack module` can generate
module files
- [x] shorten help for `spack style`
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
Simplify logic by just enabling or disabling fsync as user specified
(default to off currently). Also remove the 4.1 version check, since
that version isn't actually supported in here.
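A sketch of the simplified recipe fragment, assuming an AutotoolsPackage-style package (Spack's `enable_or_disable` helper maps a boolean variant to `--enable-fsync`/`--disable-fsync`):
```python
variant("fsync", default=False, description="Enable fsync support")

def configure_args(self):
    # --enable-fsync or --disable-fsync, straight from the variant
    return self.enable_or_disable("fsync")
```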
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
The implementation for __str__ has been simplified to traverse the spec directly,
and no longer calls the flat_dependencies method. Dead code has been
removed.
For configure (e.g. for hdf5) to pass, this option needs to be pulled out when invoked in ccld mode.
I thought it had fixed the issue but I still saw it after that. After some digging, my guess is that I was able
to get hdf5 to build with ifort instead of ifx. Lot of overlapping changes occurring at the time, as it were.
There are still outstanding issues building hdf5 with ifx, and Intel is looking into what appears to be a
compiler bug, but this manifests during build and is likely a separate issue.
I have verified that making the edit in 'ccld' mode removes the -loopopt=0 and enables hdf5 to pass
configure. It should be fine to make the edit in 'ld' mode as well, but I have not tested that and didn't
include an -or- condition for it.
Add new release of SEACAS.
Update netcdf-c version to a recent release which fixes some issues that have caused problems in the past
Use release version of CGNS instead of develop
* Update Nalu-Wind to remove SuperLU from Trilinos requirement. Also simplify Nalu-Wind package.
* Leave boost option in nalu-wind.
* Add git branches into TPL requirements. Update OpenFAST for change to main branch.
Currently, environment views blink out of existence during the view regeneration, and are slowly built back up to their new and improved state. This is not good if other processes attempt to access the view -- they can see it in an inconsistent state.
This PR makes environment view updates atomic. This requires a level of indirection (via symlink, similar to nix or guix) from the view root to the underlying implementation on the filesystem.
Now, an environment view at `/path/to/foo` is a symlink to `/path/to/._foo/<hash>`, where `<hash>` is a hash of the contents of the view. We construct the view in its content-keyed hash directory, create a new symlink to this directory, and atomically replace the symlink with one to the new view.
This PR has a couple of other benefits:
* It future-proofs environment views so that we can implement rollback.
* It ensures that we don't leave users in an inconsistent state if building a new view fails for some reason.
For background:
* there is no atomic operation in POSIX that allows a non-empty directory to be replaced.
* There is an atomic `renameat2` in the linux kernel starting in version 3.15, but many filesystems don't support the system call, including NFS3 and NFS4, which makes it a poor implementation choice for an HPC tool, so we use the symlink approach that other tools like nix and guix have used successfully (a sketch follows).
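A sketch of the atomic swap, assuming POSIX rename semantics (names are illustrative):
```python
import os

def atomic_update_symlink(target, link_name):
    # Build the new symlink under a temporary name, then rename() it over
    # the old one; rename is atomic, so readers see either the old view
    # or the new one, never a partially built directory.
    tmp = "%s.%d.tmp" % (link_name, os.getpid())
    os.symlink(target, tmp)
    os.rename(tmp, link_name)
```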
* Added the option to use high performance linkers: gold and lld, for
LBANN. Including them as build flags causes unnecessary propagation
to all dependent packages, reducing package reuse.
fixes #22351
The ASP-based solver now accounts for the presence
in the DAG of deprecated versions and tries to minimize
their number at highest priority.
* gobject-introspection: fix for Python 3.9.
* Fixes the too long line formatting issue.
* gobject-introspection: limits the scope of the patch
Co-authored-by: Robert Mijakovic <robert.mijakovic@lxp.lu>
Variants explicitly set in an abstract root spec are considered
as defaults for the package they refer to, and they override
what is in packages.yaml and in package.py. This is relevant
only for multi-valued variants, where a constraint may extend
an already default value.
* Fixes to flex
- Prefer the version that doesn't need all the patches and extra build
tools
- Make dependency on gettext optional under the nls variant (off by
default)
- Drop the dependency on help2man if we don't have to regenerate the man
pages (when no patches are necessary)
* Bring back gettext dep as it is used during autoconf
The code for guessing the cpu architecture based on craype module names got confused,
at least on LLNL RZ prototype systems. In particular, a (L) or (D) at the end of a craype-x86-xxx or other
cpu architecture module was getting the logic confused.
With this patch, any whitespace and the remaining characters in the module name are removed.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
There have been a lot of questions and some confusion recently surrounding Spack installation test capabilities so this PR is intended to clean up and refine the documentation for "Checking an installation".
It aims to better distinguish between checks that are performed during an installation (i.e., build-time tests) and those that can be done days and weeks after the software has been installed (i.e., install (or smoke) tests).
* Enhancing package gmsh with more options, new version
* Enhancing package gmsh, url from https
* Enhancing package gmsh, following reviewer 1
* Improving package gmsh from reviewer
* Adding MED dependency
* Removing env variables and unused dependency (netgen/tetgen)
`flag_handler` currently passes all flags via injection. This makes it
impossible to override the default flags provided by autotools (for
instance, `binutils cflags='-O2'` will still build with `-O2 -g`).
Instead, use injection for our workaround flags and pass other flags to
the build system.
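A sketch of that split, using the `(injected, environment, build_system)` triple that Spack `flag_handler` implementations return; the particular workaround flag is illustrative:
```python
def flag_handler(self, name, flags):
    # Inject only our known workaround flags via the compiler wrapper;
    # hand everything else to the build system so user-supplied flags
    # (e.g. cflags='-O2') can actually override the autotools defaults.
    workarounds = [f for f in flags if f in ("-fcommon",)]
    rest = [f for f in flags if f not in workarounds]
    return (workarounds, None, rest)
```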
When we first merged the ASP-based solver, unit-tests
were run in a Docker container with root permissions
and that was preventing a few tests to succeed.
Since some time though, clingo is tested as a regular
user within Github Actions VMs, so we should start to
run checks again.
* geos: Fix config issues with python bindings using python3 (#23479)
This should fix some config issues when building geos with python
bindings and using python3 --- the geos configuration scripts had
a few python2-isms.
I only tested (lightly; geos built and I can import geos in python3)
on 3.8.1, but I did check that the patch can at least be applied
in 3.5.
I belatedly discovered that geos dropped all the SWIG bindings
in @3.9, so I also added some conflicts on the +python and +ruby
options to note that they are not supported in 3.9.
* geos: adding omitted patch file
In an active concretized environment, support installing one or more
cli specs only if they are already present in the environment. The
`--no-add` option is the default for root specs, but optional for
dependency specs. I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed. In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
Added the checksum for 4.1.2 and 4.2.0
The `parallel` variant had the exact same behavior as the `mpi` variant, but the two had different default values. Both variants set the value of `-DCGNS_ENABLE_PARALLEL`, so it was unclear which variant was "winning", which could definitely result in a non-intuitive build. Did a grep of the spack packages and none of them were using the `parallel` variant to control the cgns options. Retained the `mpi` variant as that one is being used by multiple packages.
One issue that remains to be solved is that the default integer size has changed from 32-bit to 64-bit for the 4.2.0 release. This is controlled by the `int64` variant which currently defaults to `OFF`. There should maybe be some thought about changing the default to match the default of the current release, or maybe having a version-specific default... For now, left the behavior as it has been for previous versions.
The patch available in spack does not apply
cleanly to the 4.1.1 and presumably later releases.
See Open MPI commit b8a8096a3f153380f95af8f285f48e926eb18bf1
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
SILO has optional support for compression libraries that require
C++ (hzip and fpzip). This patch exposes those options as variants
to enable configuration of SILO without the C++ libraries for C
applications. hzip and fpzip are enabled by default to preserve
current behavior.
Like compilers, targets now try to minimize
mismatches, instead of maximizing matches.
Deduction of mismatches is reworked to be
the opposite of a match, since computing
that is faster.
* py-keras: new version
* Adds missing dependencies.
* Removes the newline which is against formatting rules.
* py-keras: limits some dependencies to older versions
* py-keras: restricts dependencies
* pykeras: fixes dependency ranges :)
Co-authored-by: Robert Mijakovic <robert.mijakovic@lrz.de>
Co-authored-by: Robert Mijakovic <robert.mijakovic@lxp.lu>
The loading protocol mandates that the module we are going
to import be in sys.modules before its code is
executed, to prevent unbounded recursion and multiple loading.
Loading a module from file exits early if the module is already
in sys.modules
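A sketch of that protocol with the standard library's `importlib`:
```python
import importlib.util
import sys

def load_module_from_file(name, path):
    # Early return if already loaded; re-executing the module body would
    # defeat the purpose of the sys.modules registry.
    if name in sys.modules:
        return sys.modules[name]
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # register *before* executing the body
    spec.loader.exec_module(module)
    return module
```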
When installing OneAPI packages as root (e.g. in a container), the
installer places cache files in /var/intel/installercache that
interfere with future Spack installs. This ensures that the cache is
removed when running an installation as the root user.
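A sketch of the cleanup; the function name and hook placement are hypothetical:
```python
import os
import shutil

def _clean_installer_cache():
    # The OneAPI installer drops cache state here when run as root;
    # stale entries interfere with later Spack installs.
    cache_dir = "/var/intel/installercache"
    if os.getuid() == 0 and os.path.isdir(cache_dir):
        shutil.rmtree(cache_dir)
```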
* Adding hip support
* Added new blaspp version and rocm support. Fixed error in mesa18 package.
* Correcting variant name.
* Code style fixes
* Change of name of library
* Change "make check" to correctly run from the build directory.
* Upgraded version to fix testing errors
* Fixed testing directory
* Removed unnecessary variant entry (already inherited from CudaPackage)
* Generalization of version matching logic
* Code style
* Corrected version requirement
SCR moved to a component version some time ago, but never had a
release associated with these changes. SCR v2 is a legacy version
that is no longer being developed/supported. In preparation for an
upcoming SCR v3 release, there is now a 3.0rc1 release available to
users.
This adds the 3.0rc1 release to the spack package and deprecates the
older versions.
Additional changes include:
- Enforce using the main branches of the components when installing
scr@develop
- Enforce SCR v3 uses at least the recently released versions of each
of the components
- Use a simple `detect_scheduler()` function in an attempt to be
smarter about setting the default resource manager and not require
users to always manually provide the variant (a sketch follows this list)
- Add/update variants that were recently added to AXL and KVTree
components
- Fix cmake arg naming bug of setting `SCR_CONFIG_FILE`
- `SCR_ASYNC_API` is now being handled by a component and is only
needed by the legacy versions.
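A sketch of what such a helper might look like (the command-to-manager mapping is illustrative, not SCR's actual logic):
```python
from shutil import which

def detect_scheduler():
    # Guess the resource manager from launcher commands on PATH.
    if which("srun"):
        return "SLURM"
    if which("jsrun"):
        return "LSF"
    if which("aprun"):
        return "APRUN"
    return "SLURM"  # sensible default when nothing is detected
```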
* Added checksum for recently released 4.8.0
* Added `enable-fsync` variant. The `fsync` flag was added to the configuration as of version 4.1.0. Originally it defaulted to `on`, but as of version 4.3.0 it was changed to default to `off` and an `enable-fsync` configuration flag was added to enable it.
The spack package specified `--enable-fsync` with no way to disable it for all builds of netcdf-c 4.1.0 and later. This can cause horrendously slow I/O for certain use cases (e.g. 7 seconds with no-fsync versus 2300 seconds with fsync enabled). With the new variant, the default build behavior matches the default of non-spack netCDF.
* Metall: add version 0.2
* Add Metall v0.3
* Update Metall package to v0.4 and v0.5.
* Metall package: add v0.6
* Metall package: add v0.7
* Metall package: add v0.8 and v0.9
* Add Metall package v0.10
* Metall package: set run_environment METALL_ROOT
* Metall package: removed blanks
* Metall package: add v0.11 and v0.12
* Metall package: change required cmake version
* Metall package: support build test
* Metall package: add v0.13
* Metall package: change to use setup_build_environment
gettext uses a test with <libxml2/libxml/someheader.h> to locate a header,
and libxml2 itself includes <libxml/otherheader.h>, so both have to be
in the include path.
* Building binutils with gold implies building ld
* add +ld to llvm to make the old concretizer happy and add +gas to gcc since that's used in the package.py
* Remove sys
* Metall: add version 0.2
* Add Metall v0.3
* Update Metall package to v0.4 and v0.5.
* Metall package: add v0.6
* Metall package: add v0.7
* Metall package: add v0.8 and v0.9
* Add Metall package v0.10
* Metall package: set run_environment METALL_ROOT
* Metall package: removed blanks
* Metall package: add v0.11 and v0.12
* Metall package: change required cmake version
* qt: update versions and URLs
- Add LTS releases of 5.12.10, 5.9.9, 5.6.3
- Mark other minor versions of 5 as deprecated
- Use https
- The URL for older QT versions changed recently to "new_archive"
- Prefer xz instead of gz for >=5.6 because 5.6.3 isn't available as
gz. This invalidates the SHA of 5.7-5.8.
* mxnet: new version 1.8.0
use submodules on master
introduce constraints on cuda versions supported
handle USE_MKLDNN->USE_ONEDNN conversion
* * use define for USE_CUTENSOR
* fix up dependencies for 2.0.0+
libtirpc puts its header files under prefix/include/tirpc, but
spack was returning just prefix/include as the location of headers.
This will cause spack to return both prefix/include and
prefix/include/tirpc for headers, so both
#include <rpc/xdr.h>
and
#include <tirpc/rpc/xdr.h>
should work.
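A sketch of the kind of `headers` property that returns both directories (Spack package DSL; details assumed):
```python
from spack import *

class Libtirpc(AutotoolsPackage):
    @property
    def headers(self):
        # Collect headers under prefix/include, then advertise both
        # prefix/include and prefix/include/tirpc as include dirs so
        # that <rpc/xdr.h> and <tirpc/rpc/xdr.h> both resolve.
        hdrs = find_headers("*", self.prefix.include, recursive=True)
        hdrs.directories = [self.prefix.include, self.prefix.include.tirpc]
        return hdrs
```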
Help dependents find libraries/headers. Like intel-oneapi-mkl, this
package offers several different versions of libraries that conflict.
This PR chooses one of those versions. When
https://github.com/spack/spack/discussions/22749 is resolved, this
package should be updated to choose which libraries to use.
Previously the tau package got the cxx and cc names from
os.path.basename(self.compiler.cxx); however, if the path to the compiler
looks like "/usr/bin/g++-10.2.0" then tau's custom build system doesn't
recognize it. What we want instead is something that looks like "g++"
which is exactly what cxx_names[0] gives us. We already did this for
fortran, so I am not sure why we didn't do it here. Not doing this
causes a build failure when tau tries to use a polyfill (vector.h,
iostream.h) that doesn't seem to be packaged with tau.
Additionally, tau needs some help finding mpi include directories when
building with MPI, so we provide them. Unfortunately, we can't just say
that the compilers are mpicc and mpicxx in the previous fix to have
these things found automatically. This is because tau assumes we always
need the polyfill when the compilers are set to these values which again
causes a build failure.
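A small illustration of the difference (the helper name is hypothetical):
```python
import os

def tau_compiler_name(compiler):
    # os.path.basename may keep a version suffix tau can't parse:
    #   "/usr/bin/g++-10.2.0" -> "g++-10.2.0"
    versioned = os.path.basename(compiler.cxx)
    # cxx_names[0] is the canonical name tau expects: "g++"
    return compiler.cxx_names[0] if compiler.cxx_names else versioned
```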
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, it is unnecessary to use such functions, since
we don't care to bind a custom name to a module, nor do we have to load
it from an unknown location.
This PR thus modifies spack.hook in the following ways:
- Use __import__ instead of spack.util.imp.load_source (this
addresses #20005)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Mark with a leading underscore all the names that are supposed
to stay local
Complete overhaul of the Legion package to better capture a more
up-to-date set of configuration options and variants. This update
adds additional flexibility and features that were requested by
users.
* Add version 21.03.0 and "stable" branch
* Remove all older numeric versions
* Add support for CUDA, Python, PAPI support and more
* Add maintainer
* This no longer uses the Spack `gasnet` package: it defaults to
using an embedded gasnet, or can be pointed at an external one
* MUMPS: Use GEMMT BLAS extension when possible.
This should improve performance and is recommended by the developers.
* MUMPS: Add a new "openmp" variant.
* MUMPS: Add a "blr_mt" variant.
This improves performance when using OpenMP but might not be compatible with all multithreaded BLAS.
Set the path to javah via the JAVAH environment variable. If it is
a version of java that does not have javah it will fall back to `javac
-h`. Without specifying this the build could pick up a javah from the
system.
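A sketch of how the variable might be set in the recipe (assuming the JDK dependency is spelled `java`):
```python
def setup_build_environment(self, env):
    # Point the build at the Spack JDK's javah; if this java has no
    # javah, the build falls back to `javac -h` on its own.
    env.set("JAVAH", self.spec["java"].prefix.bin.javah)
```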
- add version 3.4.0
- add patch for bam2wig when version 3.4.0
- url format changed again, hopefully stable now
- added missing python dependency when version >3.3.1
- have older versions compile with htslib, samtools, bcftools
- new dependencies for version 3.4.0
- sqlite
- mysql-client
- mysqlpp
- lp-solve
- suite-sparse
- refactored filtering code
- set python interpreter in scripts
This is as much a question as it is a minor fine-tuning of the docs. I've been known to add things to an environment by editing the `spack.yaml` file directly. When I read the previous version of this sentence, I was afraid that `spack add` was actually doing *two* things, modifying the `spack.yaml` and updating something else that defined the roots of the Environment. A bit of experimentation suggests that editing the `spack.yaml` file is sufficient to change the roots.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This adds that new version to the package, updates the url, and
updates the hash of v0.0.3 for the new url.
This also updates the KVTree dependency as MPI is required to be
enabled in KVTree for er to work.
rankstr is now also required by er for recently added tests.
PR #22864 added a patch to hpctoolkit to fix an issue with gcc 10.x, and the patch was applied to all revs unconditionally. But this was fixed in hpctoolkit master on Aug 11, 2020, so the patch should only apply to old revs.
Fixes #22951.
Update package with 4.1 sha keys.
Use variant to disable openmp in the build of llvm-amdgpu.
Set CPATH, LIBRARY_PATH so that clang knows to look in the rocm-openmp-extras for headers/libraries.
Disable flang warnings as Spack thinks they are errors.
In ROCm 4.1, the plugin changed names from hsa -> amdgpu.
Update HSA_INCLUDE for 4.1.0.
Clingo has been released on PyPI, so there
are no more concerns about our CI depending
on test.pypi.org for installing the wheel.
Apparently we have parts of Spack which
are not compatible with kcov > 3.4
UnifyFS has been integrated with updated versions of its mochi-margo
dependency (and mochi-margo's mercury and libfabric dependencies).
This removes support for version 0.9.0
fixes #22786
Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.
These were deprecated when the custom cuda_arch list was
removed. Also fixed up the Aluminum dependencies for Hydrogen and
DiHydrogen. Turns out that Aluminum v0.6.0 didn't have a correct
version in CMake and thus the interaction with older versions of
Hydrogen and DiHydrogen needed to be corrected.
This isn't a significant issue, but I noticed that the docstring incorrectly references "tty.fail" and I wanted to quickly fix it to reflect the correct command, tty.die. I also wanted to fix the docstrings to not be large clumps, per what @tgamblin suggested after I wrote this - having one line at the top that is a quick summary, and more verbose detail after that.
* New package: py-pymumps
Python bindings for MUMPS, a parallel sparse direct solver
* py-pymumps: fixing flake issues
* py-pymumps: fix dependency types
Following suggestion of @adamjstewart
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This has been checked with gcc on ubuntu 16.04, which ships binutils 2.26 by
default, using spack's binutils 2.36. Only the combination +gas and ~ld
seems to trigger this incompatibility with debug symbols (gcc -g -O2
main.c fails with the error in the comment above the conflict)
- Add dependency on eigen package
- Add last version known to work with ROOT 6.16.00. Until recently GenFit lacked
any tagged versions; therefore, we use a commit hash
FFTW:
(1) Condition to ensure Quad precision is not supported in MPI under FFTW base class
AMDFFTW:
(1) Support for debug and quad precision for aocc compiler
(2) Dedicated variant for threads for enabling SMP threads
(3) Restricted simd features to 'sse2', 'avx' and 'avx2'
(4) Removed float simd features
(5) If debug option is enabled, configure option will be appended with --enable-debug option
(6) Condition to ensure amd-fast-planner is supported from 3.0 onwards under amdfftw derived class
(7) New variant amd-fast-planner - This option will reduce the planning time without much tradeoff in the performance. It is supported for single and double precisions.
(8) Removed the following flags for amdfftw: '--enable-threads', '--enable-fma' and '--enable-sse'
MDSplus is a set of software tools for data acquisition and storage and
a methodology for management of complex scientific data.
https://www.mdsplus.org
Co-authored-by: Marijn van Vliet <marijn.vanvliet@aalto.fi>
This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations. Spack can now contact a monitor server and upload analysis -- even after a build is already done.
Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
- [x] config args
- [x] environment variables
- [x] installed files
- [x] libabigail
There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.
Additional tests will be added in a future PR.
In debug mode, processes taking an exclusive lock write out their node name to
the lock file. We were using `getfqdn()` for this, but it seems to produce
inconsistent results when used from within some github actions containers.
We get this error because getfqdn() seems to return a short name in one place
and a fully qualified name in another:
```
File "/home/runner/work/spack/spack/lib/spack/spack/test/llnl/util/lock.py", line 1211, in p1
assert lock.host == self.host
AssertionError: assert 'fv-az290-764....cloudapp.net' == 'fv-az290-764'
- fv-az290-764.internal.cloudapp.net
+ fv-az290-764
!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!
== 1 failed, 2547 passed, 7 skipped, 22 xfailed, 2 xpassed in 1238.67 seconds ==
```
This seems to stem from https://bugs.python.org/issue5004.
We don't really need to get a fully qualified hostname for debugging, so use
`gethostname()` because its results are more consistent. This seems to fix the
issue.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
New version has new dependencies (which are also added here as new
packages):
* perl-mce
* perl-threads
* perl-thread-queue
The new version of genemark-et also has a different URL scheme.
* Add a +gui variant (default off) which adds dependencies on
qt, paraview, and qwt
* Backport upstream patch when installing version 8.4 (this patch
is already applied for versions >= 9.0)
Both binary packages would otherwise require X11 and Mesa libraries to
be installed on the host to run. Make sure they use the Spack-provided
libraries by patching the `rpath` via `patchelf`.
* Clarify stub compiler definition in compilers.yaml
* Update explanation of why stub compiler definition is needed
* Add note about required module definition when using Spack-installed
intel-parallel-studio as intel-compiler
* Add suggestion about updating package config preferences based on
choice of variants when installing intel-parallel-studio to avoid
reinstallation
On multilib distros with lib/lib64 (rather than lib32/lib), the library ends up in a dir lib64/ instead of lib/, breaking the `libs` property (and the cp2k+spglib build)
We remove system paths from search variables like PATH and
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages
may be installed to prefixes that are not actually system paths
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
See #17270.
```
make[2]: Entering directory `/tmp/vavolkl/spack-stage/spack-stage-qt-5.14.2-63dapppjbq6vqh3le7pazsprijls7cfl/spack-src/qtwebengine/src'
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `echo Modules will not be built. Python version 2 (2.7.5 or later) is required to build QtWebEngine.'
make[2]: *** [errorbuild] Error 1
```
We set LC_ALL=C to encourage a build process to generate ASCII
output (so our logger daemon can decode it). Most packages
respect this but it appears that intel-oneapi-compilers does
not in some cases (see #22813). This reads the output of the build
process as UTF-8, which still works if the build process respects
LC_ALL=C but also works if the process generates UTF-8 output.
For Python >= 3.7 all files are opened with UTF-8 encoding by
default. Python 2 does not support the encoding argument on
'open', so to support Python 2 the files would have to be
opened in byte mode and explicitly decoded (as a side note,
this would be the only way to handle other encodings without
being informed of them in advance).
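A sketch of the compromise described above (the helper name is assumed):
```python
import codecs
import sys

def open_build_log(path):
    # Read build output as UTF-8 even when LC_ALL=C was requested.
    if sys.version_info[0] >= 3:
        return open(path, "r", encoding="utf-8", errors="replace")
    # Python 2: builtin open() has no encoding argument, so open
    # via codecs and decode explicitly.
    return codecs.open(path, "r", encoding="utf-8", errors="replace")
```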
* bugfix: fix representation of null in spack_yaml output
Nulls were previously printed differently by `spack config blame config`
and `spack config get config`. Fix this in the `spack_yaml` dumpers.
* bugfix: `spack config blame` should print all lines of config
`spack config blame` was not printing all lines of configuration because
there were no annotations for empty lines in the YAML dump output. Fix
this by removing empty lines.
Fixed a previously unspecified python dependency and ensured that spack's
python is what exodus@v2016 uses. Also, in the process, identified many
missing versions.
* new package: gatetools
This PR adds the gatetools package and dependencies. The gatetools
package is a set of command line tools for gate. Since it is primarily a
CLI, although python modules can be loaded, it is named gatetools as
opposed to py-gatetools.
* Fix quote characters to avoid test error
* Found another UTF8 character that was tripping up tests
* Another UTF-8 character to replace
* Remove py-python-box dependency and package file
* Make numpy a variant
- py-setuptools needs to be a run dependency
This was masked by py-numpy having py-setuptools as a run dependency.
* Add missing build dependency on py-pytest-runner
- set constraint for geant4 to version 10.6 as gate does not work with
geant-10.7+
- set GATE_USE_ITK: Although RTK is built under ITK, there are some ITK
macros that need to be set explicitly.
As pointed out in https://github.com/STEllAR-GROUP/hpx/issues/5239,
there is an issue in OTF2 <=2.2 where a variable is not properly
initialized. As no OTF2 release fixing this is currently available,
the patch should be applied.
* [py-scikit-image] Added py-setuptools back into depends_on. Otherwise it is putting skimage in scikit_image-version-pyX.Y-arch.egg dir under site-packages
* [py-scikit-image] Added latest version
* [py-scikit-image] Added py-numpy version dependency when package version greater than 0.18
* [py-scikit-image] Updates to python dependency
* Fix typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- Use debugoptimized as default build type, just like RelWithDebInfo for cmake
- Do not strip by default, and add a default_library variant which conveniently support both shared and static
- Add a maintainer
- Help libtool to find the correct paths to libraries
- Handle externals from system directories
- Enable eccodes for older versions
* The fltk package can build libraries with opengl support. By default, the configure script looks for opengl headers in the system include paths. If 'devel' packages have not been installed on the system it omits the 'fltk_gl.so' library. This can break packages like 'octave' which expect 'fltk' to have opengl support and look for the library 'fltk_gl'.
Make opengl support explicit in fltk by adding a dependency on 'gl' and adding a new variant of the same name 'gl' (default On).
With these modifications 'fltk_gl' and 'octave' build successfully on CentOS8.
The default behavior is to always enable opengl.
https://www.fltk.org/doc-1.3/intro.html
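The variant wiring could look roughly like this in the fltk recipe (a sketch, not the exact diff):
```python
# Make opengl support an explicit, default-on choice.
variant("gl", default=True, description="Build with OpenGL support")
depends_on("gl", when="+gl")
```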
* Add patch for latest hwloc@:1 to locate ncurses
This way we don't have to depend on ncurses~termlib, which may run into
issues when another package explicitly depends on ncurses+termlib
* Move termcap to the back, because it's a system symlink on macOS and isn't set by spack
- add new version, 4.09.1
- use github url
- convert to autotools package
- deprecate version 4.07b: This version requires manual download and is
a binary only installation.
- version 4.0.7 was not building
- version 4.0.9 was not setting search correctly due to an extra "return"
in config
- added version 4.1.2-p1
- new version needs py-h5py
- new version does not need utf8 patch
- url format changed
Add a conflict for CUDA and shared libraries in Ascent.
The new concretizer will automatically change the default for
Ascent in that case. Until then, dependencies like WarpX need
to hint the `~shared` wish explicitly.
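The conflict might be expressed like this in Ascent's recipe (a sketch using Spack's `conflicts` directive):
```python
conflicts("+shared", when="+cuda",
          msg="Ascent needs ~shared when building with CUDA")
```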
This initial package recipe uses a custom-built wrapper to drive an internal CMake file. Since nekRS also includes built-in copies of several dependencies such as BLAS and HYPRE, it cannot be linked with other such dependencies. However, to work with the `ceed` metapackage, we cannot add `^blas` conflicts to nekRS.
See https://github.com/spack/spack/pull/22519 for discussion.
By default, clingo doesn't show any optimization criteria (maximized or
minimized sums) if the set they aggregate is empty. Per the clingo
mailing list, we can get around that by adding, e.g.:
```
#minimize{ 0@2 : #true }.
```
for the 2nd criterion. This forces clingo to print out the criterion but
does not affect the optimization.
This PR adds directives as above for all of our optimization criteria, as
well as facts with descriptions of each criterion,like this:
```
opt_criterion(2, "number of non-default variants").
```
We use facts in `concretize.lp` rather than hard-coding these in `asp.py`
so that the names can be maintained in the same place as the other
optimization criteria.
The now-displayed weights and the names are used to display optimization
output like this:
```console
(spackle):solver> spack solve --show opt zlib
==> Best of 0 answers.
==> Optimization Criteria:
Priority Criterion Value
1 version weight 0
2 number of non-default variants (roots) 0
3 multi-valued variants + preferred providers for roots 0
4 number of non-default variants (non-roots) 0
5 number of non-default providers (non-roots) 0
6 count of non-root multi-valued variants 0
7 compiler matches + number of nodes 1
8 version badness 0
9 non-preferred compilers 0
10 target matches 0
11 non-preferred targets 0
zlib@1.2.11%apple-clang@12.0.0+optimize+pic+shared arch=darwin-catalina-skylake
```
Note that this is all hidden behind a `--show opt` option to `spack
solve`. Optimization weights are no longer shown by default, but you can
at least inspect them and more easily understand what is going on.
- [x] always show optimization criteria in `clingo` output
- [x] add `opt_criterion()` facts for all optimization criteria
- [x] make display of opt criteria optional in `spack solve`
- [x] rework how optimization criteria are displayed, and add a `--show opt`
option to `spack solve`
CachedCMakePackage is a CMakePackage subclass for using CMake initial
cache. This feature of CMake allows packages to increase reproducibility,
especially between spack builds and manual builds. It also allows
packages to sidestep certain parsing bugs in extremely long cmake
commands, and to avoid system limits on the length of the command line.
Co-authored-by: Chris White <white238@llnl.gov>
* Add patch for Intel C++ compiler
- On some machines (in particular MacOSX Catalina), the icpc in some way
utilizes the preprocessor of the associated "developer tools" used by
icpc. This leads to, in some cases, a preprocessor claiming support for
__tuple_element_packs, even though icpc (as of version 21.1) can't
actually parse such code. Just use the MPARK_TUPLE_ELEMENT_PACK impl
with __icc until icpc supports it, to avoid issues with developer tools
that are untested.
- The same patch has been PRed against mpark-variant
In the face of two consecutive spaces in the command line, the compiler wrapper would skip all remaining arguments, causing problems building py-scipy with the Intel compiler. This PR solves the problem.
* Fixed compiler wrapper in the face of extra spaces between arguments
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Backwards incompatible cleanup to target single-tarball-per-arch builds
going forwards.
* Replace per-distro versions with new per-arch builds, and add
url_for_version to avoid specifying per tarball.
* Customise environment setup to avoid adding lib to LD_LIBRARY_PATH.
* Update homepage and licensing URLs.
* Avoid shell interpretation when running textinstall.sh.
* Added NickRF as maintainer.
Use `conflicts` directive whenever possible.
This allows failing early when conflicting variants are used.
Do not silently ignore `+parmetis` variant when `~metis`.
Instead throw an error during concretization.
Simplify the "Makefile.inc" generation.
This will make it easier to add new variants in the future.
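For example, the parmetis/metis rule above might be declared like this (a sketch):
```python
conflicts("+parmetis", when="~metis",
          msg="+parmetis requires +metis")
```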
* Added version patch for 1.4.0 tag on mpark-variant
Redirected urls to git and github tags.
* Updated to commit hashes
* Update var/spack/repos/builtin/packages/mpark-variant/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Update var/spack/repos/builtin/packages/mpark-variant/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Update var/spack/repos/builtin/packages/mpark-variant/package.py
Co-authored-by: Anthony J Zukaitis <zukaitis@lanl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Original commit message:
This feature of CMake allows packages to increase reproducibility, especially between
Spack- and manual builds. It also allows packages to sidestep certain parsing bugs in
extremely long ``cmake`` commands, and to avoid system limits on the length of the
command line.
Adding:
Co-authored-by: Chris White <white238@llnl.gov>
This reverts commit c4f0a3cf6c.
CachedCMakePackage is a specialized class for packages built using CMake initial cache.
This feature of CMake allows packages to increase reproducibility, especially between
Spack- and manual builds. It also allows packages to sidestep certain parsing bugs in
extremely long ``cmake`` commands, and to avoid system limits on the length of the
command line.
Autoconf before 2.70 will erroneously pass ifx's -loopopt argument to the
linker, requiring all packages to use autoconf 2.70 or newer to use ifx.
This is a hotfix enabling ifx to be used in Spack. Instead of bothering
to upgrade autoconf for every package, we'll just strip out the
problematic flag if we're in `ld` mode.
- [x] Add a conditional to the `cc` wrapper to skip `-loopopt` in `ld`
mode. This can probably be generalized in the future to strip more
things (e.g., via an environment variable we can control from
Spack) but it's good enough for now.
- [x] Add a test ensuring that `-loopopt` arguments are stripped in link
mode, but not in compile mode.
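The real wrapper is a shell script; as a rough Python illustration of the ld-mode filtering:
```python
def filter_loopopt(mode, args):
    # Strip -loopopt* only when the wrapper acts as the linker.
    if mode == "ld":
        return [a for a in args if not a.startswith("-loopopt")]
    return list(args)

assert filter_loopopt("ld", ["-O2", "-loopopt=0"]) == ["-O2"]
assert filter_loopopt("cc", ["-O2", "-loopopt=0"]) == ["-O2", "-loopopt=0"]
```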
Since `lazy_lexicographic_ordering` handles `None` comparison for us, we
don't need to adjust the spec comparators to return empty strings or
other type-specific empty types. We can just leverage the None-awareness
of `lazy_lexicographic_ordering`.
- [x] remove "or ''" from `_cmp_iter` in `Spec`
- [x] remove setting of `self.namespace` to `''` in `MockPackage`
We have been using the `@llnl.util.lang.key_ordering` decorator for specs
and most of their components. This leverages the fact that in Python,
tuple comparison is lexicographic. It allows you to implement a
`_cmp_key` method on your class, and have `__eq__`, `__lt__`, etc.
implemented automatically using that key. For example, you might use
tuple keys to implement comparison, e.g.:
```python
class Widget:
    # author implements this
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e
        )

    # operators are generated by @key_ordering
    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self, other):
        return self._cmp_key() < other._cmp_key()

    # etc.
```
The issue there for simple comparators is that we have to build the
tuples *and* we have to generate all the values in them up front. When
implementing comparisons for large data structures, this can be costly.
This PR replaces `@key_ordering` with a new decorator,
`@lazy_lexicographic_ordering`. Lazy lexicographic comparison maps the
tuple comparison shown above to generator functions. Instead of comparing
based on pre-constructed tuple keys, users of this decorator can compare
using elements from a generator. So, you'd write:
```python
@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield a
        yield b

        def cd_fun():
            yield c
            yield d

        yield cd_fun
        yield e

    # operators are added by decorator (but are a bit more complex)
```
There are no tuples that have to be pre-constructed, and the generator
does not have to complete. Instead of tuples, we simply make functions
that lazily yield what would've been in the tuple. If a yielded value is
a `callable`, the comparison functions will call it and recursively
compare it. The comparator just walks the data structure like you'd expect
it to.
The ``@lazy_lexicographic_ordering`` decorator handles the details of
implementing comparison operators, and the ``Widget`` implementor only
has to worry about writing ``_cmp_iter``, and making sure the elements in
it are also comparable.
Using this PR shaves another 1.5 sec off the runtime of `spack buildcache
list`, and it also speeds up Spec comparison by about 30%. The runtime
improvement comes mostly from *not* calling `hash()` on `_cmp_iter()` results.
* New package py-argh
* Fixed deps
* Changed setuptools type
* Update var/spack/repos/builtin/packages/py-argh/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Make -j flag less exceptional
The -j flag in spack behaves differently from make, ctest, ninja, etc,
because it caps the number of jobs to an arbitrary number 16.
Spack will behave like other tools if `spack install` uses a reasonable
default, and `spack install -j <num>` *overrides* that default.
This will be particularly useful for Spack usage outside of a traditional
HPC context and for HPC centers that encourage users to compile on
login nodes with many cores instead of on compute nodes, which has
become increasingly common as individual nodes have more cores.
This maintains the existing default value of min(num_cpus, 16). However,
as it is right now, Spack does a poor job at determining the number of
cpus on linux, since it doesn't take cgroups into account. This is
particularly problematic when using distributed builds with slurm. This PR
also introduces `spack.util.cpus.cpus_available()` to consolidate
knowledge on determining the number of available cores, and improves
core detection for linux. This should also improve core detection for Docker/
Kubernetes, which also use cgroups.
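A sketch of what a cgroup-aware `cpus_available()` can look like (on Linux, the affinity mask reflects cgroup cpusets):
```python
import multiprocessing
import os

def cpus_available():
    try:
        # Linux: honors taskset/cgroup cpuset restrictions.
        return len(os.sched_getaffinity(0))
    except (AttributeError, OSError):
        # Other platforms: fall back to the raw CPU count.
        return multiprocessing.cpu_count()
```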
This commit extends the API of the __call__ method of the
SpackCommand class to permit passing global arguments
like those interposed between the main "spack" command
and the subsequent subcommand.
The functionality is used to fix an issue where running
```spack -e . location -b some_package```
ends up printing the name of the environment instead of
the build directory of the package, because the location arg
parser also stores this value as `arg.env`.
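Usage might look like this (the keyword name `global_args` is an assumption based on the description):
```python
from spack.main import SpackCommand

location = SpackCommand("location")
# Pass arguments that would sit between "spack" and the subcommand.
out = location("-b", "some_package", global_args=["-e", "."])
```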
fixes #22294
A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
the custom install tree not being taken into account if clingo
had to be bootstrapped.
This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.
* Fixed a bug in the DiHydrogen package where the variant legacy was
changed to distconv and wasn't fully propagated. Cleaned up the
openmp variants on the blas library packages in DiHydrogen and
Elemental. Extended support for Aluminum v1.0 in LBANN, Hydrogen, and
DiHydrogen. Fixed a when clause in the LBANN dependencies.
* Removed the upper range limit for the Aluminum library dependence
* Update var/spack/repos/builtin/packages/dihydrogen/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Remote buildcache indices need to be stored in a place that does not
require writing to the Spack prefix. Move them from the install_tree to
the misc_cache.
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.
This led to unexpected results, like preferring to build a
new version of an external if the external version was
older.
* Make stage use concrete specs from environment
Same as in https://github.com/spack/spack/pull/21642, the idea is that
we want to easily stage a package that fails to build in a complex
environment. Instead of making the user create a spec by hand (basically
transforming all the rules in the environment manifest into a spec,
defying the purpose of the environment...), use the provided spec as a
filter for the already concretized specs. This also speeds things up,
because we don't have to reconcretize.
This adds MPICC=/path/to/intel-oneapi/mpicc etc. to the dependent's build stage, enabling the use of the compiler wrappers.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
* clingo: modify recipe for bootstrapping
Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping
* Remove option that breaks on linux
* Give more hints for the current Python
* Disable CLINGO_BUILD_PY_SHARED for bootstrapping
* bootstrapping: try to detect the current python from std library
This is much faster than calling external executables
* Fix compatibility with Python 2.6
* Give hints on which compiler and OS to use when bootstrapping
This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.
* Use spec_for_current_python to constrain module requirement
* ASP-based solver: avoid adding values to variants when they're set
fixes #22533, fixes #21911
Added a rule that prevents any value from slipping into a variant when the
variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.
* Ensure disjoint sets have a clear semantics for external packages
fixes#22547
SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.
* py-dask-glm: Push again for testing with git.
* py-dask-glm: Fixed the OSS dependency setting that was pointed out, changing it to type=build.
* py-dask-glm: Set depends_on to type=build for the OSS used when building the documentation.
* py-dask-glm: Fix type of depends_on (py-scikit-learn)
Co-authored-by: miura <miura@fx7-pg01.cm.cluster>
* fix issue #22228 build of gdk-pixbuf
* Update var/spack/repos/builtin/packages/gdk-pixbuf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This package used to be a part of rpm, but is now being developed separately.
It will supposedly be moved to a sourceware branch (it is maintained by
redhat) but I do not know if this will happen soon. We need it in order
to change locations in binaries that are built in /tmp and then moved
elsewhere. I will ping @woodard who might be able to give us an estimate
of whether we should include this development repository or wait for it to be
moved elsewhere. Once this is merged, we will want to use the bootstrap
approach to install and use the library from spack.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.
* Replace URL computation in base IntelOneApiPackage class with
defining URLs in component packages (this is expected to be
simpler for now)
* Add component_dir property that all oneAPI component packages must
define. This property names a directory that should exist after
installation completes (useful for making sure the install was
successful) and also defines the search location for the
component's environment update script.
* Add needed dependencies for components (e.g. intel-oneapi-dnn
requires intel-oneapi-tbb). The compilers provided by
intel-oneapi-compilers need some components under certain
circumstances (e.g. when enabling SYCL support) but these were
omitted since the libraries should only be linked when a
dependent package requests that feature
* Remove individual setup_run_environment implementations and use
IntelOneApiPackage superclass method which sources vars.sh
(located in a subdirectory of component_dir)
* Add documentation for the IntelOneApiPackage build system
Co-authored-by: Vasily Danilin <vasily.danilin@yandex.ru>
This is to help debug situations like #22383, where python3.4 is
accidentally preferred over python2. It will also help on systems where
there is no python2 available or some other issue.
* QA: reduce number of unit tests for jobs not in the matrix
* Fixup for CentOS6 dependencies
* Put correct conditions back in place
* Add dependency on changes
* Change default FFT implementation to FFTW
To account for the default changing with casacore v3.4.0, as well as the
CMake logic for getting the FFTPack implementation.
* Switch to using spec.satisfies() for Python CMake values
This PR updates the urls to not have www (not needed); the
repository user should be hpcng instead of sylabs (technically
GitHub maintains the old links, but this might not be forever). It
also adds the newly released 3.7.1 and 3.7.2 versions of Singularity.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Beginning with version 2.4.1, the python interpreter line changed from
"#!/usr/bin/env python" to "#!/usr/bin/env python3"
That caused the bowtie2-build and bowtie2-inspect scripts to have a
trailing '3' at the end of the interpreter line. This PR fixes that. I
also observed that older versions do not build with intel-oneapi-tbb
so I added a conflicts statement for that.
PRs that change only package recipes will only run tests under "package_sanity.py" and without coverage. This should result in a huge drop in the CPU time spent in CI for most PRs.
* updated deps to get gtkplus to build
* gtk-doc requires docbook-xml 4.3
* patch gtk-doc build to find xml catalogs
* add new version, fix macOS build error
* reorder docbook versions from newest to oldest
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added 2 new configure patch files to build WRF 3.9.1.1 and 4.2
with aocc@3.0
* Renamed patch files used for building WRF 3.9.1.1 and 4.2 with
aocc@2.3 (mostly, this also removes -march=native from AOCCOPT
and updates LIBMVEC options for aocc@2.3)
* unit tests: mark slow tests as "maybeslow"
This commit also removes the "network" marker and
marks every "network" test as "maybeslow". Tests
marked as db are maintained, but they're not slow
anymore.
* GA: require style tests to pass before running unit-tests
* GA: make MacOS unit tests fail fast
* GA: move all unit tests into the same workflow, run style tests as a prerequisite
All the unit tests have been moved into the same workflow so that a single
run of the dorny/paths-filter action can be used to ask for coverage based
on the files that have been changed in a PR. The basic idea is that for PRs
that introduce only changes to packages, coverage is not necessary, which
results in a faster execution of the tests.
Also, for package-only PRs slow unit tests are skipped.
Finally, MacOS and linux unit tests are now conditional on style tests passing
meaning that e.g. we won't waste a MacOS worker if we know that the PR has
flake8 issues.
* Addressed review comments
* Skipping slow tests on MacOS for package only recipes
* QA: make tests on changes correct before merging
In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.
This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.
We add one special case for this: dependencies of externals.
We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.
- [x] change asp.py so that this can't happen -- we now only generate
dependency types for possible dependencies.
This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.
Given some directives in a package like:
```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```
We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:
```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
  attr(Name, Arg1) : required_dependency_condition(ID, Name, Arg1);
  attr(Name, Arg1, Arg2) : required_dependency_condition(ID, Name, Arg1, Arg2);
  attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
  dependency_condition(ID, Parent, Dependency);
  node(Parent).
```
And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:
```prolog
attr(Name, Arg1, Arg2) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2).

attr(Name, Arg1, Arg2, Arg3) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```
These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.
Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:
```prolog
condition_holds(ID) :-
  condition(ID);
  attr(Name, A1) : condition_requirement(ID, Name, A1);
  attr(Name, A1, A2) : condition_requirement(ID, Name, A1, A2);
  attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).
attr(Name, A1) :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```
this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:
```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```
The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:
```prolog
dependency_holds(Package, Dependency, Type) :-
  dependency_condition(ID, Package, Dependency),
  dependency_type(ID, Type),
  condition_holds(ID).
```
Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:
```python
def package_dependencies_rules(self, pkg, tests):
    """Translate 'depends_on' directives into ASP logic."""
    for _, conditions in sorted(pkg.dependencies.items()):
        for cond, dep in sorted(conditions.items()):
            # create a condition and get its id
            condition_id = self.condition(cond, dep.spec, pkg.name)
            # associate specifics about the dependency with the id
            self.gen.fact(fn.dependency_condition(
                condition_id, pkg.name, dep.spec.name
            ))
            # etc.
```
- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals
* Rewrite relative dev_spec paths internally to absolute paths in case of relocation of the environment file
* Test relative paths for dev_path in environments
* Add a --keep-relative flag to spack env create
This ensures that relative paths of develop paths are not expanded to
absolute paths when initializing the environment in a different location
from the spack.yaml init file.
Currently, regardless of a spec being concrete or not, we validate its variants in `spec_clauses` (part of `SpackSolverSetup`).
This PR skips the check if the spec is concrete.
The reason we want to do this is so that the solver setup class (really, `spec_clauses`) can be used for cases when we just want the logic statements / facts (is that what they are called?) and we don't need to re-validate an already concrete spec. We can't change existing concrete specs, and we have to be able to handle them *even if they violate constraints in the current spack*. This happens in practice if we are doing the validation for a spec produced by a different spack install.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
As of OpenBLAS 0.3.13, leaving off `TARGET` by default optimizes most
code for the host system -- adding flags that cause the resulting
library to fail (SIGILL) on older systems. This change should ensure
that a "x86_64" target for example will work across deployment systems.
https://github.com/xianyi/OpenBLAS/issues/3139
This pull request will add the ability for a user to add a configuration argument on the fly, on the command line, e.g.,:
```bash
$ spack -c config:install_tree:root:/path/to/config.yaml -c packages:all:compiler:[gcc] list --help
```
The above command doesn't do anything (I'm just getting help for list) but you can imagine having another root of packages, and updating it on the fly for a command (something I'd like to do in the near future!)
I've moved the logic for config_add that used to be in spack/cmd/config.py into spack/config.py proper, and now both the main.py (where spack commands live) and spack/cmd/config.py use these functions. I only needed spack config add, so I didn't move the others. We can move the others if they are also needed in multiple places.
Was getting the following error:
```
$ spack test list
==> Error: issubclass() arg 1 must be a class
```
This PR adds a check in `has_test_method` (in case it is re-used elsewhere such as #22097) and ensures a class is passed to the method from `spack test list`.
Updated the versions for DiHydrogen and Aluminum. Added new constraints on versions of Aluminum that are used across the software stack. Cleaned up the dependency on DiHydrogen for LBANN.
* py-chainer: Add test method for ChainerMN (continued #21848, #21940)
* py-chainer: Fixed the word in the message
* py-chainer: Delete unnecessary imports
* py-chainer: Incorporation of the measures pointed out in #21940 was insufficient.
Adds several EpetraExt_BUILD_* options as well as an Amesos2_ENABLE_Basker option. Adds `none` as an option to `gotype=`, which should be among the options since 'none' is specifically handled later in the package definition.
Adds `stokhos` and `trilinoscouplings` as options in spack which already are available in CMake for Trilinos (e.g. Trilinos_ENABLE_Stokhos:BOOL=)
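A sketch of how these toggles map onto CMake options with Spack's helpers (the trilinoscouplings option name is assumed):
```python
# Inside the trilinos package's cmake_args() (sketch):
args = [
    self.define_from_variant("Trilinos_ENABLE_Stokhos", "stokhos"),
    self.define_from_variant("Trilinos_ENABLE_TrilinosCouplings",
                             "trilinoscouplings"),
]
```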
* py-pytest-html recipe
* added missing deps + copyright
* Update var/spack/repos/builtin/packages/py-pytest-html/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pytest-metadata/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Patch eospac's Makefile.-linux-gnu.hashes to consider only `$(notdir
$(F90))` when constructing a key to look up compiler flags in the
_F90-CPUINFO_COMP_FLAGS associative array.
This patch was accepted into eospac itself after the release of
6.4.2beta, so apply it only to 6.4.2beta and earlier releases.
- Fix faulty patch
- Only use GEARSHIFFT_BACKEND_FFTW_PTHREADS if ~openmp
- Explicitly disable float16 support
- Use correct minimum required Boost version
- Add variants for Intel MKL and ROCm rocfft
This is a workaround for an issue with how "spack install" is invoked from within "spack ci rebuild". The fact that we don't get an exception or even the actual returncode when using the object returned by spack.util.executable.which('spack') to install the target spec means we get no indication of failures about the install command itself. Instead we rely on the subsequent buildcache creation failure to fail the job.
In the past, we only had the binutils variant, which included the
bootstrapping flag. Now that we have a separate bootstrap variant, fix
the nvptx conflict accordingly.
* Add intel cluster package update2 for 2020
* add pacifica cli tools, and pager
* remove boilerplate code
* update flake8 lints
* update flake8 lint, missed one
* add a description for pager
* Shorten a line
* Remove whitespace
* check on dependencies and move urls to proper place
* Remove import package as it seems it is not required
* add requests to the uploader config
* remove blank Line
* change to build and run for packages
* add run and build to the packages
* move from url method to pypi method
* adjust requirements based on feedback from adamjstewart
* remove python 3 requirement, and add setuptools-scm
* remove dependence on python
Co-authored-by: Evan Felix <evan.felix@pnnl.gov>
Unlike the other commands of the `R CMD` interface, the `INSTALL` command
will read `Renviron` files. This can potentially break builds of r-
packages, depending on what is set in the `Renviron` file. This PR adds
the `--vanilla` flag to ensure that neither `Rprofile` nor `Renviron` files
are read during Spack builds of r- packages.
Cray added the necessary functionality for CMake to support Fortran preprocessing using crayftn. This patch is necessary for the current release of CMake (3.19), with this patch expected to be in the 3.20 release of CMake. The included patch is from Kitware.
see https://gitlab.kitware.com/cmake/cmake/-/merge_requests/5882
Co-authored-by: James Elliott <jjellio@sandia.gov>
This adds a `--path` option to `spack python` that shows the `python`
interpreter that Spack is using.
e.g.:
```console
$ spack python --path
/Users/gamblin2/src/spack/var/spack/environments/default/.spack-env/view/bin/python
```
This is useful for debugging, and we can ask users to run it to
understand what python Spack is picking up via preferences in `bin/spack`
and via the `SPACK_PYTHON` environment variable introduced in #21222.
`spack test list` will show you which *installed* packages can be tested
but it won't show you which packages have tests.
- [x] add `spack test list --all` to show which packages have test methods
- [x] update `has_test_method()` to handle package instances *and*
package classes.
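A sketch of the normalization (the PackageBase comparison is an assumption about how "has a test method" is decided):
```python
import inspect

import spack.package

def has_test_method(pkg):
    # Accept package classes as well as package instances.
    pkg_cls = pkg if inspect.isclass(pkg) else type(pkg)
    # A package "has tests" if it overrides the base test method.
    return pkg_cls.test != spack.package.PackageBase.test
```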
* adding package for libabigail, which we likely will need to use it for an analysis!
* includes variant for documentation (doxygen and pysphinx are associated dependencies)
* Improve R package creation
This PR adds the `list_url` attribute to CRAN R packages when using
`spack create`. It also adds the `git` attribute to R Bioconductor
packages upon creation.
* Switch over to using cran/bioc attributes
The cran/bioc entries are set to have the '=' line up with homepage
entry, but homepage does not need to exist in the package file. If it
does not, that could affect the alignment.
* Do not have to split bioc
* Edit R package documentation
Explain Bioconductor packages and add `cran` and `bioc` attributes.
* Update lib/spack/docs/build_systems/rpackage.rst
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update lib/spack/docs/build_systems/rpackage.rst
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Simplify the cran attribute
The version can be faked so that the cran attribute is simply equal to
the CRAN package name.
* Edit the docs to reflect new `cran` attribute format
* Use the first element of self.versions() for url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Remove casacore's old version of the file with a package patch()
function, and depend on a modern CMake for the build.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
1. Add version 2021.03.01.
2. Cleanup the binutils dependencies now that 2.35.2 and 2.36 are available.
3. Require gcc 7.x or later for current 2021 version.
4. Simplify the xz depends to always require +pic.
This works around a glitch in the original concretizer.
This allows users to use relative paths for mirrors and repos and other things that may be part of a Spack environment. There are two ways to do it.
1. Relative to the file
```yaml
spack:
repos:
- local_dir/my_repository
```
Which will refer to a repository like this in the directory where `spack.yaml` lives:
```
env/
spack.yaml <-- the config file above
local_dir/
my_repository/ <-- this repository
repo.yaml
packages/
```
2. Relative to the environment
```yaml
spack:
repos:
- $env/local_dir/my_repository
```
Both of these would refer to the same directory, but they differ for included files. For example, if you had this layout:
```
env/
spack.yaml
repository/
includes/
repos.yaml
repository/
```
And this `spack.yaml`:
```yaml
spack:
include: includes/repos.yaml
```
Then, these two `repos.yaml` files are functionally different:
```yaml
repos:
- $env/repository # refers to env/repository/ above
repos:
- repository # refers to env/includes/repository/ above
```
The $env variable will not be evaluated if there is no active environment. This generally means that it should not be used outside of an environment's spack.yaml file. However, if other aspects of your workflow guarantee that there is always an active environment, it may be used in other config scopes.
For opt-in packages in Spack, it's common for the `cuda` variant
to be disabled by default.
This also simplifies downstream usage in multi-variants for
backends in user code.
* Allow the bootstrapping of clingo from sources
Allow python builds with system python as external
for MacOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users from setting the install tree root to the bootstrap store
* clingo: documented how to bootstrap from sources
Co-authored-by: Gregory Becker <becker33@llnl.gov>
- as outlined in merge-request #21336 some clang compilers
can trigger erroneous floating point exceptions.
OpenFOAM normally traps FPE, but disable this in the etc/controlDict
for specific compilers:
change "trapFpe digit;" -> "trapFpe 0;"
Eliminate the previous use of the FOAM_SIGFPE env variable in favour of
using the etc/controlDict setting - cleaner and more robust.
Co-authored-by: Mark Olesen <Mark.Olesen@esi-group.com>
* py-importlib: Python 2.7 is needed to build.
Added depends_on('python@2.7.0:2.7.99')
* Update var/spack/repos/builtin/packages/py-importlib/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This check is performed in cmake_args rather than with a 'conflicts'
statement because matching on !clang (i.e. any compiler that is not
clang) cannot currently be done with our spec syntax.
If a user creates a wrapper for the ifx binary called ifx_orig,
this causes the ifx --version command to produce:
$ ifx --version
ifx_orig (IFORT) 2021.1 Beta 20201113
Copyright (C) 1985-2020 Intel Corporation. All rights reserved.
The regex for ifx currently expects the output to begin with
"ifx (IFORT)..." so the wrapper would not be detected as ifx. This
PR removes the need for the static "ifx" string which allows wrappers
to be detected as ifx.
In general, the Intel compiler regexes do not include the invoked
executable name (i.e., ifort, icc, icx, etc.), so this is not
expected to cause any issues.
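Illustrative only (not the exact Spack regex): keying on "(IFORT)" rather than a leading literal "ifx" makes wrappers detectable:
```python
import re

version_re = re.compile(r"\(IFORT\)\s+([\d.]+)")
out = "ifx_orig (IFORT) 2021.1 Beta 20201113"
print(version_re.search(out).group(1))  # -> 2021.1
```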
* asciidoc: current sourceforge a2x needs python2, new github release python3
* asciidoc: making python 2.3 to 2.7 able to cope with asciidoc
* Fix sensei@develop
Should work with all options but libsim.
Current releases don't work with ~catalyst
See
https://gitlab.kitware.com/sensei/sensei/-/merge_requests/240
for the fix for develop.
Current releases work only with paraview 5.7 and 5.6
See
https://gitlab.kitware.com/sensei/sensei/-/merge_requests/239
for the fix for develop (which works with 5.9)
* Fix libsim.
* Fix warnings.
* Fix python runtime.
* Many changes:
* Reworked cmake options to use the CMakePackage option helpers
* Simplified and consolidated options
* Replaced adios with adios2 variant
* Added vtkm variant (not yet working)
* paraview: Fix downstream consumers getting the wrong FindMPI
* vtk: Fix downstream consumers getting the wrong FindMPI
* Add +ascent, +adios2; remove +adios; variants off by default
* Fix catalyst python logic
* sensei: cleanup formatting
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
* make `spack fetch` work with environments
* previously: `spack fetch` required the specs to be fetched to be
stated explicitly, even when in an environment
* now: if no specs are provided to `spack fetch`, we check whether an
environment is active and, if so, fetch all uninstalled specs.
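For example (environment name hypothetical):
```bash
# Inside an active environment, specs are now optional:
spack -e myenv fetch     # fetches all uninstalled specs in myenv

# Explicit specs still work exactly as before:
spack fetch zlib
```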
When using an external package with the old concretizer, all
dependencies of that external package were severed. This was not
performed bidirectionally though, so for an external package W with
a dependency on Z, if some other package Y depended on Z, Z could
still pull properties (e.g. compiler) from W since it was not
severed as a parent dependency.
This performs the severing bidirectionally, and adds tests to
confirm expected behavior when using config from DAG-adjacent
packages during concretization.
Allow libfuse to build without setuid binary and bump versions of both
libfuse and fuse-overlayfs.
Still doesn't solve the issue where this package tries to install things
into /etc/init.d though.
kcov CMakeLists.txt generates the "kcov" executable only if
certain dependencies are found. These dependencies are
"libbfd", "libopcodes" and "libiberty", hence the dependency
on binutils.
The clingo-cffi job has two issues to be solved:
1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/
The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".
The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).
For the time being run the tests in a container. Switch back to
PyPI once a new official version of clingo is released.
This allows for quickly configuring a spack install/env to use upstream packages by default. This is particularly important when upstreaming from a set of officially supported spack installs on a production cluster. By configuring such that package preferences match the upstream, you ensure maximal reuse of existing package installations.
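A minimal sketch of such a configuration (the upstream name and install path are hypothetical):
```yaml
# upstreams.yaml: reuse packages from a shared production install
upstreams:
  production-cluster:
    install_tree: /opt/shared/spack/opt/spack
```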
* n2p2: Add new package
* remove ,
* Resurrection of , and changed " to single
* changed example.command to example.co
* n2p2: Added v2.1.1
* n2p2: Changed the type of depends_on.
Since there are many variables being set I thought it would be a good idea to document them better and slightly simplify the logic for external vs not-external.
Fixes for gitlab pipelines
* Remove accidentally retained testing branch name
* Generate pipeline w/out debug mode
* Make jobs interruptible for auto-cancel pending
* Work around concretization conflicts
* Support clingo when used with cffi
Clingo recently merged in a new Python module option based on cffi.
Compatibility with this module requires a few changes to spack - it does not automatically convert strings/ints/etc. to Symbol, and clingo.Symbol.string throws on failure.
- manually convert str/int to clingo.Symbol types
- catch stringify exceptions
- add a job for clingo-cffi to Spack CI
- switch to the potassco-vendored wheel for clingo-cffi CI
- on_unsat argument when cffi
* Spec.splice feature
Construct a new spec with a dependency swapped out. Currently can only swap dependencies of the same name, and can only apply to concrete specs.
This feature is not yet attached to any install functionality, but will eventually allow us to "rewire" a package to depend on a different set of dependencies.
Docstring is reformatted for git below
Splices dependency "other" into this ("target") Spec, and return the result as a concrete Spec.
If transitive, then other and its dependencies will be extrapolated to a list of Specs and spliced in accordingly.
For example, let there exist a dependency graph as follows:
T
| \
Z<-H
In this example, Spec T depends on H and Z, and H also depends on Z.
Suppose, however, that we wish to use a differently-built H, known as H'. This function will splice in the new H' in one of two ways:
1. transitively, where H' depends on the Z' it was built with, and the new T* also directly depends on this new Z', or
2. intransitively, where the new T* and H' both depend on the original Z.
Since the Spec returned by this splicing function is no longer deployed the same way it was built, any such changes are tracked by setting the build_spec to point to the corresponding dependency from the original Spec.
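A usage sketch under these semantics (package names hypothetical; the transitive flag follows the description above):
```python
from spack.spec import Spec

t = Spec('t').concretized()        # t depends on h and z
h_prime = Spec('h').concretized()  # a differently-built h

# Transitive: the new t also picks up the z that h_prime was built with.
t_new = t.splice(h_prime, transitive=True)

# Intransitive: the new t and h_prime keep depending on the original z.
t_new = t.splice(h_prime, transitive=False)

# The result is concrete; build_spec records how it was actually built.
assert t_new.concrete
```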
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.
This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.
This makes a similar change to spack cd.
If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).
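For example (spec name hypothetical):
```bash
# In an active environment where hdf5 failed to build:
spack -e myenv build-env hdf5 -- bash   # matches the env's concrete hdf5
spack -e myenv cd -b hdf5               # jump to its build directory
```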
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* Made DiHydrogen a required dependency on newer versions of LBANN.
Added an explicit variant for enabling Boost-dependent callbacks.
Updated the separation for embedded Python and the Python front end
code and associated dependencies.
* Bugfix on ROCm include in DiHydrogen
Drops:
* C_INCLUDE_PATH
* CPLUS_INCLUDE_PATH
* LIBRARY_PATH
* INCLUDE
We already decided to use C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, INCLUDE over CPATH here:
https://github.com/spack/spack/pull/14749
However, none of these flags apply to Fortran on Linux. So for consistency it seems better to make the user use -I and -L flags by hand or through pkgconfig.
BlasPP by ECP SLATE will fail to install by default
(`spack install blaspp`) because:
- the default BLAS installation in Spack is OpenBLAS
- BlasPP conflicts with `threads=none` for all recent OpenBLAS releases
OpenBLAS introduced a threadsafe compile option
with 0.3.7+ aka `USE_LOCKING`:
```
# If you want to build a single-threaded OpenBLAS, but expect to call this
# from several concurrent threads in some other program, comment this in for
# thread safety. (This is done automatically for USE_THREAD=1 , and should not
# be necessary when USE_OPENMP=1)
# USE_LOCKING = 1
```
According to tests, with `spack install --test root blaspp`,
this exactly addresses the issues in BlasPP tests.
It also seems to be a good option to set by default for OpenBLAS and
users that do not need this safety net can always disable it.
Solve issues with newer OpenBLAS by requiring
`+locking` over non-default threading options.
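For example, the combination described above can be requested explicitly:
```bash
# BlasPP against a single-threaded but lock-protected OpenBLAS:
spack install --test root blaspp ^openblas+locking threads=none
```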
* Improve error message for inconsistencies in package.py
Sometimes directives refer to variants that do not exist.
Make it such that:
1. The name of the variant
2. The name of the package which is supposed to have
such variant
3. The name of the package making this assumption
are all printed in the error message for easier debugging.
* Add unit tests
* Also removed LBANN CUDA CMake flags that are set by the
version of Hydrogen that is compiled against.
* Updated recipes to use HWLOC 2.3 with ROCm to enable
topology awareness.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* genesis: New package.
* fujitsu-ssl2: fix unit test error
* genesis: Fix for comments and add test method
* genesis: Fix for comments
* genesis: Fix for comments
* libblastrampoline: new package
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
It turns out there are certain cases where having Open MPI use an external hwloc messes up other
applications that also rely on hwloc, but a different version.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* VTK-m: No `pic` variant
A leftover conflict between `shared` and `pic` variants, the
latter is not part of the package anymore, leads to a solver
error with clingo.
This removes the outdated conflict section.
* VTK-m: Kokkos AMD GPU variant changed
Set the minimum C++ standard for LBANN, Hydrogen, and DiHydrogen to
C++17. The minimum C++ standard for Aluminum is C++14. Add new
versions for Aluminum, Hydrogen, and DiHydrogen. Added support for
high performance linkers in LBANN recipe (gold and lld). Added
variants to LBANN for enabling embedded Python support independently
from the Python front end.
* py-fenics-instant: new package for legacy fenics 2016 and 2017 versions
* Update var/spack/repos/builtin/packages/py-fenics-instant/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The signature for configure_args in the template for new
RPackage packages was incorrect (different than what is
defined and used in lib/spack/spack/build_systems/r.py)
See issue #21774
Keep spack.store.store and spack.store.db consistent in unit tests
* Remove calls to monkeypatch for spack.store.store and spack.store.db:
tests that used these called one or the other, which led to
inconsistencies (the tests passed regardless but were fragile as a
result)
* Fixtures making use of monkeypatch with mock_store now use the
updated use_store function, which sets store.store and store.db
consistently
* subprocess_context.TestState now serializes and restores
spack.store.store (without the monkeypatch changes this
would have created inconsistencies)
Since signals are fundamentally racy, we can't bound the amount of time
that the `test_foreground_background_output` test will take to get to
'on'; we can only observe that it transitions to 'on'. So instead of
using an arbitrary limit, just adjust the test to allow either 'on' or
'off' followed by 'on'.
This should eliminate the spurious errors we see in CI.
Follow-up to #17110
### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages, since `CC` was set correctly, nothing broke. But for packages using `Makefiles` that set `CC` based on `which clang`, it was using the system compilers instead of the compiler wrappers. Discovered when working on `py-xgboost@0.90`.
An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.
* add to LD_LIBRARY_PATH so that it finds libimf.so
* amrex: fix handling of CUDA arch (#20786)
* amrex: fix handling of CUDA arch
* amrex: fix style
* amrex: fix bug
* Update var/spack/repos/builtin/packages/amrex/package.py
* Update var/spack/repos/builtin/packages/amrex/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* ecp-data-vis-sdk: Combine the vis and io SDK packages (#20737)
This better enables the collective set to be deployed together, satisfying
each other's dependencies
* r-sf: fix dependency error (#20898)
* improve documentation for Rocm (hip amd builds) (#20812)
* improve documentation
* astyle: Fix makefile for install parameter (#20899)
* llvm-doe: added new package (#20719)
The package contains duplicated code from llvm/package.py,
will supersede solve.
* r-e1071: added v1.7-4 (#20891)
* r-diffusionmap: added v1.2.0 (#20881)
* r-covr: added v3.5.1 (#20868)
* r-class: added v7.3-17 (#20856)
* py-h5py: HDF5_DIR is needed for ~mpi too (#20905)
For the `~mpi` variant, the environment variable `HDF5_DIR` is still required. I moved this command out of the `+mpi` conditional.
* py-hovorod: fix typo on variant name in conflicts directive (#20906)
* fujitsu-fftw: Add new package (#20824)
* pocl: added v1.6 (#20932)
Made version 1.5 or lower conflict with a64fx.
* PCL: add new package (#20933)
* r-rle: new package (#20916)
Common 'base' and 'stats' methods for 'rle' objects, aiming to make it
possible to treat them transparently as vectors.
* r-ellipsis: added v0.3.1 (#20913)
* libconfig: add build dependency on texinfo (#20930)
* r-flexmix: add v2.3-17 (#20924)
* r-fitdistrplus: add v1.1-3 (#20923)
* r-fit-models: add v0.64 (#20922)
* r-fields: add v11.6 (#20921)
* r-fftwtools: add v0.9-9 (#20920)
* r-farver: add v2.0.3 (#20919)
* r-expm: add v0.999-6 (#20918)
* cln: add build dependency on texinfo (#20928)
* r-expint: add v0.1-6 (#20917)
* r-envstats: add v2.4.0 (#20915)
* r-energy: add v1.7-7 (#20914)
* r-ellipse: add v0.4.2 (#20912)
* py-fiscalyear: add v0.3.0 (#20911)
* r-ecp: add v3.1.3 (#20910)
* r-plotmo: add v3.6.0 (#20909)
* Improve gcc detection in llvm. (#20189)
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
* hatchet: updated urls (#20908)
* py-anuga: add new package (#20782)
* libvips: added v8.10.5 (#20902)
* libzmq: add platform conditions to libbsd dependency (#20893)
* r-dtw: add v1.22-3 (#20890)
* r-dt: add v0.17 (#20889)
* r-dosnow: add v1.0.19 (#20888)
* add version 1.0.16 to r-doparallel (#20886)
* add version 1.3.7 to r-domc (#20885)
* add version 0.9-15 to r-diversitree (#20884)
* add version 1.3-3 to r-dismo (#20883)
* add version 0.6.27 to r-digest (#20882)
* add version 1.5 to r-rngtools (#20887)
* add version 1.5.8 to r-dicekriging (#20877)
* add version 1.4.2 to r-httr (#20876)
* add version 1.28 to r-desolve (#20875)
* add version 2.2-5 to r-deoptim (#20874)
* add version 0.2-3 to r-deldir (#20873)
* add version 1.0.0 to r-crul (#20870)
* add version 1.1.0.1 to r-crosstalk (#20869)
* add version 1.0-1 to r-copula (#20867)
* add version 5.0.2 to r-rcppparallel (#20866)
* add version 2.0-1 to r-compositions (#20865)
* add version 0.4.10 to r-rlang (#20796)
* add version 0.3.6 to r-vctrs (#20878)
* amrex: add ROCm support (#20809)
* add version 2.0-0 to r-colorspace (#20864)
* add version 1.3-1 to r-coin (#20863)
* add version 0.19-4 to r-coda (#20862)
* add version 1.3.7 to r-clustergeneration (#20861)
* add version 0.3-58 to r-clue (#20860)
* add version 0.7.1 to r-clipr (#20859)
* add version 2.2.0 to r-cli (#20858)
* add version 0.4-3 to r-classint (#20857)
* add version 0.1.2 to r-globaloptions (#20855)
* add version 2.3-56 to r-chron (#20854)
* add version 0.4.10 to r-checkpoint (#20853)
* add version 2.0.0 to r-checkmate (#20852)
* add version 1.18.1 to r-catools (#20850)
* add version 1.2.2.2 to r-modelmetrics (#20849)
* add version 3.0-4 to r-cardata (#20847)
* add version 1.0.1 to r-caracas (#20846)
* r-lifecycle: new package at v0.2.0 (#20845)
* add version 3.0-10 to r-car (#20844)
* add version 3.4.5 to r-processx (#20843)
* add version 1.5-12.2 to r-cairo (#20842)
* add version 0.2.3 to r-cubist (#20841)
* add version 2.6 to r-rmarkdown (#20838)
* add version 1.2.1 to r-blob (#20819)
* add version 4.0.4 to r-bit (#20818)
* add version 2.4-1 to r-bio3d (#20816)
* add version 0.4.2.3 to r-bibtex (#20815)
* add version 3.1-4 to r-bayesm (#20807)
* add version 1.2.1 to r-backports (#20806)
* add version 2.0.3 to r-argparse (#20805)
* add version 5.4-1 to r-ape (#20804)
* add version 0.8-18 to r-amap (#20803)
* r-pixmap: added new package (#20795)
* zoltan: source code location change (#20787)
* refactor path logic
* added some paths to make compilers and libs discoverable
* add to LD_LIBRARY_PATH so that it finds libimf.so
and cleanup PEP8
* refactor path logic
* adding paths to LIBRARY_PATH so compiler wrappers will find -lmpi
* added vals for CC=icx, CXX=icpx, FC=ifx to generated module
* back out changes to intel-oneapi-mpi, save for separate PR
* Update var/spack/repos/builtin/packages/intel-oneapi-compilers/package.py
path is joined in _ld_library_path()
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* set absolute paths to icx,icpx,ifx
* dang close parenthesis
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: mic84 <mrosso@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
Co-authored-by: darmac <xiaojun2@hisilicon.com>
Co-authored-by: Danny Taller <66029857+dtaller@users.noreply.github.com>
Co-authored-by: Tomoyasu Nojiri <68096132+t-nojiri@users.noreply.github.com>
Co-authored-by: Shintaro Iwasaki <siwasaki@anl.gov>
Co-authored-by: Glenn Johnson <glenn-johnson@uiowa.edu>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
Co-authored-by: Henrique Mendonça <henrique@users.noreply.github.com>
Co-authored-by: h-denpo <57649496+h-denpo@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Thomas Green <tomgreen66@hotmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
Co-authored-by: Abhinav Bhatele <bhatele@cs.umd.edu>
Co-authored-by: a-saitoh-fj <63334055+a-saitoh-fj@users.noreply.github.com>
Co-authored-by: QuellynSnead <quellyn@lanl.gov>
* sbang pushed back to callers;
star moved to util.lang
* updated unit test
* sbang test moved; local tests pass
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
fixes #20736
Before this one line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.
This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.
fixes #20611
The conflict was triggered by an invalid value of the
'scheduler' variant. This causes Spack to error when libyogrt
facts are validated by the ASP-based concretizer.
At some point in the past, the skip_patch argument was removed
from the call to package.do_install(); this broke the --skip-patch
flag on the dev-build command.
Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).
This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).
fixes #20679
In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.
Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.
- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`
Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
inconsistent results from clingo, due to ambiguous semantics like:
version_constraint_satisfies("mpi", ":1", ":3")
version_constraint_satisfies("mpi", ":3", ":1")
To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.
This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.
The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.
To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program. i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constraints are only handled on versions -- we don't
handle variants or anything else yet, but the key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.
One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:
:- virtual_node(Virtual),
version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
not version_constraint_satisfies(Virtual, V1, V2).
This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.
We can investigate adding synthetic versions for virtuals in the future,
to speed this up.
This code in `SpecBuilder.build_specs()`, introduced in #20203, can loop
seemingly interminably for very large specs:
```python
set([spec.root for spec in self._specs.values()])
```
It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.
The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:
```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```
We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.
Environment yaml files should not have default values written to them.
To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).
Includes regression test.
This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:
* An inheritable IntelOneApiPackage which knows how to invoke the
installation script based on which components are requested
* For components which include headers/libraries, an inheritable
IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
icc/ifortran but these are not currently detected in this PR
We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:
```
node(Package) :- attr("node", Package).
attr("node", Package) :- node(Package).
version(Package, Version) :- attr("version", Package, Version).
attr("version", Package, Version) :- version(Package, Version).
```
We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.
This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.
The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.
This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.
There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.
`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid "info: atom does not
occur in any rule head" warnings.
This PR addresses a number of issues related to compiler bootstrapping.
Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install because they think that not all of
their dependencies are installed. This PR collects the dependents and sets
them on compiler tasks.
2. Allow bootstrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec. Allow them to build with less
aggressive target optimization settings.
3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly. Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.
This PR also:
- adds testing
- improves concretizer handling of target ranges
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`. The rules look like this:
```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1 :- version_satisfies("python", "2.6:").
```
So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.
We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.
We can replace rules like the ones above with facts like this:
```
version_satisfies("python", "2.6:", "2.4")
```
And ground them in `concretize.lp` with rules like this:
```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
:- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
:- version(Package, Version), version_satisfies(Package, Constraint, Version).
```
The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/3` from `version/2`.
This form is also safe for grounding -- if we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.
- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed
I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.
So far, the command line is faster than running through Python, but I'm
working on fixing that. I found that if I do this:
```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp") # code from spack solve --show asp hdf5
control.load("display.lp")
control.ground([("base", [])])
control.solve(...)
```
It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.
So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.
Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.
This modification ensures that open-ended variants (variants accepting any string
or any integer) are projected to the finite set of values that are relevant for this
concretization.
Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
lists at the very end.
The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.
- [x] move ad-hoc value handling code into spec_clauses so we do it in
one place for CLI and packages
- [x] move handling of `variant_possible_value`, etc. into
`concretize.lp`, where we can automatically infer variant existence
more concisely.
- [x] simplify/clarify some of the code for variants in `spec_clauses()`
fixes #20055
Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of the spec doesn't match the "real" version of the
compiler.
This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.
fixes #20040
Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.
refers #20040
Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they were default values.
This prevents the solver from avoiding expected configurations
just because they contain directives like:
depends_on('pkg+foo')
and `+foo` is not the default variant value for pkg.
fixes #19981
This commit adds support for target ranges in directives,
for instance:
conflicts('+foo', when='target=x86_64:,aarch64:')
If any target in a spec body is not a known target the
following clause will be emitted:
node_target_satisfies(Package, TargetConstraint)
when traversing the spec and a definition of
the clause will then be printed at the end similarly
to what is done for package and compiler versions.
fixes #20019
Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:
conflicts('%gcc@X.Y:', when='@:A.B')
where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.
refers #20079
Added docstrings to 'concretize' and 'concretized' to
document the format for tests.
Added tests for the activation of test dependencies.
refers #20040
This modification emits rules like:
provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").
for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.
The dependencies needed a little clean up as several dependencies are
only needed for the +X variant. This PR consolidates all of the
dependencies that actually require +X and explicitly disables them when
~X to prevent accidentally picking up system libraries.
- modified the description of the +X variant
- arranges dependencies to group them
- added missing dependency on xz
- removed unneeded dependencies
- freetype
- glib
- set dependencies when +X
- cairo
- jpeg
- libpng
- libtiff
- tcl/tk
- R uses tcl/tk together, so only tk needs to be depended on, and only
when +X
- moved tcl/tk resources to with/without-x test
- added explicit with/without settings for
- cairo
- jpeglib
- libpng
- libtiff
- tcltk
The fixture was introduced in #19690, maybe accidentally.
It's not used in unit tests, and though it should be
mutable it seems to be an exact copy of its immutable version.
Before this change, in pipeline environments where runners do not have access
to persistent shared file-system storage, the only way to pass buildcaches to
dependents in later stages was by using the "enable-artifacts-buildcache" flag
in the gitlab-ci section of the spack.yaml. This change supports a second
mechanism, named "temporary-storage-url-prefix", which can be provided instead
of the "enable-artifacts-buildcache" feature, but the two cannot be used at the
same time. If this prefix is provided (only "file://" and "s3://" urls are
supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url
for a mirror where pipeline jobs will write buildcache entries for use by jobs
in subsequent stages. If this prefix is provided, a cleanup job will be
generated to run after all the rebuild jobs have finished that will delete the
contents of the temporary mirror. To support this behavior a new mirror
sub-command has been added: "spack mirror destroy" which can take either a
mirror name or url.
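A hypothetical invocation (exact option names may differ from this sketch):
```bash
spack mirror destroy --mirror-name my-temp-mirror
spack mirror destroy --mirror-url s3://my-bucket/pipeline-12345
```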
This change also fixes a bug in the generation of the "needs" list for each job. Each
job's "needs" list is supposed to only contain direct dependencies for scheduling
purposes, unless "enable-artifacts-buildcache" is specified. Only in that case
are the needs lists supposed to contain all transitive dependencies. This
change fixes a bug that caused the needs lists to always contain all transitive
dependencies, regardless of whether or not "enable-artifacts-buildcache" was
specified.
* py-typing: new version, avoid issues with newer versions of python
https://pypi.org/project/typing/
"For package maintainers, it is preferred to use
typing;python_version<"3.5" if your package requires it to support
earlier Python versions."
* update conflict version / more message detail
* change the depends_on, leave a comment suggesting correct usage
The actual, documented minimum version of the cfitsio dependency,
v3.181, is now neither available for (easy) download from NASA, nor as
a Spack package. No upper bound on version number exists (at this time).
Pipelines: DAG pruning
During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors. By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline. To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument. To speed up the pipeline generation process, pass the --check-index-only argument. This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors. The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.
This change removes the final-stage-rebuild-index block from gitlab-ci section of spack.yaml. Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set. Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
Add `manual_download = True` to packages that need to do manual
downloads but do not have the `manual_download` attribute set. This
provides a message when installing these packages rather than a generic
fetch error.
Add versions 2020.12 and 2021.01. The viewer and trace viewer are now
integrated into a single program and one tar ball. Now available on
arm/aarch64 and now uses Java 11.
Update some things in hpctoolkit to prepare for a 2021.02.x release:
1. allow binutils to be built with +nls.
2. require libmonitor to be built with +dlopen.
3. allow rocm in more than just develop branch.
4. remove some conflicting setenv's in hpctoolkit module.
The SPACK_PYTHON environment variable can be set to a python interpreter to be
used by the spack command. This allows the spack command itself to use a
consistent and separate interpreter from whatever python might be used for package
building.
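For example:
```bash
# Run the spack command itself under a specific interpreter,
# independent of any python used to build packages:
export SPACK_PYTHON=/usr/bin/python3
spack spec zlib
```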
* add new flag when compiling mumps with %gcc@10.
* Fix style
* Try to fix formatting
* Use flag_handler approach suggested by @michaelkuhn
in the PR review.
* Delete former approach
* Another style issue
* Add another space
* More fixes
We still need mesa18 for some of our builds.
Those builds require python@2; normal mesa only works with
python@3.
* Remove the deprecation tag
* Add myself as a maintainer: I volunteer to help with this
package for the time being.
* There is only one version, no need to prefer it.
Modifications:
- Make use of SpackCommand objects wherever possible
- Deduplicated code when possible
- Moved cleaning of mirrors to fixtures
- Ensure mock configuration has a clear initialization order
* fixed install with ver 3 and python 3.0
* replaced @3 with @2.999
* [py-pyspark] added version requirements for py-py4j
* [py-pyspark] all versions require at least version 2.7 of python
* [py-pyspark] fixed comma syntax
Co-authored-by: Sid Pendelberry <sid@rit.edu>
The GROMACS package embeds references to its build tool chain.
Use the Spack utilities to make sure these references are correct
outside of the isolated Spack build environment.
`query()` calls `datetime.datetime.fromtimestamp` regardless of whether a
date query is being done. Guard this with an if statement to avoid the
unnecessary work.
Constructing a spec from a name instead of setting name directly forces
from_node_dict to call Spec.parse(), which is slow. Avoid this by using a
zero-arg constructor and setting name directly.
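A sketch of the difference (simplified; 'zlib' stands in for any package name):
```python
from spack.spec import Spec

# Slow: the constructor parses the string, invoking Spec.parse().
spec = Spec('zlib')

# Fast path for from_node_dict: construct empty, then set the name.
spec = Spec()
spec.name = 'zlib'
```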
cmake was added as a runtime dependency to meson in #20449. This
introduces an unnecessary implicit cmake dependency, which increases
build time for meson considerably. cmake is only one of many methods for
finding dependencies (pkg-config, qmake etc.), which are also not
runtime dependencies of meson. Add cmake as a build dependency to mesa
instead.
This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.
This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated
The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.
Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.
Make a few adjustment to MockPackageMultiRepo, since it was
stating in the docstring that it was supposed to mock
spack.repo.Repo and was instead mocking spack.repo.RepoPath.
Some compilers, such as the NV compilers, do not recognize -isystem
dir when specified without a space.
Works: -isystem ../include
Does not work: -isystem../include
This PR updates the compiler wrapper to include the space with -isystem.
Environment views fail when the tmpdir used for view generation is
on a separate mount from the install_tree, because the files cannot
be symlinked between the two. The fix is to use an alternative
tmpdir located alongside the view.
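A minimal sketch of the idea (paths hypothetical):
```python
import os
import tempfile

# Hypothetical final view location:
view_path = os.path.join(tempfile.gettempdir(), 'env', 'view')
os.makedirs(os.path.dirname(view_path), exist_ok=True)

# Stage next to the final location so both live on the same mount,
# keeping symlinks and the final rename on a single filesystem:
tmp_view = tempfile.mkdtemp(dir=os.path.dirname(view_path),
                            prefix='.tmp-view-')
```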
* [py-moviepy] created template
* [py-moviepy] added dependencies
* [py-moviepy] removed fixmes, added homepage and description
* [py-moviepy] updated to pypi and updated checksum
* [py-moviepy] added setuptools dependency
* [py-moviepy] more specific version limit
* [py-moviepy] added checksum for version 1.0.1
* [py-moviepy] numpy restriction not necessary here
* 3DTK: add new package
* Add missing opencv variants
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
* Fix cmake version req, add eigen dep
* Prefer trunk version
* Tell 3dtk where to find eigen
* Fix installation
* Fix installation
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
This PR fixes the case where groff fails to build if the spack install
path is really long. There are a couple of perl scripts that get built,
and used, during the build phase that will fail when the perl
interpreter line is too long. Filtering the lines will not work because
the files do not exist after the configure phase and patching after the
build phase is too late. This PR runs the scripts explicitly with the
spack perl via the $(PERL) variable in the call to the script.
* Procedure to deprecate old versions of software
* Add documentation
* Fix bug in logic
* Update tab completion
* Deprecate legacy packages
* Deprecate old mxnet as well
* More explicit docs
* py-dvc: new package
* Update var/spack/repos/builtin/packages/py-dvc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-dvc: add version dependency for py-networkx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Clarify relaxed double precision option
This is only intended for use on the Fujitsu PRIMEHPC platform
* Fix typo
* Shorten line to keep linter happy
Added conflict for macsio@1.1~mpi after investigating source code. As of
1.1 tag macsio does not properly guard out MPI commands. This is
verified as corrected in @develop
* mxnet: convert to CMakePackage
* Package isn't installed yet, can't find libs
* Fix bug with GCC 8+ and CUDA 10 on PowerPC
* Add space
* Add patch to fix cmake cuda flags
* Space no longer needed
* Add patch to fix OpenBLAS linking
* Add missing CMake flag
* Fix env set, default to Distribution
* Add new version, patch
* added py-python-benedict recipe
* Update var/spack/repos/builtin/packages/py-python-benedict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
+ Provide optional variant `pythontools` (default False) that adds a run-time dependency on
`py-matplotlib`.
+ Latest versions require `cmake@3.18:` to support cuda features.
+ Enable a cmake option that forcibly disables qt support. Previously, draco would enable qt
support if it was available in the local build environment (outside of spack).
* graphviz: Remove ghostscript requirement when ~ghostscript
* Add doc variant and patch for 2.44.1
* Patch does not apply
* Update graphviz versions, using archives rather than git hash
* Complete implementation of doc variant
* Fix typo
This commit adds an option to the `external find`
command that allows it to search by tags. In this
way group of executables with common purposes can
be grouped under a single name and a simple command
can be used to detect all of them.
As an example introduce the 'build-tools' tag to
search for common development tools on a system
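For example, using the tag introduced above:
```bash
# Detect all executables tagged as common development tools:
spack external find --tag build-tools
```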
The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.
Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.
* py-dictdiffer: fix offline dependencies
* Update var/spack/repos/builtin/packages/py-dictdiffer/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-flatten-dict: new recipe
* Update var/spack/repos/builtin/packages/py-flatten-dict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-flatten-dict: fix dependencies
* py-flatten-dict: fix dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo
package builds in Release mode
uses GCC options to link libstdc++ and libgcc statically
* clingo-bootstrap: apple-clang options to bootstrap statically on darwin
* clingo: fix the path of the Python interpreter
In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.
This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake, as sketched after this list.
* clingo: the commit for "spack" version has been updated.
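A sketch of the hint (standard package idiom; the exact code in the recipe may differ):
```python
def cmake_args(self):
    # Point CMake at the python node of this spec rather than the
    # first interpreter it happens to find in a shared prefix:
    python = self.spec['python'].command.path
    return ['-DPython_EXECUTABLE={0}'.format(python)]
```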
- add variants for build targets, language bindings, backends
- ensure selected variants are compatible with zfp version
- point to GitHub (not LLNL) tar balls
- add dependencies
- update link to homepage
- add maintainers
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Patch provided by @Billae
Avoid the following error:
File "/home/danlipsa/projects/spack/lib/spack/llnl/util/tty/log.py", line 768, in _writer_daemon
line = _retry(in_pipe.readline)()
File "/home/danlipsa/projects/spack/lib/spack/llnl/util/tty/log.py", line 830, in wrapped
return function(*args, **kwargs)
File "/usr/lib/python3.8/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x97 in position 220: invalid start byte
This PR adds:
1. A patch that fixes a bug in version 2.70
(will be fixed upstream in the next release: https://savannah.gnu.org/support/?110396).
2. A fix for the way we patch the shebang in bin/autom4te.in.
For 2, we need to keep the original modification timestamp of the file.
Otherwise, we either get an empty man page for autom4te (versions 2.69 and before)
or a failure at the build time (versions 2.70 and after).
The difference has to do with the update of the missing script: https://git.savannah.gnu.org/cgit/automake.git/commit/lib/missing?id=a22717dffe37f30ef2ad2c355b68c9b3b5e4b8c7
It will take time until developers of Autotools-based packages adjust their scripts
to the new version, therefore, 2.69 is marked as preferred.
* New interface reconstruction package
* forgot to put in CMake option for Jali
* cleanup whitespace
* fix lines with more than 79 chars
* more long line cleanup
* fix typo WONTON_ENABLE_Kokkos ---> TANGRAM_ENABLE_Kokkos
* New interface reconstruction package
* forgot to put in CMake option for Jali
* cleanup whitespace
* fix lines with more than 79 chars
* more long line cleanup
* fix typo WONTON_ENABLE_Kokkos ---> TANGRAM_ENABLE_Kokkos
* fix bugs in CMake section
* more compact cmake block
* update hash for 1.2.10 and add 1.2.11
* update recipe for Portage 3.0.0
* removing old versions - they won't build with the new recipe and the url specification doesn't work for them
* update version to 3.3.6
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* added pytest-benchmark recipe
* Update var/spack/repos/builtin/packages/py-pytest-benchmark/package.py
Added Python2 dependence.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-pytest-cpp
* Update var/spack/repos/builtin/packages/py-pytest-cpp/package.py
package requires !=5.4.0, so use @:5.3.999
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-pytest-timeout
* Update var/spack/repos/builtin/packages/py-pytest-timeout/package.py
Added Python2.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-openmc
* Update var/spack/repos/builtin/packages/py-openmc/package.py
specify branch when using branch names for versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
use run after fixture to install openmc lib
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
Simplify copying openmc library to py-openmc prefix using install
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
NumPy should be 1.9+
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix missing paren
* Update var/spack/repos/builtin/packages/py-openmc/package.py
fixed parens
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
use v0.11.0 in URL
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Sometimes we need to patch a file that is a dependency for some other
automatically generated file that comes in a release tarball. As a
result, make tries to regenerate the dependent file using additional
tools (e.g. help2man), which would not be needed otherwise.
In some cases, it's preferable to avoid that (e.g. see #21255). A way
to do that is to save the modification timestamps before patching and
restoring them afterwards. This PR introduces a context wrapper that
does that.
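A hedged sketch of such a context wrapper (the helper name and details here are illustrative, not necessarily Spack's exact implementation):
```python
import contextlib
import os

@contextlib.contextmanager
def keep_modification_time(*filenames):
    # Save each file's modification time before the body runs...
    mtimes = {f: os.stat(f).st_mtime for f in filenames if os.path.exists(f)}
    try:
        yield
    finally:
        # ...and restore it afterwards, so make does not try to regenerate
        # files that depend on the patched one.
        for f, mtime in mtimes.items():
            if os.path.exists(f):
                os.utime(f, (os.stat(f).st_atime, mtime))
```
Usage would then look like `with keep_modification_time('bin/autom4te.in'): ...` around the patching code.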
Python extensions use CC and LDSHARED from the sysconfig module to
build. When Spack installs Python, it replaces the Spack compiler
wrappers in these values with the underlying compilers (since these
wrappers are not useful outside of the context of running Spack).
In order to use the Spack compiler wrappers when building Python
extensions with Spack, Spack sets the LDSHARED environment variable
when running `Python.setup_py` (which overrides sysconfig). However,
many Python extensions use an alternative method to build (namely
PythonPackage.setup_py), which meant that LDSHARED was not set (and
RPATHs were not inserted for dependencies).
This commit makes the following changes:
* Sets LDSHARED in the environment: this applies to all commands
executed during the build, rather than for a single command
invocation
* Updates the logic to set LDSHARED: this replaces the compiler
executable in LDSHARED with the Spack compiler wrapper. This
means that for some externally-built instances of Python,
Spack will now switch to using the Spack wrappers when building
extensions. The behavior is expected to be the same for Spack-
built instances of Python.
* Performs similar modifications for LDCXXSHARED (to ensure RPATHs
are included for C++ codes)
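A simplified sketch of the substitution described above, assuming the first token of `LDSHARED` is the compiler executable (names here are illustrative):
```python
import sysconfig

def wrapped_ldshared(spack_cc):
    # sysconfig reports something like "gcc -pthread -shared"; swap the
    # leading compiler for the Spack wrapper and keep the flags.
    ldshared = sysconfig.get_config_var("LDSHARED") or ""
    parts = ldshared.split()
    return " ".join([spack_cc] + parts[1:]) if parts else spack_cc
```
Exporting the result in the environment (rather than per command) is what makes it apply to every step of the build.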
On ppc64le and aarch64, Spack tries to execute any "config.guess" and
"config.sub" scripts it finds in the source package.
However, in the libsodium tarball, these files are present but not
executable. This causes the following error when trying to install
libsodium with spack:
Error: RuntimeError: Failed to find suitable substitutes for config.sub, config.guess
Fix this by chmod-ing the scripts in the patch() function of libsodium.
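A plausible shape for that `patch()` method; the script locations inside the tarball are assumed here:
```python
import os
import stat

def patch(self):
    # Make the bundled scripts executable so Spack can run or replace them.
    for script in ("build-aux/config.guess", "build-aux/config.sub"):
        if os.path.exists(script):
            mode = os.stat(script).st_mode
            os.chmod(script, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```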
* recipe: add version 6.1.1 for pytest
add recipe for new dependency py-iniconfig
* fix: 'SyntaxError: invalid syntax' during unittests
* requested changes on the pull request done
* requested changes on dep for py-pytest
* change constraint on python for importlib-metadata
* undo change on py-importlib-metadata as requested
* bug fix
* bug fix on py-wcwidth
* fix as requested
* forgot @ in when param
* forgot a colon
* add new versions py-pytest and py-py
* fix setuptools* version
* add rule for more-itertools
* [py-intel-openmp] created template
* [py-intel-openmp] is wheel
* [py-intel-openmp] fixed version for linux
* [py-intel-openmp] removed fixmes, added homepage and description
* [py-intel-openmp] added macos support
* [py-intel-openmp] style fix
* petsc: add a +mkl-pardiso variant
mkl_pardiso solver is distributed with intel-mkl
* petsc: depend on mkl instead of intel-mkl
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The first of my two upstream patches to mypy landed in the 0.800 tag that was released this morning, which lets us use module and package parameters with a .mypy.ini file that has a files key. This uses those parameters to check all of spack in style, but leaves the packages out for now since they are still very, very broken. If no package has been modified, the packages are not checked, but if one has they are. Includes some fixes for the log tests since they were not type checking.
Should also fix all failures related to "duplicate module named package" errors.
Hopefully the next drop of mypy will include my other patch so we can just specify the modules and packages in the config file to begin with, but for now we'll have to live with a bare mypy doing a check of the libs but not the packages.
* use module and package flags to check packages properly
* stop checking package files, use package flag for libs
The packages are not type checkable yet, need to finish out another PR
before they can be. The previous commit also didn't check the libraries
properly, this one does.
Add version 4.12.6, 5.0.3
I think the `preferred` version was there to keep version 4.
But that's why we have Spack: people can install
whatever version they want.
And root has a properly versioned dependency.
* mumps: Fix for problematic src/makefile patch (#20590)
Minor change in src/Makefile between 5.2.0 and 5.3.3 causing patch to
break. Split into 2 patchfiles
* mumps: Additional patch for fixing #20590
This is to fix an issue wherein the build fails on Ubuntu due to undefined
symbols, despite symbols being included in other libraries referenced
on the compilation line. I believe the issue is that the inclusion
of libsmumps.so was (due to my original patch) causing
libmumps_common.so to be automatically loaded, but since libpords.so
was not also required, the error was occurring. I have added libpords.so
along with libmumps_common.so to be explicit dependencies of
libsmumps.so, etc., which seems to resolve the issue.
* ArrayFire: Add version 3.7.2.
* ArrayFire: Allow using MKL as the FFTW provider.
* ArrayFire: Ensure the libraries are properly found.
The required backend(s) can be specified in the library query.
* openssl: remove preprocessor flags incompatible with NVIDIA HPC SDK
* Update var/spack/repos/builtin/packages/openssl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Scott McMillan <smcmillan@nvidia.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-statsmodels] added version 0.12.1 and updated dependencies accordingly
* [py-statsmodels] added python requirements for new version and fixed formatting for readability
* added m4 dep to PVM recipe
* added libtirpc dep to PVM recipe
* decode str or bytestr string to unicode
* Resolved comments from @adamjstewart on setup_build_environment
* When the SCR spec specifies a resource_manager=SLURM or LSF flag, propagate the spec through to
the libyogrt scheduler=slurm or lsf
* Use libyogrt default scheduler option when the SCR spec does not specify LSF or SLURM
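In directive form, the propagation described above might look like this sketch (variant spellings taken from the text; exact syntax illustrative):
```python
# Forward SCR's resource_manager variant to libyogrt's scheduler variant;
# otherwise fall back to libyogrt's default scheduler.
depends_on("libyogrt scheduler=slurm", when="resource_manager=SLURM")
depends_on("libyogrt scheduler=lsf", when="resource_manager=LSF")
depends_on("libyogrt")
```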
* updated relion for new versions
* Switched to checksum versions
* Enabled spack tracking for MKL and TBB when CPU optimizations are enabled
* Added variants to control MKL FFT and Ppatent feature
* Replaced tags with sha256 for older versions and switched to virtual packages
* py-funcy: new recipe
* Update var/spack/repos/builtin/packages/py-funcy/package.py
add build and run python dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* sbang pushed back to callers;
star moved to util.lang
* updated unit test
* sbang test moved; local tests pass
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
* fixing outdated metis link
* updated url to the official website since the previous url was a GitHub repo that is an unofficial mirror that only contains the latest version
* py-dictdiffer: new recipe
* Update var/spack/repos/builtin/packages/py-dictdiffer/package.py
add correct setuptools dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update NEURON simulator package
- update recipe to support autoconf as well as cmake
- new versions >=7.8 support cmake
- remove old variants
- added patch for latest bug fix release 7.8.2
Co-authored-by: Kumbhar Pramod Shivaji <kumbhar@bbpv1.epfl.ch>
Co-authored-by: Kumbhar Pramod Shivaji <kumbhar@bb-c02vf1h0hv2r.epfl.ch>
* NAMD: FIX build +cuda
Hi,
If I try to compile NAMD with CUDA support, it fails because it cannot find the file "{self.arch}.cuda", which is under the "arch" folder.
* NAMD: FIX mpi ~smp
Fix `spack install namd ^charmpp backend=mpi ~smp`
* ssht: New version 1.3.4
ssht changed its configuration mechanism from "home-grown" to "cmake". The previously current version 1.2b1 (a beta release) is thus unfortunately no longer available.
* ssht: Don't set build type
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Don't use CUDA for hipblas
* old versions use TRY_CUDA
* Update var/spack/repos/builtin/packages/hipblas/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add version 2.2-2 to r-gwmodel
* Update var/spack/repos/builtin/packages/r-gwmodel/package.py
Fix comma, space issue.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add version 0.3.17 to r-inline
* Drop R version constraint
A really old version of R was specified in the 0.3.14 and 0.3.15
versions of r-inline. This constraint was dropped in the 0.3.17 version.
Drop it from the spack recipe as well.
* vasp: fix build with gfortran 10
Avoid Error: Type mismatch between actual argument at (1) and actual argument at (2)
* vasp: add version 6.1.1
* vasp 6: allow building without CUDA
* Adding PiP recipe
* pip@1 recipe (it seems working)
* change install dir hierarchy
* installing PiP man pages
* add pip-glibc & pip-gdb
* fix configure option designations, fix dependency types
* fix dependency type of pip
* use AutotoolsPackage in pip recipe
* add patch for pip-glibc & pip-gdb to enable 'disable-werror'
* change glibc install directory
* add linux distro check to pip-gdb
* create process-in-process package
* use flag_handler and join_path
* add gcc version constraint, change install-test to check-installed
* fix gcc version designations on conflicts()
* add constraint of target cpu, fix flake8 warnings
* add version constraint to resource()
* Some fixes to adapt the current version
not to execute 'piplnlibs'
change documentation install command
* Update
new branch name of PiP-gdb
adapting PiP-Testsuite
* update pip-gdb github urls
* The very first commit of Process-in-Process (PiP)
details can be found at https://github.com/RIKEN-SysSoft/PiP
* Fix comment style issues
* New Package: Process-in-Process (PiP) -- 2nd trial
* fix style issue
* change inline comments style (required to have two spaces)
Co-authored-by: Daiki Matsunaga <daikim@axe.bz>
Imagemagick-7.0.8 needs to link against libltdl. Otherwise, the build will fail with:
```
2 errors found in build log:
503 checking for libltdl...
504 checking ltdl.h usability... no
505 checking ltdl.h presence... no
506 checking for ltdl.h... no
507 checking for lt_dlinit in -lltdl... no
508 checking if libltdl package is complete... no
>> 509 configure: error: in `/tmp/gpjohnsn/spack-stage/spack-stage-imagemagick-7.0.8-7-4y44gaklhhciiwjzhfpxjfwdj5q
ltjp3/spack-src':
>> 510 configure: error: libltdl is required for modules and OpenCL builds
511 See `config.log' for more details
```
* add version 3.8.2 to r-gtools
* Improve formatting of description
In case the list gets formatted as a non-list:
- added semicolons to end of list items
- replaced dashes with [#]
* add version 1.30 to r-knitr
* Fix version constraints
- r-digest
- r-formatr
The version constraints on those packages should actually be in the `when`
clause.
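For illustration, the fix moves the bound from the dependency spec to the `when` clause; the version numbers below are hypothetical:
```python
# Before: constrains which r-digest versions are acceptable.
# depends_on("r-digest@1.12:", type=("build", "run"))

# After: states which r-knitr versions need r-digest at all.
depends_on("r-digest", when="@1.30:", type=("build", "run"))
```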
'date' is a C++ header library offering extensive date and time
functionality for the C++11, C++14 and C++17 standards written by Howard
Hinnant and released under the MIT license. A slightly modified version
has been accepted (along with 'tz.h') as part of C++20. This package
regroups all header files from the upstream repository by Howard Hinnant
so that other R packages can use them in their C++ code. At present, few
of the types have explicit 'Rcpp' wrapper though these may be added as
needed.
Designed to ease the application and comparison of multiple hypothesis
testing procedures for FWER, gFWER, FDR and FDX. Methods are
standardized and usable by the accompanying 'mutossGUI'.
Utility functions that enhance the 'parallel' package and support the
built-in parallel backends of the 'future' package. For example,
availableCores() gives the number of CPU cores available to your R
process as given by the operating system, 'cgroups' and Linux
containers, R options, and environment variables, including those set by
job schedulers on high-performance compute clusters. If none is set, it
will fall back to parallel::detectCores(). Another example is
makeClusterPSOCK(), which is backward compatible with
parallel::makePSOCKcluster() while doing a better job in setting up
remote cluster workers without the need for configuring the firewall to
do port-forwarding to your local computer.
Contains third-party map tile provider information from 'Leaflet.js',
<https://github.com/leaflet-extras/leaflet-providers>, to be used with
the 'leaflet' R package. Additionally, 'leaflet.providers' enables users
to retrieve up-to-date provider information between package updates.
Provides a header only, C++11 interface to R's C interface. Compared to
other approaches 'cpp11' strives to be safe against long jumps from the
C API as well as C++ exceptions, conform to normal R function semantics
and supports interaction with 'ALTREP' vectors.
Query, set, delete credentials from the 'git' credential store. Manage
'GitHub' tokens and other 'git' credentials. This package is to be used
by other packages that need to authenticate to 'GitHub' and/or other
'git' repositories.
Importance sampling from the truncated multivariate normal using the GHK
(Geweke-Hajivassiliou-Keane) simulator. Unlike Gibbs sampling which can
get stuck in one truncation sub-region depending on initial values, this
package allows truncation based on disjoint regions that are created by
truncation of absolute values. The GHK algorithm uses simple Cholesky
transformation followed by recursive simulation of univariate truncated
normals hence there are also no convergence issues. Importance sample is
returned along with sampling weights, based on which, one can calculate
integrals over truncated regions for multivariate normals.
This release also includes the HDF5 VOL plugin, so I've added an additional
function to ensure the HDF5_PLUGIN_PATH env var gets updated with the adios
install prefix.
* intel-xed: add version 12.0.1
Rework the version numbers for intel-xed, now that xed has actual
releases and tags. Add releases 11.2.0 and 12.0.1. Rename 2019.03.01
to 10.2019.03 as a legacy version that fits in the new order.
Add variant +pic to compile libxed.a with PIC code so that it can be
linked into another shared library.
Add conflict for aarch64.
Add mwkrentel as maintainer.
* py-pyfiglet: new recipe
* Update var/spack/repos/builtin/packages/py-pyfiglet/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pyfiglet: use pypi url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
fixes #20736
Before this one-line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.
This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.
fixes #20611
The conflict was triggered by an invalid value of the
'scheduler' variant. This causes Spack to error when libyogrt
facts are validated by the ASP-based concretizer.
At some point in the past, the skip_patch argument was removed
from the call to package.do_install(); this broke the --skip-patch
flag on the dev-build command.
There are two issues with hip: it tries to autodetect the patch
version number from git (when installed), but it does not check whether
it is even inside a git repo. The result is we end up with a shared lib
with a trailing dash in the library suffix: `libamd64.so.x.y.z-`, which
confuses GCC. The patch tries to check if the `.git` folder exists, and
if it does not, it handles version numbering the same as when git was
not installed previously.
* opencl-c-headers: add new version 2020.12.18
* opencl-clhpp: add new version 2.0.13
* opencl-headers: now supports OpenCL 3.0 with new versions of opencl-c-headers and opencl-clhpp
* ocl-icd: add new version 2.2.14; it can now provide OpenCL 3.0
PaRSEC: the Parallel Runtime Scheduler and Execution Controller for micro-tasks on distributed heterogeneous systems.
Signed-off-by: Aurelien Bouteiller <bouteill@icl.utk.edu>
* py-tensorflow: 2.4.0 and dependency updates
* minor version updates
* fix numpy dependency
* dependency rework: compatible release issues, start to clarify cuda versions
* --incompatible_no_support_tools_in_action_inputs was removed in bazel 3.6
* adjustment to versions of cuda dependency, also make sure that
patches/filters still apply to certain release trains.
* python 3.8 and tf < 2.2 have issues
* missed py-grpcio version bump
Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).
* eospac: add version 6.4.2beta
* eospac: clarify EOSPAC "beta" versions
Compared to 6.4.1, EOSPAC 6.4.2beta contains only one change, a fix
for an inability to read some SESAME files in ASCII format. From the
release announcement,
EOSPAC 6.4.2beta has been released for general use as the latest
(i.e., eospac6-latest) versions. This is a small patch to the
previously-released version 6.4.1, which was requested by an
affected user.
But the "beta" label can cause confusion, especially when a beta
version is the new preferred version, as is the case here. As
suggested by reviewers, add a comment clarifying EOSPAC's use of
"beta".
This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).
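A hedged sketch of what such a setup looks like in a Spack recipe (the directory layout shown is illustrative, not oneAPI's actual tree):
```python
def setup_run_environment(self, env):
    # Expose the oneAPI compiler binaries and libraries through the
    # generated module file so 'icx' is on PATH after 'module load'.
    env.prepend_path("PATH", self.prefix.compiler.bin)
    env.prepend_path("LD_LIBRARY_PATH", self.prefix.compiler.lib)
```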
The C-Library for the current compiler should already be used by the compiler. So there is no point in returning any libs for this package.
Without this patch: if one uses this as an external package (as intended), this can inject system library paths into the build process at the wrong place.
fixes #20679
In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.
This adds a -i option to "spack python" which allows use of the
IPython interpreter; it can be used with "spack python -i ipython".
This assumes it is available in the Python instance used to run
Spack (i.e. that you can "import IPython").
* Update recipe for AOMP.
Reduced repetition with version hashes.
Expanded dependency versioning.
Reduced repetition with cmake args.
Added version 3.10.0
* Update dependency versions and remove unneeded quotes.
* Update var/spack/repos/builtin/packages/aomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update of Eccodes to 2.19.1
* PEP8
* PEP8
* PEP8-whitespace
* Update var/spack/repos/builtin/packages/eccodes/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Michael Blaschek <michael.blaschek@univie.ac.at>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.
- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`
* OpenMPI: Depends on hwloc & libevent
Both hwloc & libevent are required dependencies of Open MPI.
While they are also shipped internally, newer releases (>=4.0)
will start looking for external packages by default.
This caused build issues of Open MPI 4.0.5 with Fortran on macOS
10.15.
* Open MPI 4.0: libevent external
Internally shipped libevent just works fine for prior releases.
#20076 moved Cray-specific MPICH support from the Spack MPICH package
to a new cray-mpich Package. This broke existing package installs
using external mpich on Cray systems. This PR keeps the cray-mpich
package but restores the Cray-specific MPICH support for older
installations.
In the future this support should be removed from the Spack mpich
package and users should be directed to use cray-mpich on Cray.
Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
inconsistent results from clingo, due to ambiguous semantics like:
version_constraint_satisfies("mpi", ":1", ":3")
version_constraint_satisfies("mpi", ":3", ":1")
To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.
This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.
The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.
To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program. i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constraints are only handled on versions -- we don't
handle variants or anything else yet, but the key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.
One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:
:- virtual_node(Virtual),
version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
not version_constraint_satisfies(Virtual, V1, V2).
This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.
We can investigate adding synthetic versions for virtuals in the future,
to speed this up.
This code in `SpecBuilder.build_specs()` introduced in #20203, can loop
seemingly interminably for very large specs:
```python
set([spec.root for spec in self._specs.values()])
```
It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.
The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:
```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```
We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
`spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed
for oneapi.py
This adds a new subcommand to `spack license` that automatically updates
the copyright year in files that should have a license header.
- [x] add `spack license update-copyright-year` command
- [x] add test
This adds two lines to `.gitattributes`:
- [x] exclude vendored code from GitHub's language calculation
- [x] recognize `.lp` files as Prolog (closest language to ASP that
linguist supports)
It looks like there have been two attempts
(https://github.com/github/linguist/issues/3867,
https://github.com/github/linguist/issues/4860) to add ASP as a language
to Linguist, but it's not widespread enough to be standard yet (or at
least the people who submitted the PRs haven't been able to show enough
stats to prove it). We'll settle for calling ASP "Prolog" for now as
that'll get us some syntax highlighting for `concretize.lp`.
* hdf-eos5: new package (HDF for Earth Observing System using hdf v5)
* hdf-eos5: flake8 fixes
* hdf-eos5: trying to fix flake8 errors
* hdf-eos5: flake8 fix
* hdf-eos5: Fix to support Fortran codes
The -Df2cFortran compilation flag needed to support Fortran
* hdf-eos2: new package (HDF for Earth Observing System using hdf5)
* hdf-eos2: flake8 fixes
* hdf-eos2: fix to support Fortran
Need the compilation flag -Df2cFortran to allow support for Fortran
codes
libuuid is currently contained in util-linux, libuuid and uuid. This
change introduces a new virtual provider `uuid` and renames the existing
`uuid` package to `ossp-uuid`.
util-linux's libuuid is provided in the form of a separate package
util-linux-uuid to make sure that packages depending on uuid and
util-linux can use a separate uuid implementation, which the concretizer
does not allow if libuuid is contained in util-linux.
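A minimal sketch of the resulting provider declarations (class bodies elided; names follow the text, exact recipes may differ):
```python
class UtilLinuxUuid(AutotoolsPackage):
    """libuuid split out of util-linux."""
    provides("uuid")


class OsspUuid(AutotoolsPackage):
    """OSSP uuid, previously available as the 'uuid' package."""
    provides("uuid")
```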
- added several patches
- added some missing dependencies
- remove unneeded dependencies
- add CUDA support
- disable queue support, which was limited, and broken anyway
- move package text that was specific to the package to a comment, so it
does not show up in the environment module
- set conflicts for cuda and compilers
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* OpenMPI: Add version 4.1.0
* OpenMPI: Prefer version 4.0.5.
* OpenMPI: Update links
The download links changed, there is currently a redirection but it might not work forever. The website also switched to https.
Previously compiler-rt didn't correctly pass through cmake
variables for python when building the various sanitizers.
This patch passes these variables through.
This patch may also apply correctly to any version of LLVM
that uses the newer monorepo-style organization, and to any older
llvm newer than 7.0.0, as long as the paths were set
appropriately. However, this was not done because it was not
tested with older LLVM releases.
Fixes#19908
See also: https://bugs.llvm.org/show_bug.cgi?id=48180
This updates the UnifyFS packages to account for the latest v0.9.1
release.
Updates required and optional dependencies for the respective
releases.
Locks margo and mercury dependencies at specific versions while
integration with their latest versions is still in progress.
* PGI compiler has trouble with avx2 SIMD support
(https://github.com/FFTW/fftw3/issues/78)
* Hew to the project's preferred indentation standard.
* Expand '%nvhpc' logic to include '%pgi'.
* Exceeded the max line-length.
* Break up the long compound statement into nested if's.
* Inadvertently picked up an extraneous file.
* PGI compiler has trouble with avx2/avx-512 SIMD support, too.
* Add PGI runtime libs to LDFLAGS when '%pgi' in spec.
* Revert "Add PGI runtime libs to LDFLAGS when '%pgi' in spec."
This reverts commit 31c3ef8ea2.
* Add PGI runtime libs to LDFLAGS when '%pgi' in spec.
GCC looks for included files based on several env vars.
Remove C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, and OBJC_INCLUDE_PATH
from the build environment to ensure it's clean and prevent
accidental clobbering.
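A hedged sketch of that cleanup as a Spack hook (the placement and hook name here are assumptions; the real change may live elsewhere in Spack's build environment code):
```python
def setup_build_environment(self, env):
    # GCC consults these variables for extra include directories; unset
    # them so stray system headers cannot leak into the build.
    for var in ("C_INCLUDE_PATH", "CPLUS_INCLUDE_PATH", "OBJC_INCLUDE_PATH"):
        env.unset(var)
```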
* Adding support for the CMake flags in LBANN that are missing.
* Added new flag to OpenCV dependency and removed negative variants
since OpenCV no longer turns on everything by default. Removed CMake
flags in LBANN that have been deprecated.
* Removed type='build' flags from dependencies so that they get linked
into an environment's view.
* Removed type='build' flags from dependencies so that they get linked
into an environment's view. Fixed DiHydrogen variant to enable
DistConv feature, renamed to +distconv from +legacy. Added conflicts
line to indicate that DistConv and ROCm don't work with +half
support.
* Fixed Flake8 and cleaned up ordering of variants.
* Flake8
* Backed out changes to not mark and cmake and ninja as build
dependencies, which was introduced to make sure that they appear in
a spack environment.
* Backed out changes to not mark doc related packages as build
dependencies, which was introduced to make sure that they appear
in a spack environment.
* Fixed how recipe communicates the intent to build and run tests to the
package CMake.
This is to make sure that the build system doesn't pick up a library that
would happen to be available.
Co-authored-by: Baptiste Jonglez <git@bitsofnetworks.org>
Environment yaml files should not have default values written to them.
To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).
Includes regression test.
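A small sketch of the defaulting-getter pattern being relied on (`env_yaml` stands for the parsed YAML mapping; the keys shown are illustrative):
```python
# Read the parsed environment YAML with explicit fallbacks instead of
# expecting the validator to have injected defaults into the data.
view = env_yaml.get("view", True)
definitions = env_yaml.get("definitions", [])
```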
This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:
* An inheritable IntelOneApiPackage which knows how to invoke the
installation script based on which components are requested
* For components which include headers/libraries, an inheritable
IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
icc/ifortran but these are not currently detected in this PR
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.
In addition to these changes, this includes:
* rename `spack flake8` to `spack style`
Aliases flake8 to style, and just runs flake8 as before, but with
a warning. The style command runs both `flake8` and `mypy`,
in sequence. Added --no-<tool> options to turn off one or the
other, they are on by default. Fixed two issues caught by the tools.
* stub typing module for python2.x
We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x. Doing it this way means we can only reliably
use all type hints in python3.7+, and .mypy.ini has been updated to
reflect that.
* add non-default black check to spack style
This is a first step to requiring black. It doesn't enforce it by
default, but it will check it if requested. Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow. All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.
* use style check in github action
Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:
```
node(Package) :- attr("node", Package).
attr("node", Package) :- node(Package).
version(Package, Version) :- attr("version", Package, Version).
attr("version", Package, Version) :- version(Package, Version).
```
We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This patch logic resolves a linking issue with ncurses in the mesa
package. This appears to be a recurring problem that was identified in
the mesa gitlab issues here:
https://gitlab.freedesktop.org/mesa/mesa/-/issues/2843
Using `_llvm_method = 'auto'` is broken. This patch replaces that with
`_llvm_method = 'config-tool'`, which is a hack, but makes it possible
to build.
I have commented on the closed issue (2843), referencing the original
author of the bug, and one of the mesa developers, so perhaps they will
fix the problem.
This PR does three related things to try to improve developer tooling quality of life:
1. Adds new options to `.flake8` so it applies the rules of both `.flake8` and `.flake_package` based on paths in the repository.
2. Adds a re-factoring of the `spack flake8` logic into a flake8 plugin so using flake8 directly, or through editor or language server integration, only reports errors that `spack flake8` would.
3. Allows star import of `spack.pkgkit` in packages, since this is now the thing that needs to be imported for completion to work correctly in package files, it's nice to be able to do that.
I'm sorely tempted to sed over the whole repository and put `from spack.pkgkit import *` in every package, but at least being allowed to do it on a per-package basis helps.
As an example of what the result of this is:
```
~/Workspace/Projects/spack/spack develop* ⇣
❯ flake8 --format=pylint ./var/spack/repos/builtin/packages/kripke/package.py
./var/spack/repos/builtin/packages/kripke/package.py:6: [F403] 'from spack.pkgkit import *' used; unable to detect undefined names
./var/spack/repos/builtin/packages/kripke/package.py:25: [E501] line too long (88 > 79 characters)
~/Workspace/Projects/spack/spack refactor-flake8*
1 ❯ flake8 --format=spack ./var/spack/repos/builtin/packages/kripke/package.py
~/Workspace/Projects/spack/spack refactor-flake8*
❯ flake8 ./var/spack/repos/builtin/packages/kripke/package.py
```
* qa/flake8: update .flake8, spack formatter plugin
Adds:
* Modern flake8 settings for per-path/glob error ignores, allows
packages to use the same `.flake8` as the rest of spack
* A spack formatter plugin to flake8 that implements the behavior of
`spack flake8` for direct invocations. Makes integration with
developer tooling nicer, linting with flake8 reports only errors that
`spack flake8` would report. Using pyls and pyls-flake8, or any other
non-format-dependent flake8 integration, now works with spack's rules.
* qa/flake8: allow star import of spack.pkgkit
To get working completion of directives and spack components it's
necessary to import the contents of spack.pkgkit. At the moment doing
this makes flake8 displeased. For now, allow spack.pkgkit and spack
both, next step is to ban spack * and require spack.pkgkit *.
* first cut at refactoring spack flake8
This version still copies all of the files to be checked as before, and
some other things that probably aren't necessary, but it relies on the
spack formatter plugin to implement the ignore logic.
* keep flake8 from rejecting itself
* remove separate packages flake8 config
* fix failures from too many files
I ran into this in the PR converting pkgkit to std. The solution in
that branch does not work in all cases as it turns out, and all the
workarounds I tried to use generated configs to get a single invocation
of flake8 with a filename option to work failed. It's an astonishingly
frustrating config option.
Regardless, this removes all temporary file creation from the command
and relies on the plugin instead. To work around the huge number of
files in spack and still allow the command to control what gets checked,
it scans files in batches of 100. This is a completely arbitrary number
but was chosen to stay safely under common command-line length limits. One
side-effect of this is that every 100 files the command will produce
output, rather than only at the end, which doesn't seem like a terrible
thing.
* Dependencies of Go will now correctly set the GOPATH for the
appropriate spec to avoid using the user's default path.
* Bumped version to latest releases (1.15.6 & 1.14.13).
Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.
- [x] make `master` the default so we don't have to keep telling people
to install `clingo@master`. We'll update the preferred version when
there's a new release.
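In recipe terms, that preference might be expressed like this sketch (the actual clingo package code may differ):
```python
# Prefer the master branch over numbered releases until a release
# containing the new concretizer support is tagged.
version("master", branch="master", preferred=True)
```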
Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.
This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.
The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.
This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.
There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.
`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid " info: atom does not
occur in any rule head:" warnings.
Since zsh can load bash completion files natively, seems reasonable to just turn this on.
The only changes are to switch from `type -t` which zsh doesn't support to using `type`
with a regex and adding a new arm to the sourcing of the completions to allow it to work
for zsh as well as bash.
Could use more bash/dash/etc testing probably, but everything I've thought to try has
worked so far.
Notes:
* unit-test zsh support, fix issues
Specifically fixed word splitting in completion-test, use a different
method to apply sh emulation to zsh loaded bash completion, and fixed
an incompatibility in regex operator quoting requirements.
* compinit now ignores insecure directories
Completion isn't meant to be enabled in non-interactive environments, so
by default compinit will ask the user if they want to ignore insecure
directories or load them anyway. To pass the spack unit tests in GH
actions, this prompt must be disabled, so ignore explicitly until a
better solution can be found.
* debug functions test also requires bash emulation
COMP_WORDS is a bash-ism that zsh doesn't natively support, turn on
emulation for just that section of tests to allow the comparison to
work. Does not change the behavior of the functions themselves since
they are already pinned to sh emulation elsewhere.
* propagate change to .in file
* fix comment and update script based on .in
This PR addresses a number of issues related to compiler bootstrapping.
Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install as they think they have not all their
dependencies installed. This PR collects the dependents and sets them on
compiler tasks.
2. allow bootstrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec. Allow them to build with less
aggressive target optimization settings.
3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly. Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.
This PR also:
- adds testing
- improves concretizer handling of target ranges
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Cray's version of MPICH uses a different versioning system than
MPICH, so it has been split into its own package. It is an
external-only package (always provided by the system, never
installed by Spack).
* Kluge to get the gfortran linker to work correctly on Big Sur.
* Fixed formatting error; setting the other.
* Removed spaces.
* Added comment, mainly to re-trigger Spack CI.
Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`. The rules look like this:
```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1. :- version_satisfies("python", "2.6:").
```
So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.
We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.
We can replace rules like the ones above with facts like this:
```
version_satisfies("python", "2.6:", "2.4")
```
And ground them in `concretize.lp` with rules like this:
```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
:- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
:- version(Package, Version), version_satisfies(Package, Constraint, Version).
```
The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/3` from `version/2`.
This form is also safe for grounding -- If we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.
- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed
* ParaView: add new ParaView-5.9.0-RC2 release
Signed-off-by: Vicente Adolfo Bolea Sanchez <vicente.bolea@kitware.com>
* Update var/spack/repos/builtin/packages/paraview/package.py
Indeed, I misunderstood the previous review. This looks good to me too.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.
So far, the command line is faster than running through Python, but I'm
working on fixing that. I found that if I do this:
```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp") # code from spack solve --show asp hdf5
control.load("display.lp")
control.ground([("base", [])])
control.solve(...)
```
It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.
So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.
This fixes a logging error observed on macOS 11.0.1 (Big Sur).
When performing a Spack install in debugging mode (e.g.
`spack -d install py-scipy`) Spack is supposed to write a log of
compiler wrapper command line invocations to the current working
directory.
Due to a regression error introduced by #18205, these files were
no longer generated, and Spack was printing errors such as
"No such file or directory: None/." This is because the log file
directory gets set from `spack.main.spack_working_dir`, but that
variable is not set in the spawned process.
This PR ensures that the working directory (at the time of the
"spack install" invocation) is persisted to the subprocess.
Fixed hard tab in flux-sched edit and unbound hwloc in flux-core after
testing to better support modern MPIs in spack environments
Verified that flux-core@0.17 is when hwloc@2: became viable
Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.
This modification ensures that open-ended variants (variants accepting any string
or any integer) are projected to the finite set of values that are relevant for this
concretization.
2020.10.0 is the latest stable release, and the preferred version
for general use (when the user does not specify otherwise).
2020.11.0 is a prototype for the memory kinds feature that is also
available when requested.
Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
list at the very end.
The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.
- [x] move ad-hoc value handling code into spec_clauses so we do it in
one place for CLI and packages
- [x] move handling of `variant_possible_value`, etc. into
`concretize.lp`, where we can automatically infer variant existence
more concisely.
- [x] simplify/clarify some of the code for variants in `spec_clauses()`
* [cmd versions] add spack versions --new flag to only fetch new versions
format
[cmd versions] rename --latest to --newest and add --remote-only
[cmd versions] add tests for --remote-only and --new
format
[cmd versions] update shell tab completion
[cmd versions] remove test for --remote-only --new which gives empty output
[cmd versions] final rename
format
* add brillig mock package
* add test for spack versions --new
* [brillig] format
* [versions] increase test coverage
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Update geant4-data and individual datasets for Geant4 versions 10.6.3
and 10.7.0.
Update geant4 package with new versions 10.6.3 and 10.7.0. Update
dependencies on CLHEP and VecGeom with versions required for Geant4
10.7.
Add GEANT4_INSTALL_PACKAGE_CACHE=OFF to CMake args for 10.6 onwards.
Prevents install of the "package cache" file that contains hard-coded
paths for dependencies, improving relocatability. It relies on Spack
setting CMAKE_PREFIX_PATH correctly in build/use environments that
consume the geant4 package.
`cmake @3.17:` is necessary to handle `cuda @11:` correctly. Earlier versions of `cmake` do not know that `cuda @11:` does not support `compute_30` any more, and list that compute capability as supported. This is handled in `cmake`'s file `Modules/FindCUDA/select_compute_arch.cmake`.
The bowtie2 Makefile uses `prefix`, not `PREFIX`, for versions before v2.4.
Credit to @tkameyama
Co-authored-by: george.hartzell <george.hartzell@sana.com>
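A hedged sketch of how a recipe can pick the right spelling by version (the property shown is illustrative, not the actual bowtie2 package code):
```python
@property
def install_targets(self):
    # The Makefile spells the install prefix 'prefix' before v2.4.
    var = "PREFIX" if self.spec.satisfies("@2.4:") else "prefix"
    return ["install", "{0}={1}".format(var, self.prefix)]
```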
* allow install of build-deps from cache via --include-build-deps switch
* make clear that --include-build-deps is useful for CI pipeline troubleshooting
fixes #20055
Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of spec doesn't match the "real" version of the
compiler.
This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.
* bump up version for rocm-3.10.0 release
* bump up version for rocm-3.10.0
* remove duplicate version addition for 3.9.0
* bump up version for rocm-3.10.0 release
* bump up version for rocm-3.10.0 release
* bump up version for rocm-debug-agent and rocm-dbgapi
* bump up version for rocm-bandwidth-test,rocm-gdb,rocprofiler,roctracer for rocm-3.10.0
* add smoke test
* remove whitespaces
* fix minimum version issue
* reorder decorators & replace make with cmake build
* merge cmake build into one line
* reorganize smoke test function
Co-authored-by: Jieyang Chen <chenj3@ornl.gov>
* added dockerfile for opensuse leap 15
* updated maintainer info
* Update share/spack/docker/leap-15.dockerfile
* move copies and symlinks after package install
also use ${SPACK_ROOT} for spack calls as
this works with buildah
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* New package: py-qsymm
* py-qsymm: Convert to using tarballs from PyPi instead of git checkouts
* py-qsymm: add missing dependencies
* Update var/spack/repos/builtin/packages/py-qsymm/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-qsymm: Fix url to use pypi hidden download interface
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* AOCC-2.3.0 is now added to spack
Change-Id: I18fd9606e6fd9a288cc7dc6c6ead11ea17839a7c
* Added flag and version tests for AOCC-2.3.0
* Addressed review comments
Co-authored-by: vkallesh <Vijay-teekinavar.Kallesh@amd.com>
fixes #20040
Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.
refers #20040
Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they were with a default value.
This prevents the solver from avoiding expected configurations
just because they contain directives like:
depends_on('pkg+foo')
and `+foo` is not the default variant value for pkg.
* OpenBLAS: More Precise GCC Conflicts
Add more precise GCC conflicts so e.g. GCC 6 and GCC 7.5 don't fail.
* Compact syntax
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
As part of pull request #19452, a patch method was added to the mfem
package to delete byte order marks from 3 mfem source files. These
files first appeared in a stable release of mfem as of version
4.1. Consequently, attempts to install mfem 3.4 or mfem 4.0 fail
because no files exist at the path arguments of the filter_file
commands used to execute this operation. Decorating the patch method
so it runs only on mfem versions 4.1 and later resolves the errors
that were thrown due to files not found.
This commit adds that decorator.
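A sketch of the guarded method using Spack's `@when` multimethod decorator (the file name below is illustrative):
```python
@when("@4.1:")
def patch(self):
    # Strip the UTF-8 byte order mark; earlier releases do not ship the
    # affected files, so the method must not run for them.
    filter_file("\ufeff", "", "fem/affected_file.cpp", string=True)
```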
* Qt: add options to disable docs and gui
- Add `~gui` option for minimal build
- Add `+doc` option to install docs, and attempt to disable the implicit
llvm dependency if not
- Removes the 'freetype' option which hasn't worked reliably in qt5, as
many of the gui components implicitly rely on freetype.
- Add and test version 5.15 (and skip qtlocation if disabling opengl)
- Refactor some of the dependency logic
I've tested this on linux with 5.15.2 and 4.8.7 in a couple of different
configurations.
* Address reviewer feedback and correctly disable llvm
* Fix qt doc generation
* py-rosdep: add new package
* setuptools needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* py-rospkg: add new package
* setuptools needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* py-catkin-pkg: add new package
* setuptools is needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
fixes #19981
This commit adds support for target ranges in directives,
for instance:
conflicts('+foo', when='target=x86_64:,aarch64:')
If any target in a spec body is not a known target the
following clause will be emitted:
node_target_satisfies(Package, TargetConstraint)
when traversing the spec and a definition of
the clause will then be printed at the end similarly
to what is done for package and compiler versions.
* spack recipe for gromacs with aocc compiler support
Change-Id: I364aab4a0aa2dcd44bc47eb50c81b2d94c99cfbd
* Removed arch and other associated compilers flags
Added cycle_subcounters variant
Co-authored-by: vkallesh <Vijay-teekinavar.Kallesh@amd.com>
fixes #20019
Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:
conflicts('%gcc@X.Y:', when='@:A.B')
where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.
* llvm-amdgpu: fix the build for version 3.9.0
Adapt the fix-system-zlib-ncurses.patch for version 3.9.0. Without
the patch, llvm-amdgpu builds, but then rocm-device-libs fails with
"cannot find -ltinfo."
Tighten the version requirements for cmake according to the
llvm/CMakeLists.txt file.
* Add a conflict for cmake 3.19.0.
refers #20079
Added docstrings to 'concretize' and 'concretized' to
document the format for tests.
Added tests for the activation of test dependencies.
refers #20040
This modification emits rules like:
provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").
for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.
* intel-tbb: patch for arm64 on macOS
as submitted upstream and used in homebrew
* intel-tbb: check patchable versions
* intel-tbb: avoid patch breakage when 2021.1 is released
2021.1-beta05 would be considered newer than 2021.1
* Add the 'exciting' package.
Version 14 (latest available) is defined.
An as-of-yet unpublished patch (dfgather.patch) from the developers is also
included.
* fixed flake8 errors (I *thought* I had already gotten them! OOPS!)
* Update var/spack/repos/builtin/packages/exciting/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixed install method to just do the install, and no build method is needed.
* *Actually* added the lapack dependency!
* removed variant from blas dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix: leading . is not needed in extension kwarg
* mfem: add support for NVIDIA AmgX
fix: proper spacing
* mfem: use conflict to indicate that AmgX is expected to depend on CUDA
fixes #19966
Global Arrays supports GCC 10 since version 5.7.1,
therefore a conflict has been added to prevent old
releases from erroring at build time.
Removed the 'blas' and 'lapack' variants since
BLAS and LAPACK are always dependencies; if not
specified during configure, versions of these APIs
vendored with Global Arrays are built.
Fixed a few options in configuration.
The point of this variant is to give the end user an option to use system
installed fabrics such as mofed instead of upstream fabrics such as rdma-core.
This was found to avoid run time errors on some systems.
Co-authored-by: nithintsk <nithintsk@github.com>
This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:
```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```
This caused clang's version to be misdetected as:
```
clang@11.0.0
Target:
```
This resulted in errors when trying to actually use it as a compiler.
When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:
```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```
Changing the regex to only match until the end of the line fixes these
problems.
Fixes: #19473
* Updated the cuDNN recipe to generate the proper version names for only
the architecture that you are on. This prevents the concretizer from
selecting a source code version that is incompatible with your current
architecture. Additionally, add constraints to ensure that the
corresponding CUDA version is properly set as well.
* Added maintainer
* Fixed renaming for darwin systems
* Fixed flake8
* Fixed flake8
* Fixed range typo
* Update var/spack/repos/builtin/packages/cudnn/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixed style issues
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* seems to have been introduced erroneously by users using gitk-based
workflows. This should be handled by the git package
* fixes build problems on OSX bigsur
* charmpp: various fixes
- change URLs to https
- address deprecated/renamed versions
- make it build with the cmake build system
* flake8
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This adds a new `mark` command that can be used to mark packages as either
explicitly or implicitly installed. Apart from fixing the package
database after installing a dependency manually, it can be used to
implement upgrade workflows as outlined in #13385.
The following commands demonstrate how the `mark` and `gc` commands can be
used to only keep the current version of a package installed:
```console
$ spack install pkgA
$ spack install pkgB
$ git pull # Imagine new versions for pkgA and/or pkgB are introduced
$ spack mark -i -a
$ spack install pkgA
$ spack install pkgB
$ spack gc
```
If there is no new version for a package, `install` will simply mark it as
explicitly installed and `gc` will not remove it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests are included for MPI
implementations and for C, C++, and Fortran compilers, as well as
specific smoke tests for 18 packages.
Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.
This handles the following trickier aspects of testing with direct
support in Spack's package API:
- [x] Caching source or intermediate build files at build time for
use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only
packages).
See the packaging guide for more details on using Spack testing support.
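For illustration only, a test method might look roughly like the sketch
below; the executable name, arguments, and expected output are made up,
but the `run_test` keyword arguments follow the description above:
```python
# Hypothetical smoke test using the run_test API described above.
def test(self):
    # Find the installed `example` binary, run it, and verify both the
    # return code and that the spec's version appears in the output.
    self.run_test(
        'example',
        options=['--version'],
        expected=[str(self.spec.version)],
        status=0,
        purpose='check that example reports the installed version',
    )
```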
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added -level_zero -rocm -opencl flags and sha256 for TAU v2.30.
* Removed the depends_on clause for OpenCL and added a variant for OneAPI level_zero.
* remove depends_on rocm
* remove depends_on rocprofiler
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
The deprecatedProperties custom validator now can accept a function
to compute a better error message.
Improve error/warning message for deprecated properties
As of #18260, `spack load` and `spack env activate` now use
`prefix_inspections` from the modules configuration to decide
how to modify environment variables.
This updates the modules configuration documentation to describe
how to update environment variables with the `prefix_inspections`
section. This also updates the `spack load` and environments
documentation to refer to the new `prefix_inspections` documentation.
`spack load` and `spack env activate` now use the prefix inspections
defined in `modules.yaml`. This allows users to customize/override
environment variable modifications if desired.
If no `prefix_inspections` configuration is present, Spack uses the
values in the default configuration.
This PR reworks a few attributes in the container subsection of
spack.yaml to permit the injection of custom base images when
generating containers with Spack. In more detail, users can still
specify the base operating system and Spack version they want to use:
spack:
  container:
    images:
      os: ubuntu:18.04
      spack: develop
in which case the generated recipe will use one of the Spack images
built on Docker Hub for the build stage and the base OS image in the
final stage. Alternatively, they can specify explicitly the two
base images:
spack:
  container:
    images:
      build: spack/ubuntu-bionic:latest
      final: ubuntu:18.04
and it will be up to them to ensure their consistency.
Additional changes:
* This commit adds documentation on the two approaches.
* Users can now specify OS packages to install (e.g. with apt or yum)
prior to the build (previously this was only available for the
finalized image).
* Options to skip updating the available system packages have been
added to the configuration to facilitate the generation of recipes
permitting deterministic builds.
This commit addresses the case of concretizing a root spec with a
transitive conditional dependency on a virtual package, provided
by an external. Before these modifications default variant values
for the dependency bringing in the virtual package were not
respected, and the external package providing the virtual was added
to the DAG.
The issue stems from two facts:
- Selecting a provider has higher precedence than selecting default variants
- To ensure that an external is preferred, we used a negative weight
To solve it we shift all the providers weight so that:
- External providers have a weight of 0
- Non-external providers have a weight of 10 or more
Using a weight of zero for external providers means that having an
external provider, if present, or having no provider at all has the
same effect on the higher-priority minimization.
Also fixed a few minor bugs in concretize.lp, that were causing
spurious entries in the final answer set.
Removed leftover rules from concretize.lp.
If the default of a multi-valued variant is set to
multiple values either in package.py or in packages.yaml
we need to ensure that all the values are present in the
concretized spec.
Since each default value has a weight of 0 and the
variant value is set implicitly by the concretizer
we need to add a rule to maximize on the number of
default values that are used.
This commit introduces a new rule:
real_node(Package) :- not external(Package), node(Package).
that makes it possible to distinguish between an external node and
a real node whose dependencies shouldn't be trimmed. It solves the
case of concretizing ninja with an external Python.
`node_compiler_hard()` means that something explicitly asked for a node's
compiler to be set -- i.e., it's not inherited, it's required. We're
generating this in spec_clauses even for specs in rule bodies, which
results in conditions like this for optional dependencies:
In py-torch/package.py:
depends_on('llvm-openmp', when='%apple-clang +openmp')
In the generated ASP:
declared_dependency("py-torch","llvm-openmp","build")
:- node("py-torch"),
variant_value("py-torch","openmp","True"),
node_compiler("py-torch","apple-clang"),
node_compiler_hard("py-torch","apple-clang"),
node_compiler_version_satisfies("py-torch","apple-clang",":").
The `node_compiler_hard` there means we would have to *explicitly* set
py-torch's compiler to trigger the llvm-openmp dependency, rather than
just letting it be set by preferences. This is wrong; the dependency
should be there regardless of how the compiler was set.
- [x] remove fn.node_compiler_hard() call from spec_clauses when
generating rule body clauses.
If the version list passed to one_of_iff is empty, it still generates a
rule like this:
node_compiler_version_satisfies("fujitsu-mpi", "arm", ":") :- 1 { } 1.
1 { } 1 :- node_compiler_version_satisfies("fujitsu-mpi", "arm", ":").
The cardinality rules on the right and left above are never
satisfiable, so these rules do nothing.
- [x] Skip generating any rules at all for empty version lists.
As reported, conflicts with compiler ranges were not treated
correctly. This commit adds tests to verify the expected behavior
for the new concretizer.
The new rules to enforce a correct behavior involve:
- Adding a rule to prefer the compiler selected for
the root package, if no other preference is set
- Give a strong negative weight to compiler preferences
expressed in packages.yaml
- Maximize on compiler AND compiler version match
Variants of this kind don't have a list of possible
values encoded in the ASP facts. Since all we have
is a validator, the list of possible values includes
just the default value and possibly the value passed
from packages.yaml or the CLI.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
Later we could move this to the solver as
a multivalued variant.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
A better approach is to substitute the spec
directly in concretization.
The "none" variant value cannot be combined with
other values.
The '*' wildcard matches anything, including "none".
It's thus relevant in queries, but disregarded in
concretization.
- The test on concretization of anonymous dependencies
has been fixed by raising the expected exception.
- The test on compiler bootstrap has been fixed by
updating the version of GCC used in the test.
Since gcc@2.0 does not support targets later than
x86_64, the new concretizer was looking for a
non-existing spec, i.e. it was correctly trying
to retrieve 'gcc target=x86_64' instead of
'gcc target=core2'.
- The test on gitlab CI needed an update of the target
This commit adds support for specifying rules in
packages.yaml that refer to virtual packages.
The approach is to normalize in memory each
configuration and turn it into an equivalent
configuration without rules on virtual. This
is possible if the set of packages to be handled
is considered fixed.
The weight of the target used in concretization is, in order:
1. A specific per package weight, if set in packages.yaml
2. Inherited from the parent, if possible
3. The default target weight (always set)
Generate facts on externals by inspecting
packages.yaml. Added rules in concretize.lp.
Added extra logic so that external specs
disregard any conflict encoded in the
package.
In ASP this would be a simple addition to
an integrity constraint:
:- c1, c2, c3, not external(pkg)
Using the Backend API from Python
requires some scaffolding to obtain a default
negated statement.
Conflict rules from packages are added as integrity
constraints in the ASP formulation. Most of the code
to generate them has been reused from PyclingoDriver.rules
The new concretizer and the old concretizer solve constraints
in a different way. Here we ensure that a SpackError is raised,
instead of a specific error that made sense in the old concretizer
but probably not in the new.
Instead of python callbacks, use cardinality constraints for package
versions. This is slightly faster and has the advantage that it can be
written to an ASP program to be executed *outside* of Spack. We can use
this in the future to unify the pyclingo driver and the clingo text
driver.
This makes use of add_weight_rule() to implement cardinality constraints.
add_weight_rule() only has a lower bound parameter, but you can implement
a strict "exactly one of" constraint using it. In particular, wee want to
define:
1 {v1; v2; v3; ...} 1 :- version_satisfies(pkg, constraint).
version_satisfies(pkg, constraint) :- 1 {v1; v2; v3; ...} 1.
And we do that like this, for every version constraint:
atleast1(pkg, constr) :- 1 {version(pkg, v1); version(pkg, v2); ...}.
morethan1(pkg, constr) :- 2 {version(pkg, v1); version(pkg, v2); ...}.
version_satisfies(pkg, constr) :- atleast1(pkg, constr), not morethan1(pkg, constr).
:- version_satisfies(pkg, constr), morethan1(pkg, constr).
:- version_satisfies(pkg, constr), not atleast1(pkg, constr).
v1, v2, v3, etc. are computed on the Python side by comparing every
possible package version with the constraint.
Computing things like this has the added advantage that if v1, v2, v3,
etc. comprise *all* possible versions of a package, we can just omit the
rules for the constraint under consideration. This happens pretty
frequently in the Spack mainline.
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
ultimately hurt performance)
There are now three parts:
- `SpackSolverSetup`
- Spack-specific logic for generating constraints. Calls methods on
`AspTextGenerator` to set up the solver with a Spack problem. This
shouldn't change much from solver backend to solver backend.
- ClingoDriver
- The solver driver provides methods for SolverSetup to generate an ASP
program, send it to `clingo` (run as an external tool), and parse the
output into function tuples suitable for `SpecBuilder`.
- The interface is generic and should not have to change much for a
driver for, say, the Clingo Python interface.
- SpecBuilder
- Builds Spack specs from function tuples parsed by the solver driver.
The original implementation was difficult to read, as it only had
single-letter variable names. This converts all of them to descriptive
names, e.g., P -> Package, V -> Virtual/Version/Variant, etc.
To handle unknown compilers properly in tests (and elsewhere), we need to
add unknown compilers from the spec to the list of possible compilers.
Rework how the compiler list is generated and includes compilers from
specs if the existence check is disabled.
Specs like hdf5 ^mpi were unsatisfiable because we added a requirement
for `node("mpi")`. This can't be resolved because "mpi" is not a
package.
- [x] Introduce `virtual_node()`, which says *some* provider must be in
the DAG.
This adds compiler flags to the ASP solve so that we can have conditions
based on them in the solve. But, it keeps order out of the solve to
avoid unneeded complexity and combinatorial explosions.
The solver determines which flags are on a spec, but the order is
determined by DAG precedence (children's flags take precedence over
parents' and are added on the right) and by the order in which flags
were specified on the command line.
The solver is responsible for determining when to propagate flags, when
to inherit them from other nodes, when to take them from compiler
preferences, etc.
Weight microarchitectures and prefer more recent ones. Also disallow
nodes where the compiler does not support the selected target.
We should revisit this at some point as it seems like if I play around
with the compiler support for different architectures, the solver runs
very slowly. See notes in comments -- the bad case was gcc supporting
broadwell and skylake with clang maxing out at haswell.
We didn't have a cardinality constraint for multi-valued variants, so the
solver wasn't filling them in.
- [x] add a requirement for at least one value for multi-valued variants
Variants like `cpu_target` on `openblas` don't have defined values, but
they have a default. Ensure that the default is always a possible value
for the solver.
Spack was generating the same dependency constraints twice in the output ASP:
```
declared_dependency("abinit", "hdf5", "link")
:- node("abinit"),
variant_value("abinit", "mpi", "True"),
variant_value("abinit", "mpi", "True").
```
This was because `AspFunction` was modifying itself when called.
- [x] fix `AspFunction` so that every call returns a new object
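A minimal sketch of the fixed pattern (not Spack's actual class, just
the shape of the fix):
```python
# Calling the function object must build a *new* object; mutating self
# is what produced the duplicated constraint shown above.
class AspFunction:
    def __init__(self, name, args=()):
        self.name = name
        self.args = tuple(args)

    def __call__(self, *args):
        # Return a fresh object carrying the accumulated arguments.
        return AspFunction(self.name, self.args + args)

    def __str__(self):
        return '%s(%s)' % (self.name, ', '.join(str(a) for a in self.args))
```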
- [x] Add support for packages.yaml and command-line compiler preferences.
- [x] Rework compiler version propagation to use optimization rather than
hard logic constraints
Technically the ASP output order does not matter, but it's hard to diff
two different solve formulations unless we order it.
- [x] make sure ASP output is emitted in a deterministic order (by
sorting all hash keys)
This needs more thought, as I am pretty sure the weights are not correct.
Or, at least, I'm not convinced that they do what we want in all cases.
See note in concretize.lp.
Solver now prefers newer versions like the old concretizer. Preferences
are taken from packages.yaml, preferred=True, the package definition,
and finally each version itself, in that order.
Competition output only prints out one model, so we do not have to
unnecessarily parse all the non-optimal models. We'll just look at the
best model and bring that in.
In practice, this saves a lot of JSON parsing and spec construction time.
Clingo actually has an option to output JSON -- use that instead of
parsing the raw output ourselves.
This also allows us to pick the best answer -- modify the parser to
*only* construct a spec for that one rather than building all of them
like we did before.
- Instead of using default logic, handle variant defaults by minimizing
the number of non-default variants in the solution.
- This actually seems to be pretty fast, and it fixes the long-standing
issue that writing this:
spack install hdf5 ^mpich
will fail if you don't specify hdf5+mpi. With optimization and
allowing enums to be enumerated, the solver seems to be able to quickly
discover that +mpi is the only way hdf5 can depend on mpich, and it
forces the switch to be thrown.
Use '1 { version(x); version(y); version(z) } 1.' instead of declaring
conflicts for non-matching versions. This keeps the sense of version
clauses positive, which will allow them to be used more easily in
conditionals later.
Also refactor `spec_clauses()` method to return clauses that can be used
in conditions, etc. instead of just printing out facts.
- This handles setting the compiler and falling back to a default
compiler, as well as providing default values for compilers/compiler
versions.
- Versions still aren't quite right -- you can't properly override
versions on compiler specs.
- Model architecture default settings and propagation off of variants
- Leverage ASP default logic to set architecture to default if it's not
set otherwise.
- Move logic out of Python and into concretize.lp as first-order rules.
We are relying on default logic in the variant handling in that we set a
default value if we never see `variant_set(P, V, X)`.
- Move the logic for this into `concretize.lp` instead of generating it
for every package.
- For programs that don't have explicit variant settings, clingo warns
that variant_set(P, V, X) doesn't appear in any rule head, because a
setting is never generated.
- Specifically suppress this warning.
- moving the dump logic into spack.solver.asp.solve() allows us to print
out useful debug info sooner
- prior approach required a successful solve to print out anything.
According to the documentation for spack and pkg-config,
$view/share/pkgconfig should also be a valid place to look
for package config files. This commit ensures that when
`spack env activate $dir` is called, the environment has this
directory in PKG_CONFIG_PATH.
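A hedged sketch of the resulting inspection; the helper name here is
hypothetical, while `EnvironmentModifications.prepend_path` is Spack's
existing utility for environment edits:
```python
import os

from spack.util.environment import EnvironmentModifications


def view_pkgconfig_env(view_root):
    # share/pkgconfig is a valid pkg-config location alongside
    # lib/pkgconfig and lib64/pkgconfig, so prepend all three.
    env = EnvironmentModifications()
    for subdir in ('lib/pkgconfig', 'lib64/pkgconfig', 'share/pkgconfig'):
        env.prepend_path('PKG_CONFIG_PATH', os.path.join(view_root, subdir))
    return env
```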
As of #13100, Spack installs the dependencies of a _single_ spec in parallel.
Environments, when installed, can only get parallelism from each individual
spec, as they're installed in order. This PR makes entire environments build
in parallel by extending Spack's package installer to accept multiple root
specs. The install command and Environment class have been updated to use
the new parallel install method.
The specs and kwargs for each *uninstalled* package (when not force-replacing
installations) of an environment are collected, passed to the `PackageInstaller`,
and processed using a single build queue.
This introduces a `BuildRequest` class to track install arguments, and it
significantly cleans up the code used to track package ids during installation.
Package ids in the build queue are now just DAG hashes as you would expect.
Other tasks:
- [x] Finish updating the unit tests based on `PackageInstaller`'s use of
`BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.:
`[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment install's view setup/update and not confirming all
packages are installed (?)
- [x] Ensure externally installed package dependencies are properly accounted for in
remaining build tasks
- [x] Add tests for coverage (if insufficient and can identify the appropriate, uncovered non-comment lines)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
This change makes improvements to the `spack ci rebuild` command
which supports running gitlab pipelines on PRs from forks. Much
of this has to do with making sure we can run without the secrets
previously required for running gitlab pipelines (e.g. signing key,
aws credentials, etc). Specific improvements in this PR:
Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.
Also, if spack does not have at least one public key, add the install
option "--no-check-signature"
If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on. So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.
When we attempt to generate a package or gpg key index on an S3
mirror, and there is nothing to index, just print a warning and
exit gracefully rather than throw an exception.
Support the use of PR-specific mirrors for temporary binary pkg
storage. This will allow quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they must only wait for packages to rebuild from source when
they push a new commit that causes it to be necessary.
Replace two-pass install with a single pass and the new option:
--require-full-hash-match. Doing this also removes the need to
save a copy of the spack.yaml to be copied over the one spack
rewrites in between the two spack install passes.
Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.
* Update pipeline trigger jobs for PRs from forks
Moving to PRs from forks relies on external synchronization script
pushing special branch names. Also secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.
When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.
* Arg to MirrorCollection is used exclusively, so add main remote mirror to it
* Compute full hash less frequently
* Add tests covering index generation error handling code
* Add WRF 3.9.1.1 and improve recipe robustness
* Include version 3.9.1.1 as common benchmarking workload
* Fix compilation against recent glibc (detect spack installed libtirpc)
* Detect and handle failed compilation (upstream uses make -i)
* WRF: PR changes round 1
fix build jobs
fix maintainers
fix pkgconfig dependency
use Executable to run compile stage
repair some overzealous autoformatting by black
* WRF: make recipe py26 compatible
* wrf: recipe review changes round 2
* more python 26 fixes
The unattended install using the pre-compiled binaries (tl-install)
needs a .profile file or it goes into interactive mode, blocking the
install process forever
* Added guard for setting CUB_DIR to only when cuda variant is true
* Added support for OpenMP on OSX platforms
* Updated the way that LBANN, Hydrogen, and DiHydrogen handle
apple-clang with OpenMP and Clang installed on OS X via brew.
* Fixed bug in spec resolution
* Fixed merge conflict
* Fixed typo
* Fixed flake8
* AMD - Bumped up version for hip-rocclr, rocm-opencl, rocm-smi-lib
* AMD ROCm - HIP update and bump up version to 3.9.0 for rccl,debug agent, hip-rocclr and atmi
* Update package.py
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/hip/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since #11598 sbang has been installed within the install_tree. This doesn’t play
nicely with install_tree padding, since sbang can’t do its job if it is installed in a
long path (this is the whole point of sbang).
This PR changes the padding specification. Instead of $padding inside paths,
we now have a separate `padding:` field in the `install_tree` configuration.
Previously, the `install_tree` looked like this:
```
/path/to/opt/spack_padding_padding_padding_padding_padding/
  bin/
    sbang
  .spack-db/
  ...
  linux-rhel7-x86_64/
  ...
```
This PR updates things to look like this:
```
/path/to/opt/
  bin/
    sbang
  spack_padding_padding_padding_padding_padding/
    .spack-db/
    ...
    linux-rhel7-x86_64/
    ...
```
So padding is added at the start of all install prefixes *within* the unpadded
root. The database and all installations still go under the padded root.
This ensures that `sbang` is in the shortest possible path while also allowing
us to make long paths for relocatable binaries.
As of #18205, all packages must be pickle-able to be installed by
Spack.
This adds a test to check that each package can be pickled. If any
package fails to pickle, the test keeps going and collects the names
of all failed packages; it then takes the first one that failed and
attempts to re-pickle it, generating the full stack trace for the
failed pickle attempt.
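A condensed sketch of the test's shape (the fixture name is
illustrative, not the actual test code):
```python
import pickle


def test_all_packages_pickleable(all_packages):  # hypothetical fixture
    failures = []
    for pkg in all_packages:
        try:
            pickle.dumps(pkg)
        except Exception:
            failures.append(pkg)  # keep going; collect every failure
    if failures:
        # Re-pickle the first failure so the full traceback is reported.
        pickle.dumps(failures[0])
```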
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).
Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.
This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:
- ensure that the package repository and other global state are
transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
(package, stage, etc.)
- move a number of locally-defined functions into global scope so that
they can be pickled
- rework tests where needed to avoid using local functions
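As a minimal illustration of the difference (standard library only, not
Spack code): under "spawn" the child starts from a fresh interpreter, so
everything it needs must be picklable and passed explicitly:
```python
import multiprocessing


def build_task(state):
    # The child sees only what was pickled and passed in, not a copy of
    # the parent's memory as it would under "fork".
    print('child got:', state)


if __name__ == '__main__':
    ctx = multiprocessing.get_context('spawn')  # pick the start method
    proc = ctx.Process(target=build_task, args=({'pkg': 'zlib'},))
    proc.start()
    proc.join()
```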
This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs)
See: #14102
In compiler bootstrapping pipelines, we add an artificial dependency
between jobs for packages to be built with a bootstrapped compiler
and the job building the compiler. To find the right bootstrapped
compiler for each spec, we compared not only the compiler spec to
that required by the package spec, but also the architectures of
the compiler and package spec.
But this prevented us from finding the bootstrapped compiler for a
spec in cases where the architecture of the compiler wasn't exactly
the same as the spec. For example, a gcc@4.8.5 might have
bootstrapped a compiler with haswell as the architecture, while the
spec had broadwell. By comparing the families instead of the architecture
itself, we know that we can build the zlib for broadwell with the gcc for
haswell.
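A hedged sketch of the relaxed comparison; the helper is illustrative,
with attribute names following Spack's architecture fields:
```python
def compiler_usable_for(compiler_arch, spec_arch):
    # Compare target *families* (e.g. x86_64) rather than exact
    # microarchitectures, so a gcc built for haswell can serve broadwell.
    return (compiler_arch.platform == spec_arch.platform and
            compiler_arch.os == spec_arch.os and
            compiler_arch.target.family == spec_arch.target.family)
```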
* py-json-get: new package at 1.1.1
* py-json-get: new package at 1.1.1
* r-bigalgebra: new package at 0.8.4
* r-bigalgebra: new package at 0.8.4 with corrections
* Added an additional change to tarball and dependencies
* removing accidentally added file
* Added tarball that uses mirror and removed redundant dependencies
* Fixed version and added dep.
* Updated checksum
* Fixed urls
* Added list_url
Co-authored-by: las_djorton <las_djorton@build.las.iastate.edu>
* Add CUDA support to superlu-dist
* Use spec['cuda'].libs.directories[0] instead of spec['cuda'].prefix.lib
so it works for both lib and lib64
The suggested:
args.append('-DTPL_CUDA_LIBRARIES=' +
spec['cuda'].libs.ld_flags)
did not work because it does not link with cuBLAS.
Currently, full JSON output is the only machine readable option for `spack find`
in an environment.
`spack find --format` is also designed to be machine readable, but we print extra
headers in environments.
- [x] don't print headers in `spack find` output when in an environment
* No version of yaml-cpp in spack can build shared AND
static libraries at the same time. So drop the "static"
variant and let "shared" handle that alone.
Or in other words: No version handles the
BUILD_STATIC_LIBS flag.
* The flag for building shared libraries changed from
BUILD_SHARED_LIBS to YAML_BUILD_SHARED_LIBS at some
point. So just pass both flags.
* Use the newer define_from_variant.
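A sketch of what the recipe's cmake_args could look like after these
changes, assuming the standard CMakePackage helpers:
```python
def cmake_args(self):
    return [
        # The upstream flag was renamed at some point; pass both
        # spellings and CMake simply ignores the one it does not use.
        self.define_from_variant('BUILD_SHARED_LIBS', 'shared'),
        self.define_from_variant('YAML_BUILD_SHARED_LIBS', 'shared'),
    ]
```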
* [py-cuml] created template
* [py-cuml] setup phases and added build_directory
* [py-cuml] added dependencies
* [py-cuml] depends on libcumlprims
* [py-cuml] requiring multigpu version
* [py-cuml] figuring out the best way to get concretization to happen cleanly
* [py-cuml] removed singlegpu variant from libcuml
* [py-cuml] depends on py-cudf
* [py-cuml] depends on cupy
* [py-cuml] fixed typo
* [py-cuml] depends on py-scipy
* [py-cuml] depends on py-treelite
* [py-cuml] py-treelite is now a variant of treelite
* [py-cuml] depends on joblib
* [py-cuml] depends on py-scikit-learn
* [py-cuml] flake8
* [py-cuml] added homepage and description. removed fixmes
* [py-cuml] updated checksum
* Enabling build of v1.9.x development branch.
* v1.8.1 is the preferred (stable) version.
* Fixing code style
Co-authored-by: Filippo Spiga <fspiga@nvidia.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [podio] put python dir in python path
* Update var/spack/repos/builtin/packages/podio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When invoking "buildcache list" multiple times, the command was
reporting no specs in the cache the second time around. The
presence of an up-to-date index was causing the internal
representation to be left un-initialized.
* tskit package
* Update var/spack/repos/builtin/packages/tskit/package.py
I can't see any hard requirement for 3.6:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixes following PR review
* Update var/spack/repos/builtin/packages/tskit/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Added a command to set up Spack for our tutorial at
https://spack-tutorial.readthedocs.io.
The command does some common operations we need first-time users to do.
Specifically:
- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be
left over from prior parts of the tutorial
- adds a mirror and trusts its public key
Version 5.32.0 has been out for quite a while and Linux distributions
are shipping it. I have also done a rebuild of some common packages with
the new version. Let's make it the preferred version.
* amrex: new options names for version > 20.11
* amrex: change option name DIM -> AMReX_SPACEDIM
* Update var/spack/repos/builtin/packages/amrex/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added code to help DiHydrogen find cuDNN and CUB
* Cleaning up dependencies on CUB and adding guards for when newer
versions of CUDA include CUB and it should be excluded.
* Changed Hydrogen to disable half support by default.
* Have LBANN force Hydrogen and DiHydrogen to build without half when the variant is disabled.
* Added explicit variants to ensure that if LBANN is built without CUDA,
Aluminum, or Half support, it enforces those constraints for Hydrogen
and DiHydrogen. Cleaned up the use of Python extend versus append in
LBANN and DiHydrogen recipes.
* Fixed Flake8
* [evtgen] add env var
* Update var/spack/repos/builtin/packages/evtgen/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
See #19784
virtualgl's CMake system looks for a specific libjpeg-turbo include
file that is not present in libjpeg (currently the only other jpeg provider)
* cget package
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Updates in LBANN and Aluminum code now allow working with
HWLOC versions 1.11.x and 2.x and up.
* Updating the minimum CMake version to address a pending PR in LBANN
that will require C++17 support and needs CMake to properly separate
the compiler flags from nvcc.
* Clarified the support for different versions of HWLOC in LBANN
Previously, we hardcoded a list of Spack versions which could be used by the containerize command.
This PR removes that list. It's a maintenance burden when cutting a release, and prevents older versions of Spack from creating containers to be used by newer versions.
* filtlong package
* Update var/spack/repos/builtin/packages/filtlong/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* bump up version for 3.9.0 release
* update version of rocminfo for rocm-3.9.0
* bump up rocm-cmake version for rocm-3.9.0
* bump up rocm-smi and rocmdevice-libs for 3.9.0
* bump up comgr version for rocm-3.9.0
* bump rocm-clang-ocl for rocm-3.9.0
* bump hipify-clang for rocm-3.9.0
* Trilinos: Add STRUMPACK dependency
* break long lines, flake8 cleanup
* Use spec['strumpack'].libs.directories[0]
instead of spec['strumpack'].prefix.lib
because libraries may be in lib or lib64.
Likewise use headers.directories[0] instead of prefix.include.
Suggested by adamjstewart
* allows UCX since v1.7 to build with more recent versions of gdrcopy (v2.X)
* Update var/spack/repos/builtin/packages/ucx/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
There was an error introduced in #19209 where `full_hash()` and
`build_hash()` are called on older specs that we've read in from the DB;
older specs may not be able to compute these hashes (e.g. if they have
removed patches used in computing the full_hash).
When serializing a Spec, we want to generate the full/build hash when
possible, but we need a mechanism to skip it for Specs that have
themselves been read from YAML (and may not support this).
To get around this ambiguity and to fix the issue, we:
- Add an attribute to the spec called `_hashes_final`, that is `True`
if we can't lazily compute `build_hash` and `full_hash`.
- Set `_hashes_final` to `False` for new specs (i.e., lazily
computing hashes is ok)
- Set `_hashes_final` to `True` for concrete specs read in via
`from_node_dict`, as it may be too late to recompute hashes.
- Compute and write out all hashes in `node_dict_with_hashes` *if
possible*.
Effectively what this means is that we can round-trip specs that are
missing `_build_hash` and `_full_hash` without recomputing them, but for
all new specs, we'll compute them and store them. So Spack should work
fine with old DBs now.
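A schematic of the serialization decision; method and attribute names
follow the text above, but the real code differs in detail:
```python
def node_dict_with_hashes(spec):
    node = spec.to_node_dict()
    if not spec._hashes_final:
        # New spec: lazily computing hashes is safe, so compute and store.
        node['build_hash'] = spec.build_hash()
        node['full_hash'] = spec.full_hash()
    elif getattr(spec, '_build_hash', None):
        # Spec read back from YAML/DB: reuse stored hashes, never recompute.
        node['build_hash'] = spec._build_hash
    return node
```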
* hip: rocminfo is a runtime requirement
* hip: +setup_run_environment, +setup_dependent_run_environment
* hip: run environment: get lib dir using libs.directories[0], not prefix.lib
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This fixes sbang relocation when using old binary packages, and updates
code in `relocate.py`.
There are really two places where we would want to handle an `sbang`
relocation:
1. Installing an old package that uses `sbang` with shebang lines like
`#!/bin/bash $spack_prefix/sbang`
2. Installing a *new* package that uses `sbang` with shebang lines like
`#!/bin/sh $install_tree/sbang`
The second case is actually handled automatically by our text relocation;
we don't need any special relocation logic for new shebangs, as our
relocation logic already changes references to the build-time
`install_tree` to point to the `install_tree` at install-time.
Case 1 was not properly handled -- we would not take an old binary
package and point its shebangs at the new `sbang` location. This PR fixes
that and updates the code in `relocate.py` with some notes.
There is one more case we don't currently handle: if a binary package is
created from an installation in a short prefix that does *not* need
`sbang` and is installed to a long prefix that *does* need `sbang`, we
won't do anything. We should just patch the file as we would for a normal
install. In some upcoming PR we should probably change *all* `sbang`
relocation logic to be idempotent and to apply to any sort of shebang'd
file. Then we'd only have to worry about which files to `sbang`-ify at
install time and wouldn't need to care about these special cases.
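A hedged sketch of the missing case 1 handling (the helper and its
logic are illustrative, not the exact code added in this PR):
```python
def retarget_old_sbang(script_path, new_sbang):
    # Rewrite an old-style `#!/bin/bash <old-prefix>/bin/sbang` first line
    # so it points at the sbang shipped in the new install tree.
    with open(script_path, 'r') as f:
        lines = f.readlines()
    if lines and lines[0].startswith('#!') and \
            lines[0].rstrip().endswith('/sbang'):
        lines[0] = '#!/bin/sh {0}\n'.format(new_sbang)
        with open(script_path, 'w') as f:
            f.writelines(lines)
```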
* add python-docutils dependency
* adds symlink to script for better compatibility of the py-docutils installation
* Improve post_install phase of py-docutils
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix review of rdma-core package
* improve formatting of py-docutils package
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [NEW] Added amdfftw, amdlibflame and amdscalapack recipes
Updated base fftw, libflame and netlib-scalapack recipes
to accommodate the above listed AMD Optimizing CPU Libraries
which are a set of numerical routines optimized for AMD platforms.
Updated amdblis spack recipe
amdblis:
1. updated with amdblis 2.2 release
amdfftw:
1. "--enable-single" now work as synonym for "--enable-float"
amdlibflame:
1. Added enable_or_disable_threads() to set value for "--enable-multithreading" flag
Libflame:
1. Added enable_or_disable_threads() to set value for "--enable-multithreading" flag
2. Corrected invocation of "enable_or_disable('threads')"
Change-Id: I9da0a2c2c4e2075b7fa2776e7cfe6548a2e0b32f
* Added amd-toolchain-support as maintainers
Added team github account amd-toolchain-support
as maintainers for all the recipes owned by
AMD Optimizing CPU Libraries (AOCL) team
Change-Id: I9a7969bd48fc42cfbb88dd7bd93e0802c6138582
* Incorporated review comments
Updated packages.yaml with aocl components
Handled Flake8 test failures
Change-Id: I0a03f02d8c9f326b2434ec907958c3de3a8e18eb
* Re-added stream recipe that was accidentally removed
amdfftw:
1. Updated the aocc clang selection as per spack standards
fftw:
1. The apple-clang section is currently redundant;
it is already handled in the conflict checks.
Change-Id: Idef4a3f61717eb81f321e0cd16e7ba9619eac846
* Fix for style and docs/validate (pull_request) test
unnumbered format placeholders from {} to {0}
Change-Id: If67a3374177ec067573e5504462d257712fafc05
* changed compiler references to Spack's compiler wrapper: spack_cc, spack_cxx, spack_fc
Change-Id: I7ae29c978fff16e37773913f14c84df232499763
* Removed 'single' variant from amdfftw recipe
Instead of a conflict for apple-clang + openmp, handled this scenario
via below available feature:
depends_on('llvm-openmp', when='%apple-clang +openmp')
Change-Id: I701b23d83e822a500ca3aaf2b60cc9ace09e13dc
* Added relevant info for users who prefer to use single precision
Change-Id: I3506e21da428ddef5fb7895b5aaed32c2a061ef6
* Minor changes on fftw, amdfftw and libflame
amdfftw:
1. Removed escape symbols from the single quotes
2. Reworded the conflict line from Recommended
to Required
fftw:
1. Reordered to follow the recommended sections:
versions, variants, dependencies, providers,
patches
libflame:
1. Added provides entry for 5.1.0 version
Change-Id: I21ebff99b6dfde031763154693ecb3f1fa47b476
* Removed single quote from amdfftw docstring to fix style failures
Change-Id: Ife939a5a2f5ccbc8879b730c7bebfe2fcfef9332
* camp: changes to support hip build
* hip: add fallback path for external hip to detect other rocm components
Co-authored-by: Greg Becker <becker33@llnl.gov>
fixes #15183
- Moved the container related content from
workflows.rst into containers.rst
- Deleted the docker_for_developers.rst file,
since it describes an outdated procedure
Co-authored-by: Axel Huebl <a.huebl@hzdr.de>
Co-authored-by: Omar Padron <omar.padron@kitware.com>
`config.get_config` now caches the results and returns the same
configuration if called multiple times with the same arguments
(i.e. the same section and scope).
As a consequence, it is expected that users will always call
update methods provided in the `config` module after changing
the configuration (even if manipulating it as a Python nested
dictionary). The following two examples should cover most
scenarios:
* Most configuration update logic in the core (e.g. relating to
adding a new compiler) should call `Configuration.update_config`
* Tests that need to change the global configuration should use the
newly-provided `config.replace_config` function.
(if neither of these methods apply, then the essential requirement
is to use a method marked as `_config_mutator`)
Failure to call such a function after modifying the configuration
will lead to unexpected results (e.g. calling `get_config` after
changing the configuration will not reflect the changes since the
first call to get_config).
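A schematic of the caching-plus-mutator contract described above; names
follow the text, but the real implementation differs:
```python
_config_cache = {}


def get_config(section, scope=None):
    # Results are cached per (section, scope); repeated calls return the
    # same object until a mutator invalidates the cache.
    key = (section, scope)
    if key not in _config_cache:
        _config_cache[key] = _read_section(section, scope)  # hypothetical
    return _config_cache[key]


def _config_mutator(method):
    # Every update method must clear the cache, otherwise later
    # get_config calls would return stale data.
    def wrapper(*args, **kwargs):
        _config_cache.clear()
        return method(*args, **kwargs)
    return wrapper
```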
* Patched hypre to better add flags based on compiler.
* Update package.py
This file seems to have lots of edits, so the patch may succeed with offsets. Has anyone checked with spack patch to be sure it'll work with versions 2.15 - 2.20?
* "spack install" now has a "--require-full-hash-match" option, which
forces Spack to skip an available binary package when the full hash
doesn't match. Normally only a DAG-hash match is required, which
ensures equivalent Specs, but does not account for changing logic
inside the associated package.
* Add a local binary cache index which tracks specs that have a binary
install available in a remote binary cache. It is updated with
"spack buildcache list" or for a given spec when a binary package
is retrieved for that Spec.
In #18394 it was noted that this package should be changed
from a generic "Package" to a "CMakePackage".
It makes a bunch of things easier.
And it uses all the common cmake code.
* Added hash values for LBANN v0.101 and Hydrogen v1.5.0. Updated the
LBANN package to be more successful in resolving a legal configuration
of MPI and HWLOC packages. This required the removal of the MPI
virtual package since it is unable to resolve dependencies with
minimum version requirements. As a result to enable a reasonable
install line for LBANN this requires explicit forwarding of MPI
variants to Hydrogen and Aluminum. Due to the lack of variant
forwarding, there are many explicitly replicated dependencies for both
LBANN and Hydrogen. Fixed the error in LBANN where gpu variant was
replaced by the cuda variant, but not all dependencies were fixed.
* Fixed the minimum cuDNN version for newer versions of LBANN.
* Added explicit versioning of the MPI libraries for DiHydrogen to avoid
all of the conflicts with minimum required versions of the OpenMPI library.
* Removed explicit MPI versions and went back to using the MPI virtual
dependency. Updated construction of variant forwarding to use
iterative construction of constraints and variants. This exacerbates
the challenges with backtracking in the current concretizer, but
should be fixed in the new concretizer.
* Added support for including the DiHydrogen library in LBANN as well as
support for the distributed convolution (DistConv) parallel
algorithms. Also include support for building with half precision.
* Moving dependencies around
* Added conflict statement to ensure that the variant dihydrogen is
required for distconv.
* Removed the preferred field
* Fixed Flake8 and cuDNN version bounds
* gemini dep py-cyordereddict +
* dep ipyparallel +
* py-ipython-cluster +
* py-cyordereddict URL+dep fix
* Update var/spack/repos/builtin/packages/py-cyordereddict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-ipython-cluster-helper dep fix
* py-ipyparallel dep fix
* ipython-cluster-helper debug
* ipython-cluster-helper debug
* ipyparallel dep fix
* ipython-cluster-helper dep fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added initial version of package and patch for precice-bindings
* updated package name
* cleanup in script; added version requirement to cython
* Remove unnecessary part of patch
* cleanup package
* added initial version of package and patch for precice-bindings
* updated package name
* cleanup in script; added version requirement to cython
* Remove unnecessary part of patch
* cleanup package
* update style of package
* reformatting to fulfill style requirements
* reformatting again
* fixing some of the issues mentioned in the PR; working on fixing install stage
* readded py-wheel as dependency
Co-authored-by: Benjamin Rüth <benjamin.rueth@tum.de>
Spack has a fallback for hash checking with md5sums that may not be
supported in earlier versions of Python 3.x. The comments in the
Spack code acknowledge that this is best effort and may fail, but
recent vermin checks (running as part of our CI) reject this. This
disables vermin checks for that fallback.
* enable flatcc to be built with gcc/9.X.X
* add static option for building libyogrt
* cleanup
* Initial working version
* rework new oneapi wrappers
* tested and removed my initials from source
* cleanup
* Update __init__.py
* remove whitespace
* working now with mods for testing, detection. Detection for oneapi is working, but entry needs to be modified to add link path for libimf.so. Cleared cruft for old Intel versions
* fixed some formatting
* cleanup
* flake8 cleanup
* flake8
* fixed syntax of compiler version detection tests
* fixed syntax of compiler version detection tests
modified: detection.py
* fix typo
* fixes for compilers tests
* remove erroneous tests for outdated -std= flags, remove ifx version check (output won't parse)
Co-authored-by: Frank Willmore <willmore@anl.gov>
* Patch CMake version check in Umpire
* Update version constraint for cmake_version_check patch
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add maintainers to Umpire
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-torch-nvidia-apex] added version 201019 and added dependency on py-pybind11
* [py-torch-nvidia-apex] changed versioning format
* [py-torch-nvidia-apex] removed redundant version condition
* [py-torch-nvidia-apex] removed condition on dependency
* Adding AOCC support for M4
* combining 4 if-statements into a single if-statement with or conditions
* keeping parentheses around the or expressions
* fixing flake8 test failures
Co-authored-by: mohan babu <mohbabul@amd.com>
For some mysterious reason Qt4 stopped building the xmlpatterns
component, needed by some downstream packages. With this patch, the
component successfully builds with
```
qt@4.8.7~dbus~debug~examples~framework~gtk~opengl~phonon+shared~sql~ssl~tools~webkit freetype=none arch=linux-rhel7-haswell %gcc@10.2.0
```
`sbang` now lives at https://github.com/spack/sbang, and it has its own
test suite that's more extensive than what's in Spack. We'll leave sbang
tests to sbang from now on, and just vendor `bin/sbang` directly.
Remaining `sbang` tests have to do with patching files, not with
`sbang`'s functionality.
This update also fixes a bug with `sbang` and multiple command line
arguments that was introduced in #19529. See:
* https://github.com/spack/sbang/pull/1
* https://github.com/spack/sbang/pull/2
- [x] include latest `sbang` from https://github.com/spack/sbang
- [x] remove old `sbang` tests from Spack
- [x] update `COPYRIGHT` and `cmd/license.py`
* Update package.py
Remove breaking patch.
Patching the shebang is useless if the dependencies are properly loaded before execution. Furthermore, the long paths which can be generated when installing with Spack can exceed the maximum length of the shebang.
* Add newer versions of strelka.
* [libcudf] created template
* [libcudf] depends on cuda
* [libcudf] set cmake dir
* [libcudf] depends on boost
* [libcudf] depends on py-pyarrow
* [libcudf] depends on librmm
* [libcudf] depends on dlpack
* [libcudf] added more dependency information from https://github.com/rapidsai/libcudf/blob/v0.15.0/CONTRIBUTING.md#customizing-the-build
* [libcudf] removed python dependencies
* [libcudf] fixed url that got mangled in package renaming
* [libcudf] added default build options from build.sh
* [libcudf] added version 0.16.0a
* [libcudf] removed version 0.16.0a as it's an alpha version
* [libcudf] added homepage and description. removed fixmes
* [libcudf] flake8
* [libcudf] arrow requires +orc
* [libcudf] requires +parquet
* [libcudf] checksum changed
`sbang` was previously a bash script but did not need to be. This
converts it to a plain old POSIX shell script and adds some options. This
also allows us to simplify sbang shebangs to `#!/bin/sh /path/to/sbang`
instead of `#!/bin/bash /path/to/sbang`.
The new script passes shellcheck (with a few exceptions noted in the file)
- [x] `SBANG_DEBUG` env var enables printing what *would* be executed
- [x] `sbang` checks whether it has been passed an option and fails gracefully
- [x] `sbang` will now fail if it can't find a second shebang line, or if
the second line happens to be sbang (avoid infinite loops)
- [x] add more rigorous tests for `sbang` behavior using `SBANG_DEBUG`
On Cori (Cray XC40), I need to pass the entire path for the compilers; this is what is saved in c_compiler, cpp_compiler, and f_compiler. Therefore, when only the binary name is provided for the MPI wrappers, I run into the same issue. There is no drawback to passing the entire path, as it is set by the user through the compiler path anyway.
* added -lpthread flag in kv/tests/CMakeLists.txt
* Update var/spack/repos/builtin/packages/papyrus/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
PHP supports an initial shebang, but its comment syntax can't handle our 2-line
shebangs. So, we need to embed the 2nd-line shebang comment to look like a
PHP comment:
<?php #!/path/to/php ?>
This adds patching support to the sbang hook and support for
instrumenting php shebangs.
This also patches `phar`, which is a tool used to create php packages.
`phar` itself has to add sbangs to those packages (as phar archives
apparently contain UTF-8, as well as binary blobs), and `phar` sets a
checksum based on the contents of the package.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* New package: py-minrpc
* Delete package.py.save
* Update var/spack/repos/builtin/packages/py-minrpc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`sbang` is not always accessible to users of packages, e.g., if Spack
is installed in someone's home directory and they deploy software
for others. Avoid this by:
1. Always installing the `sbang` script in the `install_tree`
2. Relocating binaries to point to the copy in the `install_tree`
and not the one in the Spack installation.
This PR also:
- ensures that `sbang` is reinstalled if it is modified in Spack
- adds tests
- updates the way `gobject-introspection` patches Makefiles
to support `sbang`
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add BLT package
* Switch install function
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add type='run' to cmake dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add git attribute to BLT
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cp2k: locate correct include dir when using intel-parallel-studio+mkl for fftw-api
* libxc: drop arch-specific intel opt. flags
fixes #17794
* libint: drop arch-specific intel opt. flags, always build Fortran example with FC
fixes #17509
* package/pmdk add variants, version 1.9
* add dependency
* Update var/spack/repos/builtin/packages/pmdk/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The logic in `config.py` merges lists correctly so that list elements
from higher-precedence config files come first, but the way we merge
`dict` elements reverses the precedence.
Since `mirrors.yaml` relies on `OrderedDict` for precedence, this bug
causes mirrors in lower-precedence config scopes to be checked before
higher-precedence scopes.
We should probably convert `mirrors.yaml` to use a list at some point,
but in the meantime here's a fix for `OrderedDict`.
- [x] ensuring that keys are ordered correctly in `OrderedDict` by
re-inserting keys from the destination `dict` after adding the keys from
the source `dict`.
- [x] also simplify the logic in `merge_yaml` by always reinserting
common keys -- this preserves mark information without all the special
cases, and makes it simpler to preserve insertion order.
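A small self-contained sketch of the reinsertion trick (simplified;
the real `merge_yaml` also preserves mark information):
```python
from collections import OrderedDict


def merge_ordered(dest, source):
    # `source` has higher precedence, so its keys come first; keys that
    # exist only in `dest` are re-inserted afterwards.
    merged = OrderedDict(source)
    for key, value in dest.items():
        if key not in merged:
            merged[key] = value
    return merged
```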
Assuming a default spack configuration, if we run this:
```console
$ spack mirror add foo https://bar.com
```
Results before this change:
```console
$ spack config blame mirrors
--- mirrors:
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2 spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
/Users/gamblin2/.spack/mirrors.yaml:2 foo: https://bar.com
```
Results after:
```console
$ spack config blame mirrors
--- mirrors:
/Users/gamblin2/.spack/mirrors.yaml:2 foo: https://bar.com
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2 spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
```
Shell integration no longer requires setting `SPACK_ROOT`, so we can
simplify the documentation on it. The docs on shell support and using
packages are getting a bit old, and information on `spack load` (which
seems to be everyone's most common way of using packages) is hard to
find.
This PR simplifies the shell documentation to remove SPACK_ROOT, and also
moves some sections around for clearer organization.
- [x] make docs on sourcing setup scripts clearer and simpler
- [x] introduce `spack load` early in the basic usage guide instead of
burying it in the module docs
- [x] clean up module docs so that spack module tcl loads comes later
- [x] be clear about the different ways to use packages so that the users
can find the docs better.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
GROMACS still requires a version of FFTW when compiling it to utilize
NVIDIA GPUs. In fact, the type of calculation that depends on FFTW --
Particle-Mesh Ewald (PME) -- is generally run on the host system's CPUs,
even when GPUs are available.
* New package: py-rise
* Fix URL and add description
* Update var/spack/repos/builtin/packages/py-rise/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* gaussian-src: initial commit to build from source
* do not install the source, to ensure it is not accidentally distributed
to users
* set required runtime env vars based on the login.profile
* gaussian-view: update to 6.1.1
PR #19482 updated gcc to only apply the zstd patch until @10.2 but the
releases/gcc-10 branch actually does not contain the patch yet, that is,
gcc@10.3 will most likely have the same problem. Apply the patch for all
10.x releases instead.
* gemini: add dependency on py-bcolz
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-bcolz URL fix
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
fixes #19476
Module file content is written to file in a
temporary location and read back to be analyzed
by unit tests.
The approach to patch "open" and write to a
StringIO in memory has been abandoned, since
over time other operations insisting on the
filesystem have been added to the module file
generator.
* [py-pyarrow] telling setup.py that we want cuda support
* [py-pyarrow] added orc variant
* [py-pyarrow] passing the orc variant down the line
* [py-pyarrow] added variant description
* ocl-icd: fix build problems
* New package: opencl-c-headers
* New package: opencl-clhpp
* New bundled package: opencl-headers
- bundle C and C++ header files
* ocl-icd: Add +headers variant to use this as opencl provider
* ocl-icd: add new upstream release 2.2.13
* ocl-icd: add asciidoc-py3 and xmlto dependency needed for manpage generation
* ocl-icd and opencl-headers provides OpenCL 3.0
- also add more explicit version providing for older ocl-icd versions
* opencl-headers: add maximum of supported opencl versions for all versions
* opencl-headers: there aren't final releases with OpenCL 3.0
* [orc] created template
* [orc] depends on maven
* [orc] building with -fPIC
* [orc] fixed name of c flags option
* [orc] depends on openssl
* [orc] added dependencies and disabled installing vendored libs
* [orc] disabling hdfs
* [orc] depending on specific versions of dependencies
* [orc] no building of third party libs
* [orc] helping cmake find the dependencies
* [orc] disabling features that would require static protobuf libraries
* [orc] dependency versions are ranges
* [orc] added homepage and description. removed fixmes
* [orc] flake8
* [orc] switching to compiler-independent code
* r-sf: fix build error
* Update var/spack/repos/builtin/packages/r-sf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add herwig3
* Prepare fixes based on MR (needs checking)
* Set all dependencies (except python) as build-type
* OK now
* Move import to the top of the file
* Fix dependency name
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Synchronization on GitHub macOS runners seems to be very slow, and
frequently the foreground/background tests fail due to the race this
causes. This increases the tolerance for slowness a bit more, to allow up
to 4 spurious output lines in the tests.
This should hopefully result in no more false negatives on these tests
for macOS on GitHub.
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add qgraf
* Update package.py
Changes from review
* Changes from MR
* Fix for URLs containing @ symbol
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Zsh and newer versions of bash have a builtin `which` function that will
show you if a command is actually an alias or a function. For functions,
the entire function is printed, and our `spack()` function is quite long.
Instead of printing out all that, make the `spack()` function a wrapper
around `_spack_shell_wrapper()`, and include some no-ops in the
definition so that users can see where it was created and where Spack is
installed.
Here's what the new output looks like in zsh:
```console
$ which spack
spack () {
: this is a shell function from: /Users/gamblin2/src/spack/share/spack/setup-env.sh
: the real spack script is here: /Users/gamblin2/src/spack/bin/spack
_spack "$@"
return $?
}
```
Note that `:` is a no-op in Bourne shell; it just discards anything after
it on the line. We use it here to embed paths in the function definition
(as comments are stripped).
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Update madgraph to 2.8.1
* Changes from MR
* Changes from MR
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Updated blaspp package
* Modified lapackpp for newest release
* Formatting
* Updates to lapackpp package for new version
* Added dependency on cblas
* Removed cblas dependency
* updated to lapackpp
* Added new version for blaspp and lapackpp
* Removed debugging output
* Converted version matching logic to a for loop
* mpich: yaksa configure fix
modified: var/spack/repos/builtin/packages/mpich/package.py
* typo
* python is not needed when building from preconfigured tarballs
* add maintainers
* Added FFLAGS for apple-clang:11
* Added issue #
* Update var/spack/repos/builtin/packages/mpich/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* update version to avoid compile error
* Update var/spack/repos/builtin/packages/r-rgdal/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add FORM
* Update package.py
Changes from review
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Fixes for thepeg
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* mfem: specify PETSC_DIR, link correct sundials libraries
* fix: only use PETSC_DIR directly for static builds
* fix: only use sundials nvecmpiplusx for MFEM 4.2+
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* LHAPDF should extend Python to get env variables correct
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Adding AOCC compiler to SPACK community
The AOCC compiler system offers a high level of advanced optimizations, multi-threading and processor support, including global optimization, vectorization, inter-procedural analyses, loop transformations, and code generation. AMD also provides highly optimized libraries that extract optimal performance from each x86 processor core. The AOCC compiler suite simplifies and accelerates development and tuning for x86 applications.
* Added unit tests for detection and flags for AOCC
* Addressed reviewers' comments w.r.t. version checks and url/checksum-related line lengths
Co-authored-by: Test User <spack@example.com>
* add updated version of py-dnaio
* Add py-setuptools-scm build dependency
* Fine tune the py-xopen dependency constraint
The needed version of xopen does not become specific until v0.4 of
dnaio.
* Set constraint on py-setuptools-scm
The py-setuptools-scm dependency is needed beginning with v0.4.
* updated version of py-cutadapt
* Update dependency specs
* Add py-setuptools-scm build dependency
* More constraint fixes
* Fix version range for py-xopen
* Added tau version 2.29.1 hash
* Update var/spack/repos/builtin/packages/tau/package.py
Make version name match branch name (master)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add updated version of py-xopen
* Update dependency constraints
* Further refine the python constraints
Also, put them all together.
* Put python constraints at top of list
* New package - Gate
This PR adds the Gate package as well as the ITK dependency.
* Fix flake8 errors
* Be more explicit with CMake options
Make sure CMake values related to variants are explicitly set to either
ON/OFF.
The ITK_USE_MKL flag will turn on the following:
- USE_FFTWD=ON
- USE_FFTWF=ON
- USE_SYSTEM_FFTW=ON
Since the package depends on fftw-api, those options will always be set.
* A collection of tensorflow fixes and updates
* tensorflow 2.3.1 requires the workaround for external protobuf as well
* Update tensorflow-estimator to 2.3.0
* Update tensorboard to 2.3.0
* Update tensorboard-plugin-wit to use actual releases
* Patch that potentially fixes #16073
* add myself to maintainer list
* Changed make command to support new slate build variable 'blas='
* Updated to use package's "make install" target
* Added variant 'blas' to support switching blas provider and removed legacy 'mkl' variant.
* Fixed problem caused by systems which use a non-bash /bin/sh
* Removed blas= variant in preference for setting blas provider via spec syntax (e.g., ^openblas).
* Fixed formatting
* Changed to MakefilePackage and cleaned up make argument generation
* Implemented "edit" method
* Removed blank line
* Switched to using MPI compiler wrapper variables
* Update var/spack/repos/builtin/packages/slate/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* ADD: testing to dev-build command
* RM: mutually exclusive group for testing in parser
* FIX: test option to subparser and not testing
* ADD: spack-completion.bash
* RM: local devbuildcosmo cmd
* FIX: bad merge; the --drop-in/-b/--before options were forgotten
* FIX: --test place in spack-completion.bash
* FIX: typo
* FIX: blank line removing
* FIX: trailing white space
Co-authored-by: Elsa Germann <egermann@tsa-ln002.cm.cluster>
The package list at https://spack.readthedocs.io/en/latest/package_list.html claims "it is automatically generated based on the packages in the latest Spack release", but it is actually based on the develop branch. This leads to confusion when users find that e.g. herwigpp is included in the list, but it cannot be found when they install the latest release. That latest release has a package list at https://spack.readthedocs.io/en/stable/package_list.html which indeed does not include herwigpp.
Changing the language from "the latest Spack release" to "this Spack version" might make that clearer. Maybe.
* Update libensemble to v0.7.1
* Update var/spack/repos/builtin/packages/py-libensemble/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add nvhpc compiler definition: "spack compiler add" will now look
for instances of the NVIDIA HPC SDK compiler executables
(nvc, nvc++, nvfortran) in supplied paths
* Add the nvhpc package which installs the nvhpc compiler
* Add testing for nvhpc detection and C++-standard/pic flags
Co-authored-by: Scott McMillan <smcmillan@nvidia.com>
Output was, e.g., `Executables in /bin and /,u,s,r,/,b,i,n are both associated with the same spec xz@5.2.2`; it will now be `Executables in /bin and /usr/bin are both associated with the same spec xz@5.2.2`.
Previously config.guess and config.sub were patched only
in the root of the source path.
This modification extends the previous behavior to patch every
config.guess or config.sub file, even in subfolders, if need be.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* allow environments to specify dev-build packages
* spack develop and spack undevelop commands
* never pull dev-build packages from bincache
* reinstall dev_specs when code has changed; reinstall dependents too
* preserve dev info paths and versions in concretization as special variant
* move install overwrite transaction into installer
* move dev-build argument handling to package.do_install
now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install
* allow 'any' as wildcard for variants
* spec: allow anonymous dependencies
raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build
* fix variant class hierarchy
* Make release_90 preferred version.
* Be more explicit about CUDA dependencies.
* Remove duplicate CUDA dependency in Flang package and introduce nvptx variant.
* Fix nvptx variant message.
* Fixed wrong link to version 0.0.0 and add hash for version 0.1.4
* Fix failing build for neovim@master and neovim@stable and add hash for version 0.4.0
* Fix flake8 issues
* Removed unnecessary newline
* Dependency conditions restricted to neovim >= 0.2.0, as previous versions fail to compile
* Removed build dependency on git
* Removed master from all conditions
* autotools: add attribute to delete libtool archives .la files
According to Autotools Mythbuster (https://autotools.io/libtool/lafiles.html)
libtool archive files are mostly vestigial, but they might create issues
when relocating binary packages as shown in #18694.
For GCC specifically, most distributions remove these files with
explicit commands:
https://git.stg.centos.org/rpms/gcc/blob/master/f/gcc.spec#_1303
Considering all of that, this commit adds an easy way for each
AutotoolsPackage to remove every .la file that has been installed.
The default, for the time being, is to maintain them - to be consistent
with what Spack was doing previously.
* autotools: delete libtool archive files by default
Following review this commit changes the default for
libtool archive files deletion and adds test to verify
the behavior.
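What the deletion amounts to, as a standalone sketch (prefix path hypothetical):
```python
import glob
import os

def remove_libtool_archives(prefix):
    """Delete every .la file installed under the given prefix."""
    pattern = os.path.join(prefix, "**", "*.la")
    for la_file in glob.glob(pattern, recursive=True):
        os.remove(la_file)

# e.g. remove_libtool_archives("/opt/spack/opt/spack/linux-.../gcc-10.2.0")
```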
* Add new package: py-rbtools
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Original version added --no-gcc to CFLAGS when compiling with intel
compilers. This does not appear to be needed and indeed causes problems
(see #18894) with newer intel compilers; I have modified it so the flag is
not added for intel@19: (I confirmed it is needed/works for intel@20; based
on comments in #18854, it looks like the same holds for intel@19 as well).
(Also fix old formatting issue flake8 was complaining about)
* Update of py-redis for merlin-1.7.5
* Add hiredis variant and python versions for 3.5.x versions.
* Update var/spack/repos/builtin/packages/py-redis/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added slurm version 20-02-4-1 and support to build slurmrestd
* cleanup formatting
* cleanup libjwt to pass unittests
* missed one boilerplate
* hacking a pass in unittests
* defer to default install routine in libjwt
This commit refactors the computation of the search path
for aclocal into its own method, so that it's easier to reuse
for packages that need to have a custom autoreconf phase.
Co-authored-by: Toyohisa Kameyama <kameyama@riken.jp>
The r-devtools package was not installable due to a few issues.
- The rstudioapi spec was for 0.11.0 but the rstudioapi version is
actually 0.11. This caused an error during concretization.
- Set r-usethis to depend on r-rlang@0.4.3: rather than r-rlang@0.4.3.
- Set r-usethis to depend on r-gh@1.1.0: rather than r-gh@1.1.0.
- Added version r-gh-1.1.0 as it is not currently present in spack.
* Added CUDAHOSTCXX variable needed to compile with cuda and mpi.
* Added guard for setting CUDAHOSTCXX with MPI.
* Acceptable working version of dealii+cuda+mpi.
By default Spack uses the latest (highest version) GCC
compiler available, which might change across updates
of the Github CI environment.
Since a C compiler is always installed and `mpich~fortran`
will result in faster build times, avoid building the FORTRAN
interface as part of the test.
* cpio: Fix issue compiling with newer intel compilers (#18854)
Do not add --no-gcc for recent intel compilers (e.g. 20.x)
* cpio: Remove --no-gcc flag for intel@19 as well as intel@20
Based on comments from @nrichart, removing --no-gcc option for intel@19
as well as intel@20
* Provide draco-7_8_0.
+ Also provide a patchfile for draco-7_6_0 to support CrayPE builds.
+ Version 7.8.0 has a new variant `+caliper`.
+ Sort dependencies alphabetically after grouping by required and optional.
* Remove patchfile that is no longer needed.
+ Newer versions of draco do not require this patch.
+ Older versions of draco are not supported for spectrum-mpi.
* Change new variant +caliper to default to False.
* pandoc: add variant for texlive
Modifies the pandoc package by adding a variant for texlive, which is only needed for PDF output. Enables this variant by default.
* Fix whitespace
Fix for #19095
When given +openmp, add the correct compiler openmp flag to the link
stage. This seems to be required for %intel compilers.
I do this for all compilers, not just %intel, because it does not seem
to harm anything and might be beneficial for others (and just seems
'correct').
* py-scikit-image: bump version
* address reviewer comments
* address reviewer comments
* address reviewer comments
* py-scikit-image : update dependencies : part 2
* cloudpickle is a docs-only dependency; enable it with a variant if necessary
* address reviewer comments
* cleanup build vs run deps
* address reviewer comments
* Initial cut at FLCL spackage. Works with GCC so far.
* Update spackage to list release which supports spack. Add @agaspar as a maintainer. Default unit tests to disabled when building with spack.
* Change url to 0.2.
* Nope, 0.3.
* add package py-lmodule version 0.1.0
Lmodule is tested with lmod >= 7.x. Lmod 6 has a different JSON
structure in its spider output, which is not supported by lmodule
* py-charm4py: new package
Charm++ for python
Installation notes:
1) charm4py ships with its own charm++ tarball. It really wants
to use the version it ships with. It also builds charm++ in a special way to
produce libcharm.so (but not charmc, etc.), so it does not seem
worthwhile to hack it to build using a spack-installed charmpp.
2) Originally, the installation was failing due to unresolved cuda
symbols when setup.py was doing a ctypes.CDLL of libcharm.so (in order
to verify version?). This appears to be due to the fact that
libcharm.so had undefined cuda symbols, but did not show libcudart.so as
a dependency (in e.g. ldd output). To fix this, I had to add
libcudart.so explicitly when linking libcharm.so, but since setup.py
untars a tarball to build libcharm, the solution was a tad convoluted:
2a) Add a patch in spack to py-charm4py which creates a patchfile
"spack-charm4py-setup.py.patch" which will modify a Makefile file (after it
is untarred) to add the flags in env var SPACK_CHARM4PY_EXTRALIBS to
the link command for libcharm.so
2b) The spack patch file also patches setup.py to run patch using the
aforementioned patchfile to patch the Makefile after it is untarred, and
sets the SPACK_CHARM4PY_EXTRALIBS variable appropriately in the setup
environment.
* Update var/spack/repos/builtin/packages/py-charm4py/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-charm4py/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-charm4py: flake8 fixes
remove useless import
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update Package, Pymol 2.4
* Fixed flake8 stuff
* more style fixes
* missing ( at EOF
* added py-pymol 2.3 back
* extra line removal
* white space in empty line removal
* added libpng and py-pyqt5 to prefix_path
* Fix 'unexpected product version' error for macOS 11.0
* Adjustment: add the minimum version that this macOS patch is necessary.
* Adding a keyword to prevent the patch being applied to systems other than darwin (macOS)
* Deleting quotation marks
* AMD ROCm 3.8.0 - roctracer-dev
* Update var/spack/repos/builtin/packages/roctracer-dev/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added py-ply dependency
* remove py-ply
* Update var/spack/repos/builtin/packages/roctracer-dev/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* aomp 3.7.0 and rccl 3.8.0 update
* Bump up to ROCm 3.8.0 support on AOMP
* Create 0001-Add-amdgcn-to-devicelibs-bitcode-names-3.8.patch
* Create 0001-Add-amdgcn-to-devicelibs-bitcode-names.patch
* Update var/spack/repos/builtin/packages/aomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This reverts #18359 and follow-on PRs intended to address issues with
#18359 because that PR changes the hash of all specs. A future PR will
reintroduce the changes.
* Revert "Fix location in spec.yaml where we look for full_hash (#19132)"
* Revert "Fix fetch of spec.yaml files from buildcache (#19101)"
* Revert "Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager"
When we attempt to determine whether a remote spec (in a binary mirror)
is up-to-date or needs to be rebuilt, we compare the full_hash stored in
the remote spec.yaml file against the full_hash computed from the local
concrete spec. Since the full_hash moved into the spec (and is no longer
at the top level of the spec.yaml), we need to look there for it. This
oversight from #18359 was causing all specs to get rebuilt when the
full_hash wasn't found at the expected location.
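A simplified sketch of the corrected lookup (the real spec.yaml layout is more nested than shown):
```python
import yaml

with open("spec.yaml") as f:
    data = yaml.safe_load(f)

# full_hash now lives inside the spec mapping...
remote_full_hash = data["spec"].get("full_hash")
# ...rather than at the top level: data.get("full_hash")
```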
It looks like intel compilers generate warnings for omp pragmas when
the openmp flag is not given, which, due to other flags that are set, get promoted to
errors.
This adds a flag to ignore the pragma omp warnings (icc diagnostic
number 3180 on %intel@14:).
This change makes sure that when we run the pipeline job that updates
the buildcache package index on the remote mirror, we also update the
key index. The public keys corresponding to the signing keys used to
sign the packages were pushed to the mirror as a part of creating the
buildcache index, so this is just ensuring those keys are reflected
in the key index.
Also, this change makes sure the "spack buildcache update-index"
job runs even when there may have been pipeline failures, since we
would like the index always to reflect the true state of the mirror.
* Add rocblas 3.8.0 and add all Tensile deps
* Deploy rocm_smi to the bin/ folder so that it is in $PATH
* BUILD_WITH_TENSILE_HOST=ON on 3.7.0+ and fix flake8
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
[py-particle] format
[py-particle] switch to pypi downloads
[py-particle] specify dependencies in more details
[py-particle] format
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since those files currently exist in buildcaches (in S3 buckets) with
potentially different content types, we should be less restrictive in
what content types we accept when attempting to fetch them. This PR
removes the content type constraint so any file with the matching
name will be found.
* Remove duplication of reconstructed RPATHs caused by multiple
identical entries in prefixes dictionary
* Don't rewrite RPATHs if relative RPATHs are unchanged because the
directory layout is unchanged
* Need to check that the binary is not a Mach-O binary in a Linux package or an ELF binary in a macOS package.
* use sys.platform
* Darwin -> darwin for sys.platform
* Created +python_deps variant
- the timemory python bindings can still be imported without these runtime packages, and forcing the dependency by default significantly increases the spack install time
* Added conflict
- added conflicts('+python_deps', when='~python')
* rocm-3.8.0 updates for hipblas,rocsolver,rocm-opencl
* rocm-3.8.0 updates to rocalution; rename and change rocmvalidationsuite
* rocm-3.8.0 update to miopen-hip
* Revert "rocm-3.8.0 updates for hipblas,rocsolver,rocm-opencl"
This reverts commit 2542e8b1be.
* rocm-3.8.0 changes for rocsolver and hipblas
* new package: py-gitpython
* Update var/spack/repos/builtin/packages/py-gitpython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@earth>
* Add more updates for kokkos 3.2 release, particularly nvcc-wrapper
* Use an ordinary Package
Co-authored-by: Jeremiah J Wilke <jjwilke@kokkos-dev-2.sandia.gov>
* Rework spack.util.web.list_url()
list_url() now accepts an optional recursive argument (default: False)
for controlling whether to only return files within the prefix url or to
return all files whose path starts with the prefix url. Allows for the
most efficient implementation for the given prefix url scheme. For
example, only recursive queries are supported for S3 prefixes, so the
returned list is trimmed down if recursive == False, but the native
search is returned as-is when recursive == True. Suitable
implementations for each case are also used for file system URLs.
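A simplified sketch of the recursive flag's semantics, using a flat key listing in place of a real S3 query:
```python
def list_url(keys, recursive=False):
    """keys: object paths relative to the prefix url."""
    if recursive:
        return keys                               # everything under the prefix
    return [k for k in keys if "/" not in k]      # direct children only

keys = ["index.json", "_pgp/index.json", "_pgp/ABCD1234.pub"]
print(list_url(keys))                  # ['index.json']
print(list_url(keys, recursive=True))  # all three entries
```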
* Switch to using an explicit index for public keys
Switches to maintaining a build cache's keys under build_cache/_pgp.
Within this directory is an index.json file listing all the available
keys and a <fingerprint>.pub file for each such key.
- Adds spack.binary_distribution.generate_key_index()
- (re)generates a build cache's key index
- Modifies spack.binary_distribution.build_tarball()
- if tarball is signed, automatically pushes the key used for signing
along with the tarball
- if regenerate_index == True, automatically (re)generates the build
cache's key index along with the build cache's package index; as in
spack.binary_distribution.generate_key_index()
- Modifies spack.binary_distribution.get_keys()
- a build cache's key index is now used instead of programmatic
listing
- Adds spack.binary_distribution.push_keys()
- publishes keys from Spack's keyring to a given list of mirrors
- Adds new spack subcommand: spack gpg publish
- publishes keys from Spack's keyring to a given list of mirrors
- Modifies spack.util.gpg.Gpg.signing_keys()
- Accepts optional positional arguments for filtering the set of keys
returned
- Adds spack.util.gpg.Gpg.public_keys()
- As spack.util.gpg.Gpg.signing_keys(), except public keys are
returned
- Modifies spack.util.gpg.Gpg.export_keys()
- Fixes an issue where GnuPG would prompt for user input if trying to
overwrite an existing file
- Modifies spack.util.gpg.Gpg.untrust()
- Fixes an issue where GnuPG would fail for inputs that were not key
fingerprints
- Modifies spack.util.web.url_exists()
- Fixes an issue where url_exists() would throw instead of returning
False
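A sketch of regenerating the key index described above from the .pub files under build_cache/_pgp (the index.json schema shown is an assumption):
```python
import json
import os

def generate_key_index(pgp_dir):
    """Write index.json listing one entry per <fingerprint>.pub file."""
    fingerprints = [name[:-len(".pub")]
                    for name in os.listdir(pgp_dir) if name.endswith(".pub")]
    index = {"keys": {fpr: {} for fpr in sorted(fingerprints)}}
    with open(os.path.join(pgp_dir, "index.json"), "w") as out:
        json.dump(index, out)
```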
* rework gpg module/fix error with very long GNUPGHOME dir
* add a shim for functools.cached_property
* handle permission denied error in gpg util
* fix tests/make gpgconf optional if no socket dir is available
* Flang master branch is now the preferred version.
* Flang master branch can now use LLVM 9
* Remove master as this was never used by Flang.
* Add LLVM-Flang release_90 and release_90.
Magma is not currently compatible with CUDA-11. While this is reflected
in the package, it is done with a comment in a `depends_on` directive,
which has the effect of trying to install a version of CUDA that may be
different from the one in the current environment, without any message
to the end user. A `conflicts` is a better way to handle this.
* Disable bash completion by default.
* flake8
* Adding explicit dependence on libuuid
* Adding explicit dependence on cryptsetup
This way we don't pick up host crypto packages by mistake.
* Fixing the completion directory.
* Update var/spack/repos/builtin/packages/util-linux/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* flake8
* Removing libuuid linkage according to @michaelkuhn on #18696
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* trigger ascent e4s pipeline on merge to spack develop
* change pipeline name: ecpcitest/e4s is the pipeline that will be triggered on merges to develop; it's the E4S use-case.
This PR adds the current release version of mumax and tweaks the install
of the previous beta version.
- Set the url parameter to reflect the release version over the beta
version. Hopefully, this will be consistent going forward.
- Set an explicit url for the previous beta version.
- Accept values for `cuda_arch`. The previous version had its own list
but the release version does not.
- Replace the built in cuda compute capabilities list with the one
provided by Spack for the 3.10beta version.
This PR fixes a couple of things with the libbeagle package.
- libbeagle can only be built for one GPU type. Add a test for that.
- version 2 had the arch statement in
libhmsbeagle/GPU/kernels/Makefile.am but version 3 has it in
configure.ac. Put the variant specified value in configure.ac for
consistency.
Due to recent changes in the `netcdf-c` package, it is now necessary to explicitly request a non-mpi-enabled hdf5 build if building a non-mpi-enabled seacas.
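For example, something like:
```console
$ spack install seacas~mpi ^hdf5~mpi
```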
* libvdwxc: unbreak concretization, request fftw-api
mixing both fftw and fftw-api in a dependency tree can trigger the
following:
```
$ spack spec cp2k@master +sirius
==> [2020-09-16-12:36:06.552981] sirius applying constraint gsl
==> [2020-09-16-12:36:06.554270] sirius applying constraint openblas@0.3.10%gcc@7.5.0~consistent_fpcsr~ilp64+pic+shared threads=none arch=linux-opensuse_leap15-sandybridge
Traceback (most recent call last):
File "./bin/spack", line 64, in <module>
sys.exit(spack.main.main())
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/main.py", line 762, in main
return _invoke_command(command, parser, args, unknown)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/main.py", line 490, in _invoke_command
return_val = command(parser, args)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/cmd/spec.py", line 103, in spec
spec.concretize()
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2228, in concretize
user_spec_deps=user_spec_deps),
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2716, in normalize
visited, all_spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2654, in _normalize_helper
dep, visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2613, in _merge_dependency
visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2654, in _normalize_helper
dep, visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2554, in _merge_dependency
provider = self._find_provider(dep, provider_index)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2489, in _find_provider
providers = provider_index.providers_for(vdep)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/provider_index.py", line 80, in providers_for
return sorted(s.copy() for s in result)
File "/data/tiziano/debug-spack/spack2/lib/spack/llnl/util/lang.py", line 249, in <lambda>
lambda s, o: o is not None and s._cmp_key() < o._cmp_key())
TypeError: '<' not supported between instances of 'str' and 'NoneType'
```
while at the same time disallowing MKL as an fftw provider.
Solving both issues by depending on `fftw-api@3` instead and adding a
conflict on `^fftw~mpi` when using `+mpi` (thanks to alalazo).
* cp2k: use conflicts instead of runtime checks for fftw/openblas variants
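In Spack's package DSL, the libvdwxc fix above amounts to something like:
```python
# sketch of the change in libvdwxc's package.py
depends_on("fftw-api@3")                # rely on the virtual fftw-api
conflicts("^fftw~mpi", when="+mpi")     # an MPI build needs an MPI-enabled fftw
```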
* Initial Draft of Cosmoflow Spackage
Need to add in logic to streamline cpu/gpu builds
* Added ~cuda logic to cosmoflow spackage
Added logic to support a ~cuda build for cosmoflow
* Requested Changes to Cosmoflow Spackage
Made requested changes to cosmoflow spackage
MatIO development has switched to github from sourceforge. Updated the `git` and `url` variables and added the four new versions (1.5.14 -- 1.5.17) that have been released since the last update of this package.
* qbox: install to correct directory structure
* qbox: Have qb executable put in bin rather than src subdir
* qbox: Fix python script shebangs to use python from path
* qbox: Add dependencies on gnuplot, python2 for utilities
* qbox: fix flake8 issue
* qbox: Add $prefix/util to PATH
* Initial CRADL Spackage Work
Currently resolving ```--single-version-externally-managed``` error
* Fixed GPUtil Issues
Thanks to Vinay Ramakrishnaiah for overwriting install
* Finished CRADL Install Function
Finished CRADL install function which is basically copying the scripts
to the install directory. Also resolved flake8 issues for PR purposes
Update pipelines documentation to describe how 'tags', 'variables',
'image', 'before_script', 'script', and 'after_script' can be
supplied at the top level, to be used by any of the runner mappings,
and also overridden by any of the runner mappings.
Also show an example of capturing the custom spack SHA at pipeline
generation time, so all jobs are sure to run with the same version
of spack, as a means to illustrate the $env:VARIABLE_NAME syntax.
* Add new package: webbench
* Update var/spack/repos/builtin/packages/webbench/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cpio: add --rtlib=compiler-rt for %fj
* cpio: simplify if
* Update var/spack/repos/builtin/packages/cpio/package.py
This seems better.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package: hping
* Update var/spack/repos/builtin/packages/hping/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/hping/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Fix flake8 errors
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Use the config path instead of the basename
* Removing unused variables
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Test
Making sure that if there are two included config files with the same basename, they are both applied
* Edit test assert
Co-authored-by: Greg Becker <becker33@llnl.gov>
* ncurses: adding external support.
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixing includes.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Fixes #18441
When writing an environment, there are cases where the lock file for
the environment may be removed. In this case there was a period
between removing the lock file and writing the new manifest file
where an exception could leave the manifest in its old state (in
which case the lock and manifest would be out of sync).
This adds a context manager which is used to restore the prior lock
file state in cases where the manifest file cannot be written.
* Checksummed New Flux Versions
Checksummed new flux versions to let spack detect them
* Added CXXFlags to build Flux-sched
Added missing cxxflags to build flux-sched
* Adding Cuda Variant to SW4Lite
Added cuda variant of sw4lite as per guidance in README
* Updated SW4Lite+cuda to Current Header Conventions
Updated sw4lite+cuda to use current conventions for spackage include
dirs
* Fixing Flake8 Issue with Sw4lite+cuda Fix
Fixed overly long line and further underlined sticky note reminding me
to run flake8 BEFORE pushing
* Switching to Spack Compiler Wrapper
Switching to spack compiler wrapper for consistency
* Orca: Add new versions.
* Orca: Support OpenMPI without the legacy wrappers.
By default, Spack builds OpenMPI without the legacy wrappers when using the Slurm scheduler. This breaks Orca since its binaries are hardcoded to call "mpirun". To work around this issue, add an "mpirun" wrapper which calls "srun" when required.
* cp2k: do not support ~openmp for v8+
* sirius: version bump
* cp2k: fix overlapping deps for elpa
fixes #18029
* cp2k: update SIRIUS dependency for v8+
* spfft: requires CMake 3.11+
* cp2k: fix build with +sirius
* darshan-util: remove return(-1) from void function
* Update var/spack/repos/builtin/packages/darshan-util/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/darshan-util/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* gamess-ri-mp2-miniapp: initial import
* flake8
* Update var/spack/repos/builtin/packages/gamess-ri-mp2-miniapp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This is a special case of overriding, since each section is matched against the current spec.
The trailing ':' for sections with override is now removed when parsing the configuration, so the special handling for the modules configuration stopped working, but it went unnoticed.
Starting with OpenSceneGraph 3.5.5, support for windows managed by Qt
has been moved to the seperate project osgQt. Hence, a dependency on Qt
is not needed any longer for version 3.5.5 or newer.
In order to still satisfy the dependency on OpenGL, a depends_on('gl')
has been added.
Without setting the build environment, the installation fails with:
```
1 error found in build log:
35946 fmtutil [INFO]: /usr/local/pkg/Installs/linux-ubuntu18.04-skylake_avx512/gcc7.4.0/texlive/20190410/rgs2nakycorkgzno/t
exmf-var/web2c/pdftex/pdfcslatex.fmt installed.
35947 fmtutil [INFO]: Disabled formats: 6
35948 fmtutil [INFO]: Successfully rebuilt formats: 45
35949 fmtutil [INFO]: Total formats: 51
35950 fmtutil [INFO]: exiting with status 0
35951 ==> [2020-09-07-21:23:21.482745] '/usr/local/pkg/Installs/linux-ubuntu18.04-skylake_avx512/gcc7.4.0/texlive/20190410/
rgs2nakycorkgzno/bin/x86_64-linux/mtxrun' '--generate'
>> 35952 /usr/bin/env: 'texlua': No such file or directory
```
Maybe there is a better way...
Cython requires a library that is available in Python 3.8, or provided
by setuptools before Python 3.8. This specifies setuptools as a run
dependency to allow running with Python < 3.8
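In the package DSL, that presumably looks something like this (the exact when-constraint is an assumption):
```python
# sketch for py-cython's package.py
depends_on("py-setuptools", type="run", when="^python@:3.7")
```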
`spack install --yes-to-all` doesn't actually make the build non-interactive,
but that is why people typically use it. This documents that you must also
specify `--no-checksum` for a fully non-interactive build.
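That is:
```console
$ spack install --no-checksum --yes-to-all <spec>
```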
* Modules: Deduplicate suffixes but don't sort them.
The suffixes' order is defined by the order in which they appear in the configuration file.
* Modules: Modify tests to use spack_yaml.load_config.
spack_yaml.load_config ensures that the configuration is stored in an ordered manner. Without this change, the behavior of the tests did not match Spack's.
* Modules: Tweak the suffixes test to better catch ordering issues.
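The order-preserving deduplication boils down to something like:
```python
def dedupe_suffixes(suffixes):
    """Drop duplicates while keeping first-seen (configuration) order."""
    seen = set()
    return [s for s in suffixes if not (s in seen or seen.add(s))]

print(dedupe_suffixes(["debug", "openmp", "debug"]))  # ['debug', 'openmp']
```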
* new package: py-textblob
add variant to py-nltk to allow for data download/installation
add dependencies to py-nltk so that bin/nltk works
* add resources and resource generation script
* spack config: default modification scope can be an environment
The previous model was that environments are the highest priority config
scope for config reading operations, but were not considered for config
writing operations. Now, the active environment is the highest priority
config scope for both reading and writing operations.
Now spack config add, spack external find and spack compiler set environment
configuration in the environment by default if an environment is active. This is a
change in default behavior for these routines, but better matches the mental
model for an environment taking precedence over the user's default config file.
* add scope argument to 'spack external find' to choose non-default scope
* Increase testing for config modifications on environments
Co-authored-by: Gregory Becker <becker33@llnl.gov>
At some point in the build phase a script
spack-src/scripts/convert-template
has a shebang looking for python in the path.
Currently this picks up the system python if it is in the invoker's path, but should
be using python from spack, so add a build dependency on python.
Many system-installed binaries (at least in Debian) are built against a
libtinfo.so that has versioned symbols. If spack builds a version without this
functionality, and it winds up in the user's LD_LIBRARY_PATH via spack load,
system binaries will begin to complain.
```
$ less log.txt
less: /opt/spack/.../libtinfo.so.6: no version information available (required by less)
```
Co-authored-by: Luke D'Alessandro <ldalessa@uw.edu>
The 'external_modules' attribute on a Spec, when read from a YAML
configuration file, may contain extra formatting that is lost when
that Spec is written-to/read-from JSON format. This was resulting in
a hashing instability (when the Spec was read back, it would report a
different hash). This commit adds a function which removes the extra
formatting from 'external_modules' as it is passed to the Spec in
__init__ to ensure a consistent hash.
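A sketch of the normalization idea (function name hypothetical):
```python
def _normalize_external_modules(modules):
    """Coerce YAML-loaded module entries to plain strings so the value
    (and thus the spec hash) survives a JSON round-trip."""
    if modules is None:
        return None
    return [str(m) for m in modules]
```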
* Add rocm 3.7.0 libs
* Make 3.7.0-only dependency on numactl explicit
* Add rocm-device-libs dep to rocm-clang-ocl
* Update the cmakelists dir in rocm-debug-agent
* Make rocm-debug-agent work on 3.7.0
* Disable tensile host; following rocm-arch recommendations
* ldak: new package at 5.1
* flake8
* Re-run tests
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-gitdb
* Update var/spack/repos/builtin/packages/py-gitdb/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Set GOPATH in build environment to avoid creating files in the user's
default GOPATH (e.g. ~/go).
* Support for external find.
* Added latest release 0.74.3.
Remove prior built-in Trilinos subrepository.
Added a Trilinos conflict discovered while documenting ForTrilinos:
```
***
*** ERROR: Setting Trilinos_ENABLE_SEACASExodus=OFF which was 'ON' because SEACASExodus has a required library dependence on disabled TPL Netcdf!
***
```
As detailed in https://bugs.python.org/issue33725, starting new
processes with 'fork' on Mac OS is not guaranteed to work in general.
As of Python 3.8 the default process spawning mechanism was changed
to avoid this issue.
Spack depends on the fork-based method to preserve file descriptors
transparently, to preserve global state, and to avoid pickling some
objects. An effort is underway to remove dependence on fork-based
process spawning (see #18205). In the meantime, this allows Spack to
run with Python 3.8 on Mac OS by explicitly choosing to use 'fork'.
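A minimal sketch of opting into 'fork' explicitly:
```python
import multiprocessing

# On macOS with Python 3.8+, the default start method is 'spawn';
# request 'fork' to preserve file descriptors and global state.
ctx = multiprocessing.get_context("fork")
p = ctx.Process(target=print, args=("hello from the child",))
p.start()
p.join()
```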
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* [py-torch-geometric] depends on py-torch-sparse
* [py-torch-geometric] setting TORCH_CUDA_ARCH_LIST
* [py-torch-geometric] added the rest of the dependencies
* [py-torch-geometric] added cuda variant and added more build env vars
* [py-torch-geometric] added variant info for dependencies
* [py-torch-geometric] flake8
* [py-torch-geometric] add variant description
* HPCC Benchmark: added HPC Challenge (HPCC) benchmark
* HPCC Benchmark: modified error message on lack of fftw2 interface in MKL
* hpcc: fixed styling, added one more installation example
* hpcc: styling fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* hpcc: changed include and lib location setter
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* hpcc: fixed styling, added one more installation example
* hpcc: removed readme.md
* hpcc: develop repo now is in github
* hpcc: march arguments are set explicitly in the case of intel compilers; added the -restrict flag, which is needed for older intel compilers (at least <=19.0.5.281)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* wrf: new package
* wrf: fix install dir
* wrf: ndown location
* Add more compiler and nesting options to wrf package
* Fix configure that didn't find pgf90, use tempfile and compile in parallel
* WRF v4.2 with parallel I/O support through pnetcdf
Signed-off-by: michael laufer <michael.laufer@toganetworks.com>
* extend Package, compiler wrapper now used, small fixes
Signed-off-by: michael laufer <michael.laufer@toganetworks.com>
* Update var/spack/repos/builtin/packages/wrf/package.py
fixed typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Levi Baber <baberlevi@gmail.com>
Co-authored-by: eXact lab <info@exact-lab.it>
Co-authored-by: michael laufer <michael.laufer@toganetworks.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-update-checker
* add test deps
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove lint stuff.
Co-authored-by: Sinan81 <Sinan81@earth>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`configure` script of Modules 4.5.2 is a bit too strict and breaks when
special options like `--disable-dependency-tracking` are set. This issue
will be fixed on Modules project starting version 4.5.3
(cea-hpc/modules#354).
This change adapts `configure` options set when installing version 4.5.2
to avoid options unrecognized on this version.
Fix #18420
* libvterm: renumber version and add 1.0.3
neovim: build on aarch64
* Remove unneeded comment.
* libvterm: newer bazaar snapshot version is set to version 0.0.
neovim: change for libvterm version change, and libtermkey version bug is fixed.
* update libvterm versions.
* Add new package: byte-unixbench
* refine install flow
* Update var/spack/repos/builtin/packages/byte-unixbench/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package: leptonica
* Update var/spack/repos/builtin/packages/leptonica/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-spacy: new version 2.3.2
update en-core-web-sm to @2.3.1
add en-vectors-web-lg@2.3.0
* update deps
* wasabi
Co-authored-by: Andrew Elble <aweits@localhost.localdomain>
* Update version for package Draco
+ Add support for `draco-7.7.0`.
+ Introduces new `+cuda` variant. This variant is only allowed in version
`7.7.0:`.
+ Restrict `random123` to compatible versions.
+ Restrict `libquo` to compatible versions.
+ Moving forward, require `python@3:`
+ Moving forward, the `+superlu_dist` variant is no longer supported.
+ Improve printed output for `--test` mode by adding `ctest` option
`--output-on-failure`
+ Provide a patch to support IBM Spectrum-MPI in version `7.7.0:`
+ Provide a patch to allow variant `~cuda` to actually disable GPU portions of
the code when a GPU is discovered on the local system.
* Remove unnecessary function decoration.
* Adding externals for bison and flex
Added because bison actually pulls in a ton of stuff.
* Need to escape parentheses.
* Need to add re package.
* Adding re package.
* spectrum-mpi: adding external support.
* Package is tested, works on LLNL lassen
* Spectrum external now detects the correct compiler
* Changing code to not output all compilers
Done per becker33's request on #18055
If Thyra isn't explicitly enabled at the package level, trilinos fails
to build.
```
/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-stage/spack-stage-trilinos-12.18.1-vfmemkls4ncta6qoptm5s7bcmrxnjhnd/spack-src/packages/muelu/adapters/stratimikos/Thyra_XpetraLinearOp_def.hpp:167:15: error:
no member named 'ThyraUtils' in namespace 'Xpetra'
Xpetra::ThyraUtils<Scalar,LocalOrdinal,GlobalOrdinal,Node>::toXpetra(rcpFromRef(X_in), comm);
~~~~~~~~^
```
* py-basemap
* Updated versions + URL attribute
* Update var/spack/repos/builtin/packages/py-basemap/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-basemap/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removed unnecessary comment
* flake8
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding external support for mvapich2.
This picks up all the options that are currently settable by
the spack package. It also detects the compiler and sets it
appropriately.
* Removing debugging printing.
* Adding changes suggested by @nithintsk
+ Added version 2.1.9
+ Previously the SZ package incorrectly depended on CMake without a
version dependency, but actually version 3.13 or newer is required
+ Added myself as a maintainer for the SZ spack package
This is a bug release with some new features and bug fixes. Among them:
[Batch] Set number of MPI processes for SLURM. (Ben Tovar)
[General] Use the right signature when overriding gettimeofday. (Tim
Shaffer)
[Resource Monitor] Add context-switch count to final summary. (Ben
Tovar)
[Resource Monitor] Fix kbps to Mbps typo in final summary. (Ben Tovar)
[WorkQueue] Update example apps to python3. (Douglas Thain)
* samtools: Add version 0.1.8 for OSS soapdenovo-trans.
* Add depend on zlib and samtools to build on aarch64.
* soapdenovo-trans: Change the condition of the dependency on zlib and samtools.
* New package: cxxopts
* Use +unicode instead of unicode=True
- Make the unicode option more explicit
* Add two new variants to spack for upcoming 1.5, stable and develop
* Add as maintainer
* Add depends_on clauses
* Remove unrelated change
I know that it's just an example, but I was trying to figure out what was going on and it wasn't making sense....
`tput sgr0` resets the terminal state (http://linuxcommand.org/lc3_adv_tput.php) and I can't see any reason to do it twice. Deleting the second occurrence doesn't seem to break the fancy prompt effect.
* qgis
* Update package.py
QGIS 3.12.1 can use PROJ >= 4.9.3. Therefore both version restrictions on PROJ were incorrect.
https://github.com/qgis/QGIS/blob/final-3_12_1/INSTALL
* Update package.py
Add explanation to (hopefully temporary) removal of hdf5 dependency.
* Remove overly restrictive GRASS version number.
* flake8
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Set the location for dependencies by explicitly specifying both
the include and lib paths. This permits handling cases where
the libraries are installed in lib64 instead of lib.
fixes #17556, fixes #10842, closes #18150
Compilers can have strange versions, as the version is provided by the user. We know the real version internally (by querying the compiler), so expose it as a property and use it in places where we don't trust the user. Eventually we'll refactor this with compilers as dependencies, but this is the best fix we've got for now.
- [x] Make `real_version` a property and cache the version returned by the compiler
- [x] Use `real_version` to make C++ language level flags work
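A sketch of the cached-property pattern (the compiler query is simplified to a gcc-style -dumpversion call):
```python
import subprocess

class Compiler:
    def __init__(self, cc, version):
        self.cc = cc              # path to the compiler executable
        self.version = version    # user-provided, not trusted
        self._real_version = None

    @property
    def real_version(self):
        """Version reported by the compiler itself, cached on first use."""
        if self._real_version is None:
            out = subprocess.run([self.cc, "-dumpversion"],
                                 capture_output=True, text=True)
            self._real_version = out.stdout.strip()
        return self._real_version
```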
Restores the fetching progress bar sans failure outputs; restores non-debug reporting of using fetch cache for installed packages; and adds a unit test.
* Add status bar check to test and fetch output when already installed
Some of the feature flags are named differently and clwb is missing on
my i7-1065G7. cascadelake and cannonlake might have similar problems but
I do not have access to those architectures to test.
- add cuda variant, enabled by default, but conflicting with
strumpack@:3.9.999
- add zfp variant, enabled by default, but conflicting with
strumpack@:3.9.999
- update minimum CMake version to 3.11
- for version 4.0.0:, do not use mpi wrappers. v4.0.0 uses CMake
MPI targets
- for version 4.0.0, add dependency on butterflypack@1.2.0:
- remove versions 3.1.0 and older
- make parmetis variant True by default
- add TODO for slate variant (spack package not ready yet)
While I believe there must have been a reason to restrict libtool to <=
2.4.2, adios compiles just fine with libtool 2.4.6 for me.
In fact, without this change, I'm getting this error:
```
libtool: Version mismatch error. This is libtool 2.4.6, but the
libtool: definition of this LT_INIT comes from libtool 2.4.2.
libtool: You should recreate aclocal.m4 with macros from libtool 2.4.6
```
This doesn't make much sense, since spack did build libtool@2.4.2 as a
dependency, and was supposedly trying to use it. My guess is that on
this system (NERSC's cori) the system libtool in /usr/bin, which is
2.4.6 somehow got picked up partially.
Semi-recently the lua spackage was updated to explicitly add libtinfow
to the lua build line. Ncurses provides this but only when the +termlib
variant is enabled
* New interface reconstruction package
* forgot to put in CMake option for Jali
* cleanup whitespace
* fix lines with more than 79 chars
* more long line cleanup
* fix typo WONTON_ENABLE_Kokkos ---> TANGRAM_ENABLE_Kokkos
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* make_package_relative: relocate rpaths on cray
* relocate_package: relocate rpaths on cray
* platforms: add `binary_formats` property
We need to know which binary formats are supported on a platform so we
know which types of relocations to try. This adds a list of binary
formats to the platform and removes a bunch of special cases from
`binary_distribution.py`.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* New interface reconstruction package
* forgot to put in CMake option for Jali
* cleanup whitespace
* fix lines with more than 79 chars
* more long line cleanup
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
See #18033
libssh seemed to detect and link to system krb5 libraries if found
to provide gssapi support, causing issues/system dependencies/etc.
We add a boolean variant gssapi.
If +gssapi, the spack krb5 package is added as a dependency.
If ~gssapi, the CMake flags are adjusted to not use gssapi so that
it does not link to any krb5 package.
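In the package DSL, the plumbing presumably looks something like this (the CMake flag name is an assumption):
```python
# sketch for libssh's package.py
variant("gssapi", default=True, description="Build with gssapi (krb5) support")
depends_on("krb5", when="+gssapi")

def cmake_args(self):
    return [self.define_from_variant("WITH_GSSAPI", "gssapi")]
```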
xz-utils already builds a shared library. The +pic variant adds the
compiler pic flag to the static archive so that it can be linked into
another shared library.
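A sketch of the pattern in the package DSL (handler details may differ from the actual xz package):
```python
variant("pic", default=False,
        description="Compile the static archive as position-independent code")

def flag_handler(self, name, flags):
    # inject the compiler's PIC flag into cflags when +pic is requested
    if name == "cflags" and "+pic" in self.spec:
        flags.append(self.compiler.cc_pic_flag)
    return (flags, None, None)
```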
* Add Collier and SysCalc recipes
* Remove extra syscalc version
* Build collier with -j1 for @:1.2.4
* Add recipe for gosam-contrib
* Update gosam-contrib recipe with 'provides'
* Madgraph recipe, first version
* Finalize madgraph recipe + flake8
* Make py2 version of madgraph default; fix hash for syscalc; fix patch
* Handle virtual packages (#3)
* Update package.py
* Update packages.yaml
* Remove virtual packages - pt. 1
* Remove virtual packages - pt. 2
* Changes from review - pt. 1
* Changes from code review - pt. 2
* Update var/spack/repos/builtin/packages/collier/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/madgraph5amc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add hash for version 2.7.2 (available in our private mirror)
* Fixes for 2.7.3 family
* Patches for 2.7.3{.py3,}{.atlas,}
* Fix hash of syscalc
* Hack to fix concretization (2.7.3 matches 2.7.3.py3)
* Add conflict statement (reported to devs)
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Delete madgraph5amc-2.7.2.atlas.patch
* Delete madgraph5amc-2.7.2.patch
* Update package.py
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding new packages Mvapich2x and Mvapich2-GDR which can be installed only via binary mirrors
* Added docstring descriptions to both packages
* Removed variant wrapper for cuda dependencies
* Fixed multiple flake8 errors
* Updated APIs to pass unit tests
* Updated APIs for MVAPICH2-X package and fixed flake8 warnings for MVAPICH2-GDR
* Changed url back to single line
* Removed extra parentheses around URL string
Co-authored-by: nithintsk <nithintsk@github.com>
* [root] add dataframe cmake option
@chissg @HadrienG2 @drbenmorgan
This has been a separate cmake option since v6-19, I believe: 31292b9082
It should default to true upstream -- not sure why, but this recipe sets it to off.
I could add a variant too, but since it has become an integral part of ROOT and doesn't introduce extra dependencies, I'd propose to just set it to true like I do here.
* Update package.py
Before this PR, packages.yaml files that contained an
empty "paths" or "modules" attribute were not updated
correctly, since the update function was not reporting
them as changed after the update.
This PR fixes that issue and adds a unit test to
avoid regression.
This commit adds output to the "spack external find"
command to inform users of the result of the operation.
It also fixes a bug introduced in #17804 due to the fact
that a function was not updated to conform to the new
packages.yaml format (_get_predefined_externals).
* Update the change to add gomp compatibility to llvm-openmp.
* Update the change to add gomp compatibility to llvm-openmp, using append instead of extend.
* Fix flake8 issue.
Co-authored-by: Jim Galarowicz <jgalarowicz@newmexicoconsortium.org>
* pFUnit: Added support for version 4
pFUnit v4 uses submodules, so we must fetch from the repo rather
than grabbing the tarball (see #11642).
* pFUnit: Added conflicts
pFUnit 4 causes an internal compiler error with gcc 7.2.0, and
several pFUnit versions are incompatible with shared libraries.
* pFUnit: Added conflicts for version 4
Version 4 uses Fortran 2008 features and cannot be built with gcc
compilers prior to 8.4.
* pFUnit: Fixed conflicts/dependencies as suggested
* pFUnit: Version 4 no longer fetches from git
Checksummable files are fetched instead.
* pFUnit: Simplify major version check
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pFUnit: Removed unnecessary patch for v4
The patch is still applied to v3.
* pFUnit: Modified MPI flag for v4
pFUnit v3 and v4 use different CMake flags to enable/disable MPI
support. Also added a conflict for v3 with MPI enabled using
gfortran 10, since newer gfortran is more finicky about datatypes.
* pFUnit: Rearranged mpi logic
* pFUnit: changed m4 to a build dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pFUnit: Added URL back
I did not realize it was needed by "spack versions" and
"spack checksum". Thanks @adamjstewart!
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When the user explicitly sets ~fortran, mpich builds without fortran
support. This will make building C/C++ libraries using clang easier,
since clang does not offer a fortran compiler by default (yet).
Since the user has to disable Fortran support explicitly, this change
is not breaking.
* Handle uninstalled rootspecs in buildcache
- Do not parse specs / find matching specs when in an environment and no
package string is provided
- Error only when a spec.yaml or spec string is not installed. In an
environment it is fine when the root spec does not exist.
- When iterating through the matched specs, simply skip uninstalled
packages
* Run Python2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty, which does not have a patchelf package. The script will skip building on macOS
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build'), relying instead on a
test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
Style and documentation tests take just a few minutes
to run. Since in Github actions one can't restart a single
job but needs to restart an entire workflow, here we group
tests with similar duration together.
- [x] Remove references to `master` branch
- [x] Document how release branches are structured
- [x] Document how to make a major release
- [x] Document how to make a point release
- [x] Document how to do work in our release projects
* Move flake8 tests on Github Actions
* Move shell test to Github Actions
* Moved documentation build to Github Action
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
This is needed because libcuda is used by the driver,
whereas libcudart is used by the runtime. CMake searches
for cudart instead of cuda.
On LLNL LC systems, libcuda is only found in compat and
stubs directories, meaning that the lookup of libraries
fails.
`spack -V` stopped working when we added the `releases/latest` tag to
track the most recent release. It started just reporting the version,
even on a `develop` checkout. We need to tell it to *only* search for
tags that start with `v`, so that it will ignore `releases/latest`.
`spack -V` also would print out unwanted git eror output on a shallow
clone.
- [x] add `--match 'v*'` to `git describe` arguments
- [x] route error output to `os.devnull`
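Sketched with plain subprocess (Spack uses its own executable wrapper, so treat this as illustrative):

```python
import os
import subprocess


def describe_version():
    # Only match release tags (v0.15.1, ...) and silence git's stderr,
    # which is noisy on shallow clones.
    with open(os.devnull, 'w') as devnull:
        try:
            out = subprocess.check_output(
                ['git', 'describe', '--tags', '--match', 'v*'],
                stderr=devnull)
        except subprocess.CalledProcessError:
            return None
    return out.decode().strip()
```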
`spack buildcache list` was trying to construct an `Arch` object and
compare it to `arch_for_spec(<spec>)` for each spec in the buildcache.
`Arch` objects are only intended to be constructed for the machine they
describe. The `ArchSpec` object (part of the `Spec`) is the descriptor
that lets us talk about architectures anywhere.
- [x] Modify `spack buildcache list` and `spack buildcache install` to
filter with `Spec` matching instead of using `Arch`.
- [x] Make it easier to get a `Spec` with a proper `ArchSpec` from an
`Arch` object via new `Arch.to_spec()` method.
- [x] Pull `spack.architecture.default_arch()` out of
`spack.architecture.sys_type()` so we can get an `Arch` instead of
a string.
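The new helper is roughly this shape (a hedged sketch, assuming `spack.spec` is imported; not the verbatim implementation):

```python
def to_spec(self):
    # Return an anonymous Spec whose ArchSpec mirrors this Arch, so
    # callers can use Spec matching instead of comparing Arch objects.
    spec = spack.spec.Spec()
    spec.architecture = spack.spec.ArchSpec(str(self))
    return spec
```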
* Loosen Axom's variants, add shared variant for axom, fix clang/xlf rpath'ing problem on blueos
* Fix flake8
* Add main branch to list of known git branches
fixes #18028
Since external packages now support multiple modules,
the correct thing to do is to check if the name of the
*first* module to be loaded contains the string "cray".
`cmake @3.16.3` is the version provided by Ubuntu 20.04. Adding this version here avoids the warning
```
==> Warning: Missing a source id for cmake@3.16.3
```
when using the system `cmake`.
* Spack recipes for ROCm Stage 1 Build components
* fix flake8 errors
* fixes for flake8 errors
* Add a patch for cmake 3.x suport
* Fix rpath issue where hsa-rocr-dev does not allow it to be filled in by spack
* Remove inherited cmake args from comgr
* Make hsakmt-roct compile: no -Werror because of a const cast in numa, and actually add the numa dependency
* Remove redundant cmake args which is inherited
* Fix some dependencies
* Fix some python 2.x compatibilities
* Add amd gpu targets to rocfft
* Make comgr a link dep of rocm-dbgapi and remove redundant cmake args
* Remove redundant cmake args
* Remove more redundant cmake args
* Final redundant args
* Use cmake 3.x instead of a fixed version
* Remove random variable
* Use installed rocclr instead of nonexistent directory
* Don't build outside the staging folder
* Deploy some missing cmake target file
* Formatting
* Fix target list
* Properly handle the rocclr dependency
* Formatting
* Fix vermin test
* Make all 3.5.0 packages depend exactly on each other
* Add a few missing link dependencies
* Fix flake8
* Remove some other redundant flags
* Add gcc install prefix for gcc builds of llvm-amdgpu
* review changes for the spack recipes
* Do not hard-code versions
* Fix atmi install
- no more relative rpaths outside of install directory (required patch)
- fix build -> link dependencies
- remove unused build dependency
* Fix flake8 errors
* Remove unused variable and make things python 2.x compatible
* Fix flake8
* Move compiler config from rocfft -> hipcc
* Remove redundant dependency on fftw-api
* Remove redundant import
* Avoid hitting the ROCM_PATH variable altogether with a patch; also just fill in all variables
* Add missing deps z3, zlib and ncurses+termlib to llvm-amdgpu
* Fix perl shebang and add dep
* Fix typo and patch HIP_CLANG_ROOT detection in hip's cmake files
* fixing build failure due to z3 and adding zlib for rocgdb
* new changes to add z3,curses dependency for llvm-amdgpu
* fix flake8 error
Co-authored-by: root <root@localhost.localdomain>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* Libtool: add spack external find support
* Less specific regex
* match -> search
* Clarify that min returns first alphabetically, not shortest
* Simplify version determination
The modifications in 193e8333fa
introduced a bug in the loading of compiler modules, since a
function that was expecting a list of strings was just getting
a string.
This commit fixes the bug and adds an assertion to verify the
prerequisite of the function.
* add py-ufl package from fenics
* add py-fiat package from fenics
* add py-ffcx package from fenics
* add py-dijitso package from fenics
* add dolfinx library from fenics
* amend ffcx to use ufl and fiat master branches
* setup variants complex and int64 of dolfinx
* add dolfinx python library as package
* add test dependencies to py-dolfinx
* remove broken doc variant
* remove test dependencies from py-dolfinx
* flake8 fixes to dolfinx and py-dolfinx
* make sure dolfinx cmake picks up the correct python version
* list build phases in py-dolfinx package
* remove unnecessary package url
* make pkgconf a build dependency
* make all python dependencies build+run
* py-ffcx needs py-setuptools to be a build/run dependency to support ffcx executable
* remove unnecessary variants from dolfinx
* add missing dependencies to py-dijitso
* remove stray line from py-dolfinx
* simplify definition of build_directory in py-dolfinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-ffcx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-fiat
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-ufl
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* rename py-fiat to py-fenics-fiat
* rename py-ufl to py-fenics-ufl
* fix error in depends_on(petsc) definition
* add missing dep on numpy to py-fenics-fiat
* specify python@3.8: as requirement for all fenics components
* use tuples rather than list for depends_on type=
* specify eigen@3.3.7: as dependency for dolfinx
* add js947 and chrisrichardson as maintainers for the fenics packages
* remove scipy dependency from py-dolfinx
* rename package py-ffcx -> py-fenics-ffcx
* rename package dolfinx -> fenics-dolfinx
* rename package py-dolfinx -> py-fenics-dolfinx
* remove pointless URL from py-fenics-dolfinx package
* rename package py-dijitso -> py-fenics-dijitso
* formatting
* remove unnecessary cmake args from fenics-dolfinx
* revert py-fenics-fiat python version to 3:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* revert py-fenics-ufl python version to 3.5:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add conflict to fenics-dolfinx for C++17 support
* revert py-fenics-ffcx python version to 3.5:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pbbam: fix build error
* Update var/spack/repos/builtin/packages/pbbam/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Packages can implement `determine_version` to support detection
of external instances of a package. This is generally easier
than implementing `determine_spec_details`. The API for
`determine_version` is similar: for example you can return
`None` to indicate that an executable is not an instance
of a package.
Users may implement a `determine_variants` method for a package.
When doing external detection, executables are grouped by version
and each group results in a single invocation of `determine_variants`
for the associated spec. The method returns a string specifying
the variants for the package. The method may additionally return
a dictionary representing extra attributes for the package.
These will be stored in the spec yaml and can be retrieved
from `self.spec.extra_attributes`.
The Spack GCC package has been updated with an implementation
of `determine_variants` which adds the following extra
attributes to the package: c, cxx, fortran.
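A hedged sketch of a package using this API, loosely modeled on the GCC case (executable patterns, flags, and the returned variants are illustrative):

```python
class Gcc(Package):
    # Executable names that might be external instances of this package.
    executables = [r'gcc', r'g\+\+', r'gfortran']

    @classmethod
    def determine_version(cls, exe):
        output = Executable(exe)('-dumpversion', output=str, error=str)
        return output.strip() or None  # None: not an instance of this package

    @classmethod
    def determine_variants(cls, exes, version_str):
        # Called once per version group; the dict ends up in
        # self.spec.extra_attributes.
        languages, compilers = set(), {}
        for exe in exes:
            if 'g++' in exe:
                languages.add('c++')
                compilers['cxx'] = exe
            elif 'gfortran' in exe:
                languages.add('fortran')
                compilers['fortran'] = exe
            else:
                languages.add('c')
                compilers['c'] = exe
        variants = 'languages={0}'.format(','.join(sorted(languages)))
        return variants, {'compilers': compilers}
```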
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).
With this update, Spack cannot modify existing configs/environments
without updating them (e.g. “spack config add” will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
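Illustratively, the new format groups each external under a single entry that can carry a list of modules (hedged; see `spack config update` and the docs of the Spack version in use for the exact schema):

```yaml
packages:
  mpich:
    externals:
    - spec: mpich@3.3.2
      prefix: /usr
    - spec: mpich@3.4
      modules:
      - mpich/3.4
      - hwloc/2.2.0
```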
Older versions do not compile correctly. New users should use 2.004,
not any of the older versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
FlatCC has been removed from UnifyFS as a dependency on the develop
branch and for future releases.
spath is now an optional dependency for UnifyFS to normalize relative
paths provided by the user.
The tcl module for r-dorng will fail to load due to the [] characters in
the description. This happens for Tcl formatted modules loaded by Lmod
at least.
```
module load r-dorng-1.7.1-gcc-9.2.0-wtq7bne
Lmod has detected the following error: .../spack/share/spack/modules/linux-centos7-broadwell/r-dorng-1.7.1-gcc-9.2.0-wtq7bne:(r-dorng-1.7.1-gcc-9.2.0-wtq7bne):
invalid command name "L'Ecuyer"
```
Split text for short and long descriptions.
* Add variants to petsc
This PR adds the following variants to the petsc package
- gmp
- jpeg
- libpng
- giflib
- mpfr
- netcdf
- pnetcdf (parallel-netcdf)
- moab
- eigen
- random123
- exodusii
- mstk
- cgns
- memkind
- muparser
- p4est
- saws
- libyaml
- zstd
* Fix flake8 errors
* Additional changes to Petsc recipe
This commit addresses the issues with dependencies that were brought up
in the comments. There are also a few other enhancements.
- the language of the new variant descriptions was changed to be more
consistent with what was already in the recipe
- an explicit '+mpi' was added to the depends_on('hypre...') directives
- an explicit '+mpi' was added to the depends_on('trilinos...')
directives
- the run time error checking for '~mpi' was replaced with 'conflicts()'
directives that will cause the install to fail sooner
- additional variants that were 'parallel only' were added to the '~mpi'
check
* Set the '~mpi' conflicts msg to a variable
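The conflicts() pattern described above, sketched with an illustrative subset of the variants:

```python
# Fail at concretization time instead of at run time when a
# parallel-only feature is requested together with ~mpi.
mpi_msg = 'requires +mpi'
for parallel_only in ('pnetcdf', 'p4est', 'exodusii', 'moab'):
    conflicts('+' + parallel_only, when='~mpi', msg=mpi_msg)
```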
* Changing raja, chai, and umpire packages so all will compile with each other.
* Need a CUDA version of CHAI when compiling with raja+cuda+chai
* Updating checks for commit.
* Adding comments explaining why chai+umpire tests were disabled
* Reactivating tests for CHAI and Umpire
* reordering versions
* Unified handling of Cuda Arch
* Adding latest versions
* Unused/Untested: removed
* Aesthetic and test mode in Chai
* Unified handling of Cuda Arch
* Using 'ON' consistently, instead of 'On'
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix, suggestion and patch:
Chai depends on RAJA, not the other way.
Apply suggested master-main version mapping.
Add Umpire version 3.0.0 and patch.
Co-authored-by: Robert Blake <blake14@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package - REDItools
This PR adds the REDItools package, along with a new package dependency,
py-fisher. This contains a patch generated from the python 2to3 script
as well as some other fixes. I am not sure if the project is ready to
support python-3 yet but I submitted the other patches upstream.
* Update var/spack/repos/builtin/packages/reditools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new version 2020.3; new variants nosuffix and fft; version selections for plumed
* fixed too long lines
* fixed whitespaces
* revised fft interface according to @haampie 's suggestions
Co-authored-by: lu64bag3 <gerald.mathias@lrz.de>
* new package Wonton
* remove the flecsi variant because flecsi-sp does not have a spackage
* fix url, clean up whitespaces
* formatting
* put in explicit else clauses for variants in CMake section because CMake's behavior is system-dependent
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* update version: intel packages daal, ipp, mkl-dnn, mkl, mpi, parallel-studio, pin, tbb and makes url parameter consistent and always use single quote.
* Fixes a typo in one of the sha256 checksums.
* Adds version entries for new versions of Intel packages.
* Adds hashes for new versions of Intel packages.
* Adds missing hash of Intel compiler.
* Adds the newest version of Intel MPI 2019.8.
* Fixes hash for intel-parallel-studio and intel-tbb.
* Fixes version number of Intel MPI.
* Adds GPI-2 package.
* Fixes flake8 noticed issues.
* Second try to fix flake8 comment
* Fixes some issues adamjstewart noticed.
* Fixes package according to flake8 complaints.
* Fixes flake8 issue.
* Renames next version to master and removes master.
* Adds maintainer into gpi-2 and returns master branch for the git
repository.
Co-authored-by: Robert Mijakovic <robert.mijakovic@lrz.de>
* Dyninst: 10.2 release
* Use 'elf' instead of 'elfutils'
* Use v10.2.0 tag
* Change minimum elfutils to 0.173
* Move STERILE_BUILD option to correct cmake_args
* make a sacrifice to the flake8 gods
* Add maintainer
* Revert to using elf@1 for elfutils
* Allow all ParaView versions to depend on Python 2
* Keep conflict for 5.9 and up with python 2
* Fix line too long
* Don't use backslash
* Try fixing indent
* Clean logic for python cmake flags
* Try fixing indent
Previously the python package for vim used static linking; depending on
what system libraries were available and linked against, this could cause
symbol conflicts for python, leading to segfaults when loading C modules
from the standard library (e.g. heapq). This patch addresses the issue by
dynamically linking them.
If you use git to clone a repository over ssh, git transfers control to
the ssh binary available on your path. If that ssh binary was built
against a contradictory version of openssl/kerberos, then your git
commands will fail.
* sirius, update versions, fixes, add missing options
- sirius/spfft: depend on fftw-api
- cleanup +shared option
- sirius add option for memory pool
- sirius add version 6.5.3 and 6.5.4
- sirius: add spfft dependency for @master, @develop
* add nlcglib package
Robust wave function optimization for SIRIUS.
* add q-e-sirius package
based on q-e package
* Update var/spack/repos/builtin/packages/q-e-sirius/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* nlcglib: pass nvcc_wrapper to cmake
* Add 6.5.6
* Make flake8 happy
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* canu: fix depends issue & using java instead of jdk
* Update var/spack/repos/builtin/packages/canu/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* typo error correction
* Adding recipe for `colorspacious` (a python package)
* Copyright year changed
* revert last commit on basic_usage.rst
* better with a good description
* fix according to failed test
* Update var/spack/repos/builtin/packages/py-colorspacious/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-colorspacious/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Nightly builds with macOS started failing again
due to an upgrade of the default virtual environment
that now uses Python 3.8.
This makes us hit #14102 and every build fails. This
commit should be reverted along with the fix to #14102.
* Additional versions of py-jsonschema.
* Tweak to force Maestro to use jsonschema@3.2.0:
* Correction of whitespace (flake8 error).
* Merges importlib's Python version conditions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new versions of spfft
* Extend CudaPackage and use virtual fftw package
Co-authored-by: Simon Pintarelli <simon.pintarelli@cscs.ch>
* Add CUDA 11 compatibility note
* Depend on older cuda <= 10 for spfft <= 0.9.11
Co-authored-by: Simon Pintarelli <simon.pintarelli@cscs.ch>
* introduce logic for boost+context dependency and generic_context variant
* fix OTF2 instrumentation minor problem
* default coroutine impl depends on platform
* fix flake8
* add reference to ~generic_coroutines conflict info
* Update var/spack/repos/builtin/packages/hpx/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Octopus: Add support for version 10.0.
Fix compilation when using the MKL as a provider for BLAS/LAPACK. Octopus will now detect that the MKL also provides the FFTW API and will refuse to compile when both the FFTW library and the MKL are given to the configure script.
* Octopus: Add supported version range for libxc.
* berkeley-db: add version 18.1.40, update build options in package
* combine adamjstewart's changes
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] New package
* [kassiopeia] Remove master branch, update dependencies
* Update var/spack/repos/builtin/packages/kassiopeia/package.py
Unable to test since I do not have a license to intel-parallel-studio, but I see no reason why it would not work.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] depends_on mpi
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] cmake_args with self.spec.satisfies and elses
* [kassiopeia] args.extend -> args.append
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* h5py: explicitly specify version
hdf5@1.10.5 on Cray is wrongly detected as 1.8.4.
* Update var/spack/repos/builtin/packages/py-h5py/package.py
Thanks. Also had this first, then CI was complaining about line length ...
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Before:
```console
$ licensee diff --license mit LICENSE-MIT
Comparing to MIT License:
Input Length: 1092
License length: 1020
Similarity: 92.46%
diff --git a/LICENSE b/LICENSE
index 0ce42af..be0ff1c 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,3 +1,4 @@
{+spack project developers. see the top-level copyright file for details.+}
permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "software"), to deal in
the software without restriction, including without limitation the rights to
```
After:
```console
$ licensee diff --license mit LICENSE-MIT
Comparing to MIT License:
Input Length: 1020
License length: 1020
Similarity: 100.00%
Exact match!
```
This gets us a 100% license match from GitHub's `licensee` tool.
CI is currently failing on brew update with the error:
```
Error: Cannot install bazelisk because conflicting formulae are installed.
bazel: because Bazelisk replaces the bazel binary
Please `brew unlink bazel` before continuing.
Unlinking removes a formula's symlinks from /usr/local. You can
link the formula again after the install finishes. You can --force this
install, but the build may fail or cause obscure side effects in the
resulting software.
```
Avoiding:
```
$ brew update
$ brew upgrade
```
solves the issue by preventing the risk of conflicting formulae
* Update LBANN, Hydrogen, Aluminum to inherit CudaPackage
* Update CMake constraints: LBANN, Hydrogen, and Aluminum now require
cmake@3.16.0: (better support for pthreads with nvcc)
* Aluminum: add variants for host-enabled MPI and RMA features in a
MPI-GPU RDMA-enabled library
* NCCL: add versions 2.7.5-1, 2.7.6-1, and 2.7.8-1
* Hydrogen: add version 1.4.0
* LBANN: add versions 0.99 and 0.100
* Aluminum: add versions 0.4.0 and 0.5.0
* new package(s): py-gql
and related dependencies:
py-aiohttp
py-async-timeout
py-graphql-core
py-idna-ssl
py-multidict
py-websockets
py-yarl
new versions:
py-requests
* fixes
Co-authored-by: Andrew W Elble <aweits@skl-a-00.rc.rit.edu>
* NWChem 7.0.0
* add python2 for 6.8.1. removed 6.8 https://github.com/spack/spack/pull/17779#discussion_r462700413
* nwchem 6.8.1 breaks with gcc 10 and later
* restored extra python bits for version 6.8.1. add env. definition of basis libraries
* changes for flake8
* url fixed
* prevent 6.8.1 being compiled with gcc 10
* Ferret: Add missing dependency with curl.
* Ferret: Don't force using the static version of libgfortran.
* Ferret: Ensure Spack's compiler wrappers are used.
This allows properly setting the rpaths.
* Ferret: Add support for versions 7.3 to 7.6.
* Ferret: Add a variant to install Ferret standard datasets.
* Ferret: Define some useful runtime environnement variables.
* Ferret: Fix flake8.
Also add myself as a maintainer as suggested by @alalazo.
As discussed in issue #17638, wherein kahip fails to build when
scons is dependent on python@3.
This converts the print statements in various SConstruct files
into python3 friendly print functions.
I found most of the affected SConstruct files in both @2.00 and
the later versions I found on the web, but some files were only in @2.00.
I split the patches into two files for that reason, but have not
tried the later versions.
* LAMMPS: Use LATTE 1.2.2 starting with version 20200602.
Versions 20200602 and later require LATTE 1.2.2. This caused the internal LATTE distribution to be used instead of the LATTE install provided by Spack.
* LAMMPS: Add new versions 20200630 and 20200721.
* dcmtk: fixed type error
* Update var/spack/repos/builtin/packages/dcmtk/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: IDL
* Update var/spack/repos/builtin/packages/idl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/idl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added license header and changed url_for_version to just url
* removed unused imports, addressed comments
* removed trailing whitespace on line 14
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
During configure, lhapdf5 searches for python. On one system
I tested (Ubuntu 19.10) it finds a system-installed python3
and fails to create the python extension.
The variant is named to make explicit that this is only a python2 extension.
* openfoam: use MPI 'headers' property (fixes #17730)
* openfoam: +spdp variant, usable for OpenFOAM 1906 and later
in contrast to +float32, which uses single-precision throughout, +spdp
uses the following:
- single-precision for most internals
- double-precision for linear solver
* openfoam: add m4 as build dependency
* scotch: update to 6.0.9 released Oct 2019
Co-authored-by: Mark Olesen <Mark.Olesen@esi-group.com>
Libunwind already builds a shared library. The +pic variant adds the
compiler pic flag to the static archive so that it can be linked into
another shared library.
Eospac's build breaks on gcc@10: due to its dependence on -fcommon
behavior and GNU changing the default to -fno-common. Added a conditional
argument to support bleeding-edge compilers.
Relative paths in views have been broken since #17608 or earlier.
- [x] Fix by passing base path of the environment into the `ViewDescriptor`.
Relative paths are calculated from this path.
This PR adds the r-dss package and the r-bsseq package, also new, as a
dependency. This includes the latest versions, which required updates to
the following dependencies:
- r-biocgenerics
- r-iranges
- r-s4vectors
- r-summarizedexperiment
Older versions of r-dss and r-bsseq are included as well to ensure
compatibility with older versions of the above dependencies.
* add tutorial setup script to share/spack
* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
- now works on t2.micro, t2.small, and m instances
- apt-get needs retries around it to work
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
A bug was introduced in #13100 where ChildErrors would be redundantly
printed when raised during a build. We should eventually revisit error
handling in builds and figure out what the right separation of
responsibilities is for distributed builds, but for now just skip
printing.
- [x] SpackErrors were designed to be printed by the forked process, not
by the parent, so check if they've already been printed.
- [x] update tests
* WHIZARD: add versions 2.8.4 and 2.8.3
* New package: LCIO
* WHIZARD: add optional dependency on LCIO
* WHIZARD: add optional dependency on Openloops
* WHIZARD: allow building with either hepmc or hepmc3 dependencies
* Openloops: set process_lib_dir in configure
* Openloops: fix reference to variant
astropy 3.2.1 fails to build with python 3.8.3 with
errors similar to this:
astropy/stats/_stats.c:318:11: error: too many arguments to function 'PyCode_New'
PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
These are files that are generated by cython, but are included in the
tarball. Since there's apparently been an API change to PyCode_New, they will
need to be re-cythonized to compile correctly.
Fixes #17299
Cray Shasta systems appear to use an unmodified SLES or other Linux operating system on the backend (like Cray "Cluster" systems and unlike Cray "XC40" systems that use CNL).
This updates the OS detection to properly report the underlying Linux distribution as the OS instead of CNL, delegating to LinuxDistro.
* environment-views: fix bug where missing recipe/repo breaks env commands
When a recipe or a repo has been removed from Spack and an environment
is active, it causes the view activation to crash Spack before any
commands can be executed. Further, the error message is not at all clear
in explaining the issue.
This forces view regeneration to always start from scratch to avoid the
missing package recipes, and defaults add_view=False in main for views activated
by the `spack -e` option.
* add messages to env status and deactivate
Warn users that a view may be corrupt when deactivating an environment
or checking its status while active. Updated message for activate.
* tests for view checking
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* switch from bool to int debug levels
* Added debug options and changed lock logging to use more detailed values
* Limit installer and timestamp PIDs to standard debug output
* Reduced verbosity of fetch/stage/install output, changing most to debug level 1
* Combine lock log methods; change build process install to debug
* Changed binary cache install messages to extraction messages
* [M4] Add missing compiler flag on Cray Compiler
The new versions of the Cray compiler are based on Clang, which means we
need to add the same LDFLAG as other clang environments.
* bugfix: make compiler preferences slightly saner
This fixes two issues with the way we currently select compilers.
If multiple compilers have the same "id" (os/arch/compiler/version), we
currently prefer them by picking this one with the most supported
languages. This can have some surprising effects:
* If you have no `gfortran` but you have `gfortran-8`, you can detect
`clang` that has no configured C compiler -- just `f77` and `f90`. This
happens frequently on macOS with homebrew. The bug is due to some
kludginess about the way we detect mixed `clang`/`gfortran`.
* We can prefer suffixed versions of compilers to non-suffixed versions,
which means we may select `clang-gpu` over `clang` at LLNL. But,
`clang-gpu` is not actually clang, and it can break builds. We should
prefer `clang` if it's available.
- [x] prefer compilers that have C compilers and prefer no name variation
to variation.
* tests: add test for which()
Apple's gcc is really clang. We previously ignored it by default but
there was a regression in #17110.
Originally we checked for all clang versions with this, but I know of
none other than `gcc` on macos that actually do this, so limiting to
`apple-clang` should be ok.
- [x] Fix check for `apple-clang` in `gcc.py` to use version detection
from `spack.compilers.apple_clang`
* MacOS build tests
- Run on PR that modify the YAML file of the workflow
- Don't clone Spack, since we are in the Spack repo now
* Try to add opengl to configuration to build jupyter
* fixup
Spack did not support usage of the `--config-scope` option in
combination with an environment: In `lib/spack/spack/main.py`,
`spack.config.command_line_scopes` is set equal to any config scopes
passed by the `--config-scope` option. However, this is done after
activating an environment. In the process of activating an environment,
the `spack.config.config` singleton is instantiated, so later setting of
`spack.config.command_line_scopes` is ignored.
This commit sets command line scopes before activating an environment to
ensure that they are included in the configuration.
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
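The fix, roughly (simplified, with illustrative helper names):

```python
# Set command-line scopes *before* anything activates an environment,
# because activation instantiates the spack.config.config singleton
# and later assignments would be ignored.
if args.config_scopes:
    spack.config.command_line_scopes = args.config_scopes

env = find_environment(args)       # illustrative helper
if env:
    activate_environment(env)      # illustrative helper
```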
* Add new versions of texlive and poppler.
* Add new versions of harfbuzz which also relocated source location to github.
* Update var/spack/repos/builtin/packages/harfbuzz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Restore deleted url line in harfbuzz.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Addition of Chainmap to satisfy Maestro dependency.
* Additional versions and dependencies for Maestro.
* Updated URL to point to pypi.
* Updates to chainmap hashes.
* Updates to pull version from PyPi.
* Corrections to flake8 errors.
* Stricter restrictions on Python versioning.
Maestro actually supports Python 3.5 and later.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Only install chainmap for Python2 versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removal of setuptools python cond.
* Removal of version constraints on setuptools.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
GCC 4.8.5 on rhel6:
```
utext.cpp:572:5: error: 'max_align_t' in namespace 'std' does not name a
type
std::max_align_t extension;
^
utext.cpp: In function 'UText* utext_setup_67(UText*, int32_t,
UErrorCode*)':
utext.cpp:587:73: error: 'max_align_t' is not a member of 'std'
spaceRequired = sizeof(ExtendedUText) + extraSpace -
sizeof(std::max_align_t);
^
utext.cpp:587:73: note: suggested alternative:
In file included from
/projects/spack/opt/spack/gcc-4.4.7/gcc/6ln2t7b/include/c++/4.8.5/cstddef:42:0,
from utext.cpp:19:
/projects/spack/opt/spack/gcc-4.4.7/gcc/6ln2t7b/lib/gcc/x86_64-unknown-linux-gnu/4.8.5/include/stddef.h:
425:3: note: 'max_align_t'
} max_align_t;
^
utext.cpp:598:57: error: 'struct ExtendedUText' has no member named
'extension'
ut->pExtra = &((ExtendedUText *)ut)->extension;
^
g++ ... loadednormalizer2impl.cpp
g++ ... chariter.cpp
```
```
The `spack-build-env.txt` file may contains many secrets, but the obvious one is the private signing key in `SPACK_SIGNING_KEY`. This file is nonetheless uploaded as a build artifact to gitlab. For anyone running CI on a public version of Gitlab this is a major security problem. Even for private Gitlab instances it can be very problematic.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* Initial version of PySCF.
* Add master branch to xcfun library
* PySCF only compatible with specific commit of xcfun library
* Update var/spack/repos/builtin/packages/py-pyscf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyscf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Revert "PySCF only compatible with specific commit of xcfun library"
This reverts commit 8296005400.
* Revert "Add master branch to xcfun library"
This reverts commit f2b6998931.
* Issues conflict for xcfun library version rather than relying on a random commit.
* Add version xcfun 2.0.0a2 which is needed by PySCF.
* Remove xcfun conflict and express dependency more explicitly. Add comment as to why this is necessary.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* wannier90: add versions 3.0.0 and 3.1.0 and 'shared variant'
Added versions 3.0.0 and 3.1.0
Added shared variant
Added url_for_version function as versions less than 3 are from the
wannier.org site and versions 3 and up are from github.com
Added the MPI libraries to the list of libs substituted into the make.sys file
in place of @LIBS
Made it possible to build a shared object version of the library for versions
< 3 by filtering the src/Makefile.2 file (based off of the patch from a src rpm
from RHEL for version 2.0.1)
Create a modules directory in the install prefix root directory and copy the
Fortran .mod files there.
Set the MPIFC variable to the Spack Fortran MPI compiler wrapper.
* abinit: added 'wannier90' variant which enables building abinit with wannier90
Added wannier90 variant
Made abinit depend on the shared object ('shared') variant of
wannier90 if the wannier90 variant is selected
Add configure args for wannier90 libs, includes, and binaries and to
set MPIFC
set the dft-flavor to wannier90 when wannier90 is enabled and only
set the dft flavor to 'atompaw+libxc' if wannier90 is not selected
* Update var/spack/repos/builtin/packages/abinit/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* incorporated bbecker's suggestion for making the strings less ugly!
* incorporated bbecker's suggestion to fix the logic for picking which
"DFT flavor" configure argument to pass.
If the wannier variant is enabled, it passes --with-dft-flavor=wannier90
to configure; otherwise it passes --with-dft-flavor=atompaw+libxc.
* Changed to using plain strings
* Fixed version tests
* incorporated @adamjstewart's fix for testing if the major version is > 2
* incorporated @adamjstewart's fix to check if mpi is enabled and
only set the MPIFC variable if it is.
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Only set MPIFC if '+mpi' is set
* incorporated fixes from @adamjstewart including:
- using the string=True argument to filter_file (and removed the unneeded
escapes)
- changing the url to the github location
- fixing the version checks
- building a libwannier.dylib on darwin
* incorporated fixes suggested by @adamjstewart including:
- using the string=True argument to filter_file and cleaned up the escapes
- only pass the MPIFC argument to configure when '+mpi' is set
- changed the url to the github site for Wannier90
- fixed the version checks
- build a 'libwannier.dylib' file when building the shared variant on darwin
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* moved a configure argument from its own '+mpi' check to under the lower one
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Cleaned up syntax as suggested by @adamjstewart
It looks *so much better* now! Thanks!
* removed unneeded import of 'find' from 'llnl.util.filesystem' package
as suggested by @adamjstewart
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* incorporated changes from @adamjstewart
changed check to "if '@:2 +shared' in spec:" instead of a nested check of '@:2' and
'+shared'
removed unneeded joins used in filter_file and spliced the list of objs directly into
the filter_file call
used the dso_suffix instead of testing for darwin to determine the name of the
shared library
* removed whitespace from blank line
* fixed bug with '../../wannier90.x: .*' not being treated as a regexp. Thanks Adam!
* fixed missing whitespace when modifying Makefile.2
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: OpenLoops
* install() for openloops
* Working OpenLoops recipe
* Flake-8
* Only copy collection file if required; add clarification to num_jobs
* Add __future__ import just in case
* Fix missing space
* Remove __future__ import
* Changes from review, pt. 1
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Replace print() with write()
* Flake-8
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* vasp: New package.
* Remove unneeded `#noqa`
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removed a completely needless tty.debug()
* Add compiler conflicts() and minute fixes
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add apcomp package
* add maintainers
* flake8
* Update var/spack/repos/builtin/packages/apcomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* review suggestions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* icu4c: Add new versions for older releases.
The old URLs for versions 60.1, 58.2 and 57.1 do not work anymore so add the versions available on Github.
The old versions are kept for reference (cf. #15896).
* icu4c: Add versions 66.1 and 67.1.
* icu4c: Fix compilation of versions 58 and 59 with recent glibc.
This matches the current latest version of protobuf in Spack.
Generally the version of py-protobuf and protobuf should match,
but this constraint is not currently recorded in py-protobuf.
For normal users, `-o` or `--no-same-owner` (GNU extension) is
the default behavior, but for the root user, `tar` attempts to preserve
the ownership from the tarball.
This makes `tar` use `-o` all the time. This should improve untarring
files owned by users not available in rootless Docker builds.
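Sketched with subprocess (Spack wraps tar in its own Executable class, so this is illustrative):

```python
import subprocess


def untar(archive_file, dest='.'):
    # -o (--no-same-owner) is passed unconditionally, so even root never
    # tries to restore the ownership recorded in the tarball.
    subprocess.check_call(['tar', '-oxf', archive_file, '-C', dest])
```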
* gdk-pixbuf: Add new stable versions.
* gdk-pixbuf: Add a missing dependency with libx11.
Also add a variant disabled by default to make it optional since it is considered deprecated
(cf. 3362e94c25).
* Added new versions to magics and began to set not-so-optional netcdf dependency
* Added enforced netcdf dependency
* Fix also works for version 4.1.0
* llvm-flang Only build offload code if cuda enabled
The current version always executes `cmake(*args)` as part of the post-install step. If device offload is not part of the build, this results in referencing `args` without it being set, and the error:
```
==> Error: UnboundLocalError: local variable 'args' referenced before assignment
```
Looking at previous versions of `llvm-package.py`, this whole routine appears to be required only for offload, so indent `cmake/make/install` to be under the `if`.
* Update package.py
Add comment
The error message was not updated when the behavior of Spack environments
was changed to not automatically activate the local environment in #17258.
The previous error message no longer makes sense.
* bbcp: Update the URLs to use HTTPS.
The HTTP URLs do not work anymore.
* bbcp: Add missing libnsl dependency.
* bbcp: Rename the git-based version to match the branch name.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pysam: add LDFLAGS to curl
* Update var/spack/repos/builtin/packages/py-pysam/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: vbfnlo
* Add new package: vbfnlo
* Add recipe for looptools
* Add patch for looptools
* LoopTools: patch not needed (fixed by developers without changing version)
* Remove patch file as well
* Update package.py
* Update package.py
* Fix vbfnlo recipe for old version
Co-authored-by: iarspider <iarpsider@gmail.com>
When Spack installs a package, it stores repository package.py files
for it and all of its dependencies - any package with a Spack metadata
directory in its installation prefix.
It turns out this was too broad: this ends up including external
packages installed by Spack (e.g. installed by another Spack instance).
Currently Spack doesn't store the namespace properly for such packages,
so even though the package file could be fetched from the external,
Spack is unable to locate it.
This commit avoids the issue by skipping any attempt to locate and copy
from the package repository of externals, regardless of whether they
have a Spack repo directory.
* new package: ligra
* setup run environment
* tidy up
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`gcc` 9 and above have more warnings that break the `flatcc` build by default, because `-Werror` is enabled. This loosens the build up so that we can build with more compilers in Spack.
- [x] Add `-DFLATCC_ALLOW_WERROR=OFF` to `flatcc` CMake arguments
Co-authored-by: Frank Willmore <willmore@anl.gov>
fixes #17396
This prevents the class attribute from being inherited and
saves current maintainers from becoming the default
maintainers of every CUDA package.
We got rid of `master` after #17377, but users still want a way to get
the latest stable release without knowing its number.
We've added a `releases/latest` tag to replace what was once `master`.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Fixes #16478
This allows an uninstall to proceed even when encountering pre-uninstall
hook failures if the user chooses the --force option for the uninstall.
This also prevents post-uninstall hook failures from raising an exception,
which would terminate a sequence of uninstalls. This isn't likely essential
for #16478, but I think overall it will improve the user experience: if
the post-uninstall hook fails, there isn't much point in terminating a
sequence of spec uninstalls because at the point where the post-uninstall
hook is run, the spec has already been removed from the database (so it
will never have another chance to run).
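A rough sketch of the described behavior (hedged; hook names follow Spack's hooks module, the surrounding logic is simplified):

```python
try:
    spack.hooks.pre_uninstall(spec)
except Exception as error:
    if not force:
        raise
    tty.warn('pre-uninstall hook failed, forcing uninstall: %s' % error)

remove_from_database_and_prefix(spec)  # illustrative

try:
    spack.hooks.post_uninstall(spec)
except Exception as error:
    # Don't re-raise: the spec is already gone from the database, so a
    # failing hook should not terminate the remaining uninstalls.
    tty.warn('post-uninstall hook failed for %s: %s' % (spec.name, error))
```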
Notes:
* When doing spack uninstall -a, certain pre/post-uninstall hooks aren't
important to run, but this isn't easy to track with the current model.
For example: if you are uninstalling a package and its extension, you
do not have to do the activation check for the extension.
* This doesn't handle the uninstallation of specs that are not in the DB,
so it may leave "dangling" specs in the installation prefix
This PR creates a new spack package for
mumax: GPU accelerated micromagnetic simulator.
This uses the current beta version because
- it is somewhat dated, ~2018
- it is the only one that supports recent GPU kernels
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.
Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.
This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.
- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
* fix binutils deptype for gcc
binutils needs to be a run dependency of gcc
* Fix gcc+binutils build on RHEL7+
static-libstdc++ is not available with system gcc.
Anyway, as it is for bootstraping, we do not really care depending on
a shared libstdc++.
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
Spack was attempting to calculate abspath on the located config.guess
path even when it was not found (None); this commit skips the abspath
calculation when config.guess is not found.
* Add Rivet and YODA
* Add patches
* Flake-8
* Set level for Rivet patches
* Syntax fix
* Fix dependencies of Rivet
* Update var/spack/repos/builtin/packages/rivet/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The latest openblas version, 0.3.10, changed how Fortran libraries
are detected, and this broke Fujitsu compiler support.
This (new) openblas patch addresses that issue.
* dftbplus: New package.
* dftbplus: Addresses @adamjstewart's comments on PR #15191
* dftbplus: Fixes format() calls that slipped in previous commit.
* dftbplus: Appease flake8.
* dftbplus: Change 'url' and misc. fixes.
* Add a resource to do the job of './utils/get_opt_externals'
Also:
* Add url_for_version function
* Add Java to PATH for run environment
* Update `install` method to handle old and new version
Co-authored-by: lu64bag3 <gerald.mathias@lrz.de>
* examl: new package
* examl style fix
* examl flake8 fix
* Update var/spack/repos/builtin/packages/examl/package.py
using `working_dir`
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* hbase: refine url, java and version
* Update var/spack/repos/builtin/packages/hbase/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
about: Report a bug in the core of Spack (command not working as expected, etc.)
labels: "bug,triage"
---
<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran `spack find` to list all the installed packages and ..." -->
### Steps to reproduce the issue
```console
$ spack <command1> <spec>
$ spack <command2> <spec>
...
```
### Error Message
<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->
```console
$ spack --debug --stacktrace <command>
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have searched the issues of this repo and believe this is not a duplicate
- [ ] I have run the failing commands in debug mode and reported the output
<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack! -->
description: Report a bug in the core of Spack (command not working as expected, etc.)
labels: [bug, triage]
body:
  - type: textarea
    id: reproduce
    attributes:
      label: Steps to reproduce
      description: |
        Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
        Example: "I ran `spack find` to list all the installed packages and ..."
      placeholder: |
        ```console
        $ spack <command1> <spec>
        $ spack <command2> <spec>
        ...
        ```
    validations:
      required: true
  - type: textarea
    id: error
    attributes:
      label: Error message
      description: |
        If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect.
      placeholder: |
        ```console
        $ spack --debug --stacktrace <command>
        ```
  - type: textarea
    id: information
    attributes:
      label: Information on your system
      description: Please include the output of `spack debug report`
    validations:
      required: true
  - type: markdown
    attributes:
      value: |
        If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well.
  - type: checkboxes
    id: checks
    attributes:
      label: General information
      options:
        - label: I have run `spack debug report` and reported the version of Spack/Python/Platform
          required: true
        - label: I have searched the issues of this repo and believe this is not a duplicate
          required: true
        - label: I have run the failing commands in debug mode and reported the output
          required: true
  - type: markdown
    attributes:
      value: |
        We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
        If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
        Other than that, thanks for taking the time to contribute to Spack!
about: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: "build-error"
---
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install <spec>
...
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-out.txt]()
* [spack-build-env.txt]()
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [ ] I have uploaded the build log and environment files
- [ ] I have searched the issues of this repo and believe this is not a duplicate
description: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: [build-error]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report this build failure. To proceed with the report please:
        1. Title the issue `Installation issue: <name-of-the-package>`.
        2. Provide the information required below.
        We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
  - type: textarea
    id: reproduce
    attributes:
      label: Steps to reproduce the issue
      description: |
        Fill in the console output from the exact spec you are trying to build.
      value: |
        ```console
        $ spack spec -I <spec>
        ...
        ```
  - type: textarea
    id: error
    attributes:
      label: Error message
      description: |
        Please post the error message from spack inside the `<details>` tag below:
      value: |
        <details><summary>Error message</summary><pre>
        ...
        </pre></details>
    validations:
      required: true
  - type: textarea
    id: information
    attributes:
      label: Information on your system
      description: Please include the output of `spack debug report`.
    validations:
      required: true
  - type: markdown
    attributes:
      value: |
        If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well.
  - type: textarea
    id: additional_information
    attributes:
      label: Additional information
      description: |
        Please upload the following files:
        * **`spack-build-out.txt`**
        * **`spack-build-env.txt`**
        They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
  - type: markdown
    attributes:
      value: |
        Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and **@mention** them here if they exist.
  - type: checkboxes
    id: checks
    attributes:
      label: General information
      options:
        - label: I have run `spack debug report` and reported the version of Spack/Python/Platform
          required: true
        - label: I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
          required: true
        - label: I have uploaded the build log and environment files
          required: true
        - label: I have searched the issues of this repo and believe this is not a duplicate
          required: true
about: Suggest adding a feature that is not yet in Spack
labels: feature
---
<!--*Please add a concise summary of your suggestion here.*-->
### Rationale
<!--*Is your feature request related to a problem? Please describe it!*-->
### Description
<!--*Describe the solution you'd like and the alternatives you have considered.*-->
### Additional information
<!--*Add any other context about the feature request here.*-->
### General information
- [ ] I have run `spack --version` and reported the version of Spack
- [ ] I have searched the issues of this repo and believe this is not a duplicate
<!-- If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack! -->
description: Suggest adding a feature that is not yet in Spack
labels: [feature]
body:
  - type: textarea
    id: summary
    attributes:
      label: Summary
      description: Please add a concise summary of your suggestion here.
    validations:
      required: true
  - type: textarea
    id: rationale
    attributes:
      label: Rationale
      description: Is your feature request related to a problem? Please describe it!
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Describe the solution you'd like and the alternatives you have considered.
  - type: textarea
    id: additional_information
    attributes:
      label: Additional information
      description: Add any other context about the feature request here.
  - type: checkboxes
    id: checks
    attributes:
      label: General information
      options:
        - label: I have run `spack --version` and reported the version of Spack
          required: true
        - label: I have searched the issues of this repo and believe this is not a duplicate
          required: true
  - type: markdown
    attributes:
      value: |
        If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
        Other than that, thanks for taking the time to contribute to Spack!