Sun RPC support was optional starting with glibc version 2.26 and was removed in version 2.32. On newer operating systems (e.g., RHEL8) libtirpc is required to satisfy the xdr symbols.
* rocblas: make tensile dependencies conditional
* Remove rocm-smi from the rocblas dependency list
rocm-smi was added to the rocblas dependency list because Tensile was a
dependency of rocBLAS and rocm-smi was a dependency of Tensile. However,
that reasoning was not correct.
Tensile is composed of three components:
1. A command-line tool for generating kernels, benchmarking them, and
saving the parameters used for generating the best kernels
(a.k.a. "Solutions") in YAML files.
2. A build system component that reads YAML solution files, generates
kernel source files, and invokes the compiler to compile them into
code object files (*.co, *.hsco). An index of the kernels and their
associated parameters is also generated and stored in either YAML
or MessagePack format (TensileLibrary.yaml or TensileLibrary.dat).
3. A runtime library that will load the TensileLibrary and code object
files when asked to execute a GEMM and choose the ideal kernel for
your specific input parameters.
rocBLAS developers use (1) during rocBLAS development. This is when
Tensile depends on rocm-smi. The GPU clock speed and temperature must be
controlled to ensure consistency when doing the solution benchmarking.
That control is provided by rocm-smi. When building rocBLAS, Tensile is
used for (2). However, there is no need for control of the GPU at that
point and rocm-smi is not a dependency. At runtime, the rocBLAS library
uses Tensile for (3), but there is again no dependency on rocm-smi.
tl;dr: rocm-smi is a dependency of the Tensile benchmarking tool,
which is neither a build-time nor a runtime dependency of rocblas.
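A hedged sketch of how that split could be modeled in a recipe (the variant name is hypothetical; this is not the actual Tensile package):
```python
from spack.package import *


class Tensile(Package):
    """Sketch only: models the benchmarking tool (component 1) as a variant."""

    # Hypothetical variant: only the benchmarking workflow needs GPU
    # clock/temperature control, so only it pulls in rocm-smi.
    variant("benchmarking", default=False,
            description="Install the kernel benchmarking tool")
    depends_on("rocm-smi", when="+benchmarking", type="run")
```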
This PR contains several fixes for the kallisto package.
- create hdf5 variant as hdf5 is optional beginning with 0.46.2
- provide patch for 0.43 to link against libz
- provide patch for older versions to build against gcc-11 and up
- patch the code to build with autoconf-2.70 and up
Newer versions of botocore (>=1.23.47) support the full IOBase
interface, so the hacks added to supplement the missing attributes are
no longer needed. Conditionally disable the hacks if they appear to be
unnecessary based on the class hierarchy found at runtime.
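A minimal sketch of that runtime check (the helper name is hypothetical):
```python
import io

from botocore.response import StreamingBody


def needs_iobase_shim():
    # Newer botocore (>=1.23.47) derives StreamingBody from io.IOBase,
    # so the compatibility hacks are applied only when it does not.
    return not issubclass(StreamingBody, io.IOBase)
```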
* py-panaroo: new package
* moving panaroo to branch
* updated mizani, plotnine, and pystan versions and requirements
* made suggested fixes
* adding more requested fixes
* added new versions of statsmodels and httpstan
* py-torch: add version 0.23.0 and fix build on aarch64
* Add newer versions, fix build issues
* Fix tests
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add connection information to buildcache update command
Ensure that the s3 connection made when attempting to update the content
of a buildcache attempts to use the extra connection information
from the mirror creation.
* Add unique help for endpoint URL argument
Fix copy/paste error for endpoint URL help which was the same as
the access token
* Re-work URL checking for S3 mirrors
Nested bucket URLs would never match the string used for checking that
the mirror is the same, so switch the check used.
Sort all mirror URLs by length to have the most specific cases first
and see if the desired URL "starts with" the mirror URL.
* Long line style fixes
Add exceptions for long lines and fix other style errors
* Use format() function to rebuild URL
Use the format() function to rebuild the URL instead of crafting a
formatted string out of known values
* Add early exit for URL checking
When a valid mirror is found, break from the loop
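A minimal sketch of the matching logic described above (names are hypothetical):
```python
def find_matching_mirror(desired_url, mirror_urls):
    # Sort most specific (longest) first so a nested bucket URL matches
    # its own mirror instead of a shorter parent URL.
    for mirror_url in sorted(mirror_urls, key=len, reverse=True):
        if desired_url.startswith(mirror_url):
            return mirror_url  # early exit on the first valid mirror
    return None
```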
For a long time the module configuration has had a few settings that use
`blacklist`/`whitelist` terminology. We've been asked by some of our users to replace
this with more inclusive language. In addition to being non-inclusive, `blacklist` and
`whitelist` are inconsistent with the rest of Spack, which uses `include` and `exclude`
for the same concepts.
- [x] Deprecate `blacklist`, `whitelist`, `blacklist_implicits` and `environment_blacklist`
in favor of `exclude`, `include`, `exclude_implicits` and `exclude_env_vars` in module
configuration, to be removed in Spack v0.20.
- [x] Print deprecation warnings if any of the deprecated names are in module config.
- [x] Update tests to test old and new names.
- [x] Update docs.
- [x] Update `spack config update` to fix this automatically, and include a note in the error
that you can use this command.
Python's built-in tarfile support doesn't address some general
cases of malformed tarfiles that are already handled by the system
'tar' utility; until these can be addressed, use that exclusively.
Add support for CMake builds while preserving autotools support
for older versions of GDAL
* Add GDAL 3.5.0
* Remove GDAL 1
* Add support for new CMake build system
* Change defaults to build all recommended dependencies
* Simplify Autotools flag handling
* Determine version range for drivers
The goal of this PR is to make clearer where we need a package object in Spack as opposed to a package class.
We currently instantiate a lot of package objects when we could make do with a class. We should use the class
when we only need metadata, and we should only instantiate and use an instance of `PackageBase` at build time.
Modifications:
- [x] Remove the `spack.repo.get` convenience function (which was used in many places, and not really needed)
- [x] Use `spack.repo.path.get_pkg_class` wherever possible
- [x] Try to route most of the need for `spack.repo.path.get` through `Spec.package`
- [x] Introduce a non-data descriptor, that can be used as a decorator, to have "class level properties"
- [x] Refactor unit tests that had to be modified to reduce code duplication
- [x] `Spec.package` and `Repo.get` now require a concrete spec as input
- [x] Remove `RepoPath.all_packages` and `Repo.all_packages`
* fixed the cgal recipe and added the latest release.
* Update var/spack/repos/builtin/packages/cgal/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* updated cgal recipe to new URL for tarballs
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
There's a race condition in `remove()` as the lockfile is removed after
releasing the lock, which is a problem when another process acquires a
write lock during deletion.
Also simplify life a bit in multiprocessing when a file is possibly
removed multiple times, which currently is an error on the second
deletion, so the proposed fix is to make remove(...) idempotent and not
error when deleting non-existing cache entries.
Don't test for the existence of the lockfile, because Windows/Linux behavior differs.
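A sketch of an idempotent removal, assuming a plain os.unlink underneath:
```python
import errno
import os


def remove_cache_entry(path):
    # Deleting an entry that another process already removed is not an
    # error; only unexpected OS errors are re-raised.
    try:
        os.unlink(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
```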
Oversight in #31433: the non-phony `env` target was missing a file being
created for it, which can cause make to infinitely loop when including
multiple generated makefiles.
* Metall package: add dependency to GCC for build test
* Package Metall: add v0.17
* Package Metall: update the package file
* Update var/spack/repos/builtin/packages/metall/package.py
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
* Metall package: add v0.18 and v0.19
* Metall Package: add v0.20
* Metall package: add v0.21
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
When no default editor is installed and no environment variable is set,
which_string would return None and this would be passed to os.execv
resulting in a TypeError. The message presented to the user would be:
Error: execv: path should be string, bytes or os.PathLike,
not NoneType
This change checks that which_string has returned successfully before
attempting to execute the result, resulting in a new error message:
Error: No text editor found! Please set the VISUAL and/or EDITOR
environment variable(s) to your preferred text editor.
It's not strictly necessary, but I've also changed try_exec to catch
all errors rather than just OSErrors. This would have provided slightly
more context for the original error message.
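A sketch of the guard, assuming `which_string` returns None when nothing is found:
```python
import os

from spack.util.executable import which_string


def open_in_editor(filename, candidates=("nano", "vim")):
    exe = which_string(*candidates)  # None when no editor is available
    if exe is None:
        raise RuntimeError(
            "No text editor found! Please set the VISUAL and/or EDITOR "
            "environment variable(s) to your preferred text editor."
        )
    os.execv(exe, [exe, filename])
```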
Use 8 byte reals only when `precision=double` is set
The pre-defined compilation targets do not follow the usual behavior of
Makefiles. Compiler flags are set as strings (not Makefile variables) and as
such are not able to be overridden by environment variables. This patch changes
the behavior to the expected behavior of a Makefile such that `fflags` etc have
the desired effect.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
* [py-ipyparallel] setuptools.config.StaticModule moved
... in py-setuptools@61
* [py-ipyparallel] setuptools fix only added to release 8.2.1
https://github.com/ipython/ipyparallel/pull/680
* [py-ipyparallel] Fix typo
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
There were two choices: 1) remove '-p' from '-a' or 2) allow monkeypatching
the cleaning of the python cache since clean's test_function_calls isn't
supposed to be actually cleaning anything.
This commit supports the latter and adds a test case for `-p`.
* add recipe for improved-rdock
* style: fix format
* style: fix format again
* Fix location of the directory
* Fix copyright
* Fix according to the reviewer's comments.
* Revert "Fix according to the reviewer's comments."
This reverts commit 4877877daf.
* Fix according to the reviewer's comments.
* style: fix format
* Update var/spack/repos/builtin/packages/improved-rdock/package.py
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Yuichi Otsuka <otsukay@riken.jp>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* guppy: new package
* adding different hashes/urls
* working now
* guppy: new package
* ont-guppy: new package
* ont-guppy: new package
* ont-guppy: fixed style
- rocprim is a header-only library; its amdgpu_target variant is only
meaningful for its client executables, not the library itself
- comgr and llvm-amdgpu are merely indirect dependencies (via hip)
* Initial implementation of Omnitrace package
* Fix flake8 errors
* Fix run environment when +python
* String normalization and fix for build env when +tau
* Tweak to style
Release branches and tags run protected pipelines, and we noticed
that those pipelines were generating all jobs in the stack, even
when the mirror contained all the built specs and an up to date
index. The problem was caused because the override mirror was
not present in Spack's mirror configuration at the time when the
binary_distribution.update() method was called. This fixes that
by always adding the mirror override, if present, to the list of
configured mirrors.
* py-altair: new package
* Update var/spack/repos/builtin/packages/py-altair/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-altair/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-altair/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-altair: update
* Update var/spack/repos/builtin/packages/py-altair/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* boost: do not access the context-impl variant for versions below 1.65.0
* boost: check if spec.variants contains context-impl
* boost: improve error message when the context-impl variant causes a conflict
Executing spack solve boost@1.63.0 +context context-impl=fcontext
triggers the following error message:
==> Error: No version for 'boost' satisfies '@1.63.0' and '@1.79.0'
With this change, the error message becomes the following:
==> Error: Cannot set variant 'context-impl' for package 'boost' because the variant condition cannot be satisfied for the given spec
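A minimal sketch of the guard in the recipe's option handling (function name hypothetical):
```python
def context_impl_options(spec):
    # The variant only exists for boost@1.65.0:, so consult it only
    # when it is actually present on the spec.
    if spec.satisfies("@1.65.0:") and "context-impl" in spec.variants:
        return ["context-impl=%s" % spec.variants["context-impl"].value]
    return []
```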
We adopted the convention of putting binaries for each stack into
a dedicated mirror named after the directory in which the stack
(spack.yaml file) resides. This fixes the mirror url of the
radiuss-aws-aarch64 stack to follow that convention.
`make_target` can be used to instruct Spack to build one of the pre-defined make
targets in the MPAS makefile. Spack will guess the target based on the value of
`spack_fc`, but the user can override the target via the variant. E.g.
```
spack install mpas-model make_target=pgi-nersc
```
`precision` is used to set the `PRECISION` flag in the Makefile to {single,
double}. Default is double.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
* remove unhelpful comment
* Filter compiler duplicates while reading manifest
* more-specific version matching edited to use module-specific version (to avoid an issue where a user might add a compiler with the same version to the initial test configuration)
* Set R_LIBS_USER='' in dependent build environment
Despite R packages being installed with the --vanilla flag, which
ignores Rprofile and Renviron files, R will still set R_LIBS_USER if the
default directory exists. This could lead to pulling in packages from
that directory during the build which could cause the build to fail. To
avoid that, explicitly set R_LIBS_USER='' to ensure that packages from
the HOME/R directory are not in the library path.
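The fix is roughly of this shape in the R recipe:
```python
def setup_dependent_build_environment(self, env, dependent_spec):
    # Keep packages installed under ~/R out of the library path
    # during dependent builds.
    env.set("R_LIBS_USER", "")
```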
* Update var/spack/repos/builtin/packages/r/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* PythonPackage: add default libs/headers attributes
* Style fix
* libs and headers should be properties
* Check both platlib and include
* Fix variable reference
Building on #24639, this allows versions to be prefixed by `git.`. If a version begins `git.`, it is treated as a git ref, and handled as git commits are starting in the referenced PR.
An exception is made for versions that are `git.develop`, `git.main`, `git.master`, `git.head`, or `git.trunk`. Those are assumed to be greater than all other versions, just as their unprefixed counterparts are in other contexts.
This commit adds some changes which improve use of Spack-installed
oneAPI packages with Spack-generated modules, but not within Spack
(e.g. if you install some of these packages with Spack, then load
their associated modules to build other packages outside of Spack).
The majority of the PR diff is refactoring. The functional changes
are:
* intel-oneapi-mkl:
* setup_run_environment: update Intel compiler flags to RPATH the
mkl libs
* intel-oneapi-compilers: update the compiler configuration to RPATH
libraries installed by this package (note that Spack already handled
this when installing dependent packages with Spack, but this is
specifically to use the intel-oneapi-compilers package outside
of Spack). Specifically:
* inject_rpaths: this modifies the binaries installed as part of
the intel-oneapi-compilers package to RPATH libraries provided
by this package
* extend_config_flags: this instructs the compiler executables
provided by this package to RPATH those same libraries
Refactoring includes:
* intel-oneapi-compilers: in addition to the functional changes,
a large portion of this diff is reorganizing the creation of
resources/archives downloaded for the install
* The base oneAPI package renamed component_path, so several packages
changed as a result:
* intel-oneapi-dpl
* intel-oneapi-dnn
* extrae
* intel-oneapi-mpi:
* Perform file filtering in one pass rather than two
Allow `spack external find` (with no extra args) to proceed if the manifest file exists but
without sufficient permissions; in that case, print a warning. Also add a test for that behavior.
TODOs:
- [x] continue past any exception raised during manifest parsing as part of `spack external find`,
except for CTRL-C (and other errors that indicate immediate program termination)
- [x] Semi-unrelated but came up when discussing this with the user who reported this issue to
me: the manifest parser now accepts older schemas
See: https://github.com/spack/spack/issues/31191
nvc doesn't know the `-Wno-unknown-pragmas` flag. It doesn't hurt to
remove the specific warning flags, going from:
```
-O3 -Wall -Wextra -Wno-unknown-pragmas -Wcast-qual
```
to:
```
-O3 -Wall
```
fixes #30997
Instead of giving a penalty of 30 to all nodes when preferences
are not package specific, give a penalty of 100 to all targets
of a node where we have package specific preferences, if the target
is not explicitly preferred.
* Add the rocm tag to packages from the Radeon Open Compute Platform.
* Added more packages from the ROCm software distribution based on
reviewer feedback.
* Updated tags based on reviewer feedback
* add first version of py-pyrr package
* added license
* update license HEADER
* Remove preferred as there is only one version available
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Better specify dependency type
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove useless variable license
Co-authored-by: Jérôme Dubois <jerome.dubois@cea.fr>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixed a bug in the 'external find --all' command where the call failed
to find packages by both executable and library. The bug was that the
call `path.all_packages()` incorrectly turned the variable
`packages_to_check` into a generator rather than keeping it a list.
Thus the second call to `detection.by_library` had no work to do (see
the illustration after this entry).
* Fixed the help message for the find external and compiler commands as
well as others that used the `scopes_metavar` field to define where
the results should be stored in configuration space. Specifically,
the fact that configuration could be added to the environment was not
mentioned in the help message.
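A minimal illustration of why the second detection pass saw no packages:
```python
# A generator can only be consumed once; a list can be reused.
items = (x for x in range(3))
assert sum(1 for _ in items) == 3  # first pass sees everything
assert sum(1 for _ in items) == 0  # second pass has no work to do
```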
os.path.dirname was being used to compare compilers. If two compilers
are in the same directory then it will pick up the first one it encounters.
Compare the full compiler path instead.
When installing setuptools from sources in Spack, we might
get into weird failures due to the way we use pip.
In particular, for Spack it's necessary to install in a
non-isolated pip environment to allow using PYTHONPATH as a
selection method for all the build requirements of a
Python package.
This can fail when installing setuptools since there might
be a setuptools version already installed for the Python
interpreter being used, with different entry points than
the one we want to install.
Installing both pip and setuptools from wheels should
harden our installation procedure in the context of:
- Bootstrapping Python dependencies of Spack
- Using external Python packages
On Cray systems that use Cray Data Virtualization Service (DVS),
symlinks across filesystems are not allowed, either due to a bug, or
because they're simply not POSIX compliant [1].
Spack's OpenSSL package defaults to `certs=system` which comes down to
symlinking `/etc/ssl` in the Spack install prefix, triggering this
problem, resulting in mysterious installation failures.
Instead of relying on system certs, we can just use
`ca-certificates-mozilla`, and if users really need system certs, then
they're probably better off marking OpenSSL entirely as external.
[1] https://github.com/glennklockwood/cray-dvs
* Created package and added description
* Add py-markdown-include
* Created package
* Finished creating package
* Added py-md-environ
* Added build dependencies
* Added other deps
* Add python-markdown-math (#4)
* Created package and started to add info
* Removed unneeded global/install options
* Figured out version spec for markdown-math
* Removed type=build from unnecessary dependencies
* Removed unneeded install/global options, added version spec to dependency
* Added wscullin as interim maintainer for packages
* Fixed style issues
* Took care of trailing whitespace
* Removed comment line before imports
* Changed file charset to match other packages
* Update var/spack/repos/builtin/packages/py-ford/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-ford/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removed test dependency after review feedback
* Added new 6.1.12 version to py-ford
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add LevelZero variant to hwloc package
This permits the hwloc package to build with support for the
Intel Level Zero low-level layer, analogous to CUDA, ROCm, and OpenCL.
* Fix typo
Add variant +gssapi to enable authentication via Kerberos through GSSAPI
When openssh is installed at sites using Kerberos, openssh needs to auth
via Kerberos through GSSAPI in order to work in such environments.
* Update h5bench maintainers and versions
* Include version 1.1 for h5bench
* Correct release hash and set default version
* Update .tar.gz version
* Update HDF5 VOL async version and environment variable syntax
* added package gptune with all its dependencies: adding py-autotune, pygmo, py-pyaml, py-autotune, py-gpy, py-lhsmdu, py-hpbandster, pagmo2, py-opentuner; modifying superlu-dist, py-scikit-optimize
* adding gptune package
* minor fix for macos spack test
* update patch for py-scikit-optimize; update test files for gptune
* fixing gptune package style error
* fixing unit tests
* a few changes reviewed in the PR
* improved gptune package.py with a few newly added/improved dependencies
* fixed a few style errors
* minor fix on package name py-pyro4
* fixing more style errors
* Update var/spack/repos/builtin/packages/py-scikit-optimize/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* resolved a few issues in the PR
* fixing file permissions
* a few minor changes
* style correction
* minor correction to jq package file
* Update var/spack/repos/builtin/packages/py-pyro4/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixing a few issues in the PR
* adding py-selectors34 required by py-pyro4
* improved the superlu-dist package
* improved the superlu-dist package
* moree changes to gptune and py-selectors34 based on the PR
* Update var/spack/repos/builtin/packages/py-selectors34/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* improved gptune package: 1. addressing comments of tldahlgren in PR 26936; 2. adding variant openmpi
* fixing style issue of gptune
* changing file mode
* improved gptune package: add variant mpispawn which depends on openmpi; add variant superlu and hypre for installing the drivers; modified hypre package file to add a gptune variant
* fixing style error
* corrected pddrive_spawn path in gptune test; enforcing gcc>7
* fixing style error
* setting environment variables when loading gptune
* removing debug print in hypre/package.py
* adding superlu-dist v7.2.0; fixing an issue with CMAKE_INSTALL_LIBDIR
* changing site_packages_dir to python_platlib
* not using python3.9 for py-gpy, which causes failures due to dropped support of tp_print
* more replacement of site_packages_dir
* fixing a few dependencies in gptune; added a gptune version
* adding url for gptune
* minor correction of gptune
* updating versions in butterflypack
* added a version for openturns
* added a url for openturns
* minor fix for openturns
* Update var/spack/repos/builtin/packages/openturns/package.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* added version2.1.1 for butterflypack
* fixing a tag in butterflypack
* minor fix for superlu-dist
* removed file filter for superlu-dist
* Update var/spack/repos/builtin/packages/superlu-dist/package.py
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* superlu-dist: add version detection for the testing; removing unused lines
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* add test to verify fix works
* fix spec cflags/variants parsing test (breaking change)
* fix `spack spec` arg quoting issue
* add error report for deprecated cflags coalescing
* use .group(n) vs subscript regex group extraction for 3.5 compat
* add random test for untested functionality to pass codecov
* fix new test failure since rebase
Fix a bug introduced in #21720. `spack_json.dump()` calls `_strify()` on dictionaries to
convert `unicode` to `str`, but it constructs `dict` objects instead of
`collections.OrderedDict` objects, so in Python 2 (or earlier versions of 3) it can
scramble dictionary order.
This can cause hashes to differ between Python 2 and Python 3, or between Python 3.7
and earlier Python 3's.
- [x] use `OrderedDict` in `_strify`
- [x] add a regression test
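A sketch of the shape of the fix (not the exact Spack code):
```python
from collections import OrderedDict


def _strify_sketch(data):
    # Rebuild dictionaries as OrderedDict so key order (and therefore
    # hashes computed over the dumped JSON) matches across Python versions.
    if isinstance(data, dict):
        return OrderedDict(
            (_strify_sketch(k), _strify_sketch(v)) for k, v in data.items()
        )
    if isinstance(data, (list, tuple)):
        return [_strify_sketch(i) for i in data]
    return data
```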
* nfft, pnfft: fix detection of fftw variants (precision)
* nfft, pnfft: use fftw's selected_precisions; avoid repetitive calls to spec
Co-authored-by: Martin Lang <martin.lang@mpsd.mpg.de>
The "submodules" argument of the "version" directive can now accept
a callable that returns a list of submodules, in addition to the usual
Boolean values
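Usage looks roughly like this inside a package recipe (package details hypothetical):
```python
def pick_submodules(package):
    # Decide at fetch time which submodules to clone.
    if package.spec.satisfies("+extras"):
        return ["external/extras"]
    return []


version("1.2.0", tag="v1.2.0", submodules=pick_submodules)
```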
* Cancel running workflows automatically on PR update
* Add the last update later to check cancellation is working
* Use github.run_number instead of github.sha
* [py-sentry-sdk] audited dependencies and relisted extras as variants
* [py-sentry-sdk] added version 1.5.5
* [py-sentry-sdk] flake8
* [py-sentry-sdk]
- added "descriptions" to variants
- removed conflicts
- replaced with when statements in variants
* bootstrap: account for disabled sources
Fix a bug introduced in #30192, which effectively skips
any prescription on disabled bootstrapping sources.
* Add unit test to avoid regression
Fixes #31021
With #25185, we no longer default to using tar when we can't
determine the extension type, opting to fail instead.
This broke fetching for the pcre package, where we couldn't determine
the extension. To determine the extension, we were attempting to
extract it from the destination filename; however, this file name
may omit details of the origin URL that are required to determine the
extension, so instead we examine the URL directly.
This also updates the decompressor_for method not to set ext=None
by default: it must now always be set by the caller.
* Update prmon recipe, v3.0.2
Update recipe to v3.0.2, using external dependency
option for the build (as these are satisfied so easily with Spack)
* Remove "broken" 3.0.1 version
This one could not build properly with Spack, due to
missing submodule sources
The installation process of libint runs a Python script:
<69cc7b9bc6/export/cmake/CMakeLists.txt.export (L410)>.
If Python isn't explicitly listed as build dependency, system Python will be
picked up, which can cause troubles.
* Fix py-ray dependencies and build system
* Update var/spack/repos/builtin/packages/py-ray/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added documentation for unusual dependencies.
* Fixed npm preinstall script per @adamjstewart suggestion.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Most package installations include compressed source files. This
adds support for common archive types on Windows:
* Add support for using system 7zip functionality to decompress .Z
files when available (and on Windows, use 7zip for .xz archives)
* Default to using built-in Python support for tar/bz2 decompression
(note that Python tar documentation mentions preservation of file
permissions)
* Add tests for decompression support
* Extract logic for handling exploding archives (i.e. compressed
archives that expand to more than one base file) into an
exploding_archive_catch context manager in the filesystem module
* [py-httpx] python dependencies are type=('build', 'run')
* [py-httpx] py-wheel is now implied by PythonPackage
* [py-httpx] fixed older version dependencies
* [py-httpx] added version 0.22.0
Spack's staging logic constructs a file path based on a URL. The URL
may contain characters which are not allowed in valid file paths on
the system (e.g. Windows prohibits ':' and '?' among others). This
commit adds a function to remove such offending characters (but
otherwise preserves the URL string when constructing a file path).
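A minimal sketch of that sanitization (the character set shown is illustrative):
```python
import re


def path_safe_name(url_derived_name):
    # Drop characters that are invalid in Windows paths, otherwise
    # preserving the URL-derived string.
    return re.sub(r'[<>:"|?*]', "", url_derived_name)
```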
* add package: abacus
* rename
* fix some style bugs
* update package:abacus
* fix some style bugs
* format code
* Delete a line of useless comments
* update abacus
* Update package.py
add new version info and fix a bug on mkl
* update version info and fix a bug on mkl
* Update package.py
* fix style bugs
* trailing whitespace
Anticipate openPMD-api changes in the next major release that are
already in `dev` (aka Spack `develop`):
- C++17 requirement
- drop: `mpark-variant` public dependency
- add: `toml11` private dependency
Also add @franzpoeschel as co-maintainer for the Spack package.
Updates to improve Spack-generated modules for Intel oneAPI compilers:
* intel-oneapi-compilers set CC etc.
* Add a new package intel-oneapi-compilers-classic which can be used to
generate a module which sets CC etc. to older compilers (e.g. icc)
* lmod module logic now updated to treat the intel-oneapi-compilers*
packages as compilers
* acts-dd4hep: new package, separated from new acts@19.1.0
* acts-dd4hep: improved versioning
* acts-dd4hep: don't use curl | sha256sum
* acts: new variant `odd` for Open Data Detector
* acts-dd4hep: style changes
Add spack stacks targeted at Spack + AWS + ARM HPC User Group hackathon. Includes
a list of miniapps and full-apps that are ready to run on both x86_64 and aarch64.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Add two new stacks targeted at x86_64 and arm, representing an initial list of packages
used by current and planned AWS Workshops, and built in conjunction with the ISC22
announcement of the spack public binary cache.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Explicitly import package utilities in all packages, and handle the corresponding fallout.
This includes:
* rename `spack.package` to `spack.package_base`
* rename `spack.pkgkit` to `spack.package`
* update all packages in builtin, builtin_mock and tutorials to include `from spack.package import *`
* update spack style
* ensure packages include the import
* automatically add the new import and remove any/all imports of `spack` and `spack.pkgkit`
from packages when using `--fix`
* add support for type-checking packages with mypy when SPACK_MYPY_CHECK_PACKAGES
is set in the environment
* fix all type checking errors in packages in spack upstream
* update spack create to include the new imports
* update spack repo to inject the new import, injection persists to allow for a deprecation period
Original message below:
As requested @adamjstewart, update all packages to use pkgkit. I ended up using isort to do this,
so repro is easy:
```console
$ isort -a 'from spack.pkgkit import *' --rm 'spack' ./var/spack/repos/builtin/packages/*/package.py
$ spack style --fix
```
There were several line spacing fixups caused either by space manipulation in isort or by packages
that haven't been touched since we added requirements, but there are no functional changes in here.
* [x] add config to isort to make sure this is maintained going forward
Preferred targets are currently the only minimization criteria for Spack for which we allow
negative values. That means Spack may be incentivized to add nodes to the DAG if they
match the preferred target.
This PR re-norms the minimization criteria so that preferred targets are weighted from 0,
and default target weights are offset by the number of preferred targets per-package to
calculate node_target_weight.
Also fixes a bug in the test for preferred targets that was making the test easier to pass
than it should be.
* Call Numpy package's set_blas_lapack() and setup_build_environment() in Scipy package
* Remove broken link from comment
* Use .package attribute of spec to avoid import
This PR fixes several issues I noticed while trying to get Spack working on Apple M1.
- [x] `build_environment.py` attempts to add `spec['foo'].libs` and `spec['foo'].headers` to our compiler wrappers for all dependencies using a try-except that ignores `NoLibrariesError` and `NoHeadersError` respectively. However, The `libs` and `headers` attributes of the Python package were erroneously using `RuntimeError` instead.
- [x] `spack external find python` (used during bootstrapping) currently has no way to determine whether or not an installation is `+shared`, so previously we would only search for static Python libs. However, most distributions including XCode/Conda/Intel ship shared Python libs. I updated `libs` to search for both shared and static (order based on variant) as a fallback.
- [x] The `headers` attribute was recursively searching in `prefix.include` for `pyconfig.h`, but this could lead to non-deterministic behavior if multiple versions of Python are installed and `pyconfig.h` files exist in multiple `<prefix>/include/pythonX.Y` locations. It's safer to search in `sysconfig.get_path('include')` instead.
- [x] The Python installation that comes with XCode is broken, and `sysconfig.get_paths` is hard-coded to return specific directories. This meant that our logic for `platlib`/`purelib`/`include` where we replace `platbase`/`base`/`installed_base` with `prefix` wasn't working and the `mkdirp` in `setup_dependent_package` was trying to create a directory in root, giving permissions issues. Even if you commented out those `mkdirp` calls, Spack would add the wrong directories to `PYTHONPATH`. Added a fallback hard-coded to `lib/pythonX.Y/site-packages` if sysconfig is broken (this is what distutils always did).
This PR supports the creation of securely signed binaries built from spack
develop as well as release branches and tags. Specifically:
- remove internal pr mirror url generation logic in favor of buildcache destination
on command line
- with a single mirror url specified in the spack.yaml, this makes it clearer where
binaries from various pipelines are pushed
- designate some tags as reserved: ['public', 'protected', 'notary']
- these tags are stripped from all jobs by default and provisioned internally
based on pipeline type
- update gitlab ci yaml to include pipelines on more protected branches than just
develop (so include releases and tags)
- binaries from all protected pipelines are pushed into mirrors including the
branch name so releases, tags, and develop binaries are kept separate
- update rebuild jobs running on protected pipelines to run on special runners
provisioned with an intermediate signing key
- protected rebuild jobs no longer use "SPACK_SIGNING_KEY" env var to
obtain signing key (in fact, final signing key is nowhere available to rebuild jobs)
- these intermediate signatures are verified at the end of each pipeline by a new
signing job to ensure binaries were produced by a protected pipeline
- optionally schedule a signing/notary job at the end of the pipeline to sign all
packages in the mirror
- add signing-job-attributes to gitlab-ci section of spack environment to allow
configuration
- signing job runs on special runner (separate from protected rebuild runners)
provisioned with public intermediate key and secret signing key
Old concrete specs were slipping through in `_assign_hash`, and `package_hash` was
attempting to recompute a package hash when we could not know the package at the time
of concretization.
Part of this was that the logic for `_assign_hash` was hard to understand -- it was
called twice from `_finalize_concretization` and had special cases for both args it
was called with. It's much easier to understand the logic here if we just inline it.
- [x] Get rid of `_assign_hash` and just integrate it with `_finalize_concretization`
- [x] Don't call `_package_hash` at all for already-concrete specs.
- [x] Add regression test.
Use `spack build` as build dir to avoid recursive link error.
```
config.status: linking /var/folders/fy/x2xtwh1n7fn0_0q2kk29xkv9vvmbqb/T/s3j/spack-stage/spack-stage-sed-4.8-wraqsot6ofzvr3vrgusx4mj4mya5xfux/spack-src/GNUmakefile to GNUmakefile
config.status: executing depfiles commands
config.status: executing po-directories commands
config.status: creating po/POTFILES
config.status: creating po/Makefile
==> sed: Executing phase: 'build'
==> [2022-05-25-14:15:51.310333] 'make' '-j8' 'V=1'
make: GNUmakefile: Too many levels of symbolic links
make: stat: GNUmakefile: Too many levels of symbolic links
make: *** No rule to make target `GNUmakefile'. Stop.
```
This PR introduces a new build cache layout and package format, with improvements for
both efficiency and security.
## Old Format
Currently a binary package consists of a `spec.json` file at the root and a `.spack` file,
which is a `tar` archive containing a copy of the `spec.json` format, possibly a detached
signature (`.asc`) file, and a tar-gzip compressed archive containing the install tree.
```
build_cache/
# metadata (for indexing)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
<arch>/
<compiler>/
<name>-<ver>/
# tar archive
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spack
# tar archive contents:
# metadata (contains sha256 of internal .tar.gz)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
# signature
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json.asc
# tar.gz-compressed prefix
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.tar.gz
```
After this change, the nesting has been removed so that the `.spack` file is the
compressed archive of the install tree. Now signed binary packages will take the
form of a clearsigned `spec.json` file (a `spec.json.sig`) at the root, while unsigned
binary packages will contain a `spec.json` at the root.
## New Format
```
build_cache/
# metadata (for indexing, contains sha256 of .spack file)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
# clearsigned spec.json metadata
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json.sig
<arch>/
<compiler>/
<name>-<ver>/
# tar.gz-compressed prefix (may support more compression formats later)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spack
```
## Benefits
The major benefit of this change is that the signatures on binary packages can be
verified without:
1. Having to download the tarball, or
2. having to extract an unknown tarball.
(1) is an improvement in efficiency; (2) is a security fix: we now ensure that we trust the
binary before we try to run it through `tar`, which avoids potential attacks.
## Backward compatibility
Also after this change, spack should still be able to handle the previous buildcache
structure and binary mirrors with mixed layouts.
This PR builds on #28392 by adding a convenience command to create a local mirror that can be used to bootstrap Spack. This is to overcome the inconvenience in setting up this mirror manually, which has been reported when trying to setup Spack on air-gapped systems.
Using this PR the user can create a bootstrapping mirror, on a machine with internet access, by:
% spack bootstrap mirror --binary-packages /opt/bootstrap
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror
To register the mirror on the platform where it's supposed to be used run the following command(s):
% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
The mirror has to be moved over to the air-gapped system, and registered using the commands shown at prompt. The command has options to:
1. Add pre-built binaries downloaded from Github (default is not to add them)
2. Add development dependencies for Spack (currently the Python packages needed to use spack style)
* bootstrap: refactor bootstrap.yaml to move sources metadata out
* bootstrap: allow adding/removing custom bootstrapping sources
This operation can be performed from the command line since
new subcommands have been added to `spack bootstrap`
* Add --trust argument to spack bootstrap add
* Add a command to generate a local mirror for bootstrapping
* Add a unit test for mirror creation
* Allow Kokkos with OpenMPTarget backend
* Restrict SYCL and OpenMPTarget to C++17 or higher
* Improve C++ standard check for SYCL and OpenMPTarget
* Fix indentation
Currently, environments can either be concretized fully together or fully separately. This works well for users who create environments for interoperable software and can use `concretizer:unify:true`. It does not allow environments with conflicting software to be concretized for maximal interoperability.
The primary use-case for this is facilities providing system software. Facilities provide multiple MPI implementations, but all software built against a given MPI ought to be interoperable.
This PR adds a concretization option `concretizer:unify:when_possible`. When this option is used, Spack will concretize specs in the environment separately, but will optimize for minimal differences in overlapping packages.
* Add a level of indirection to root specs
This commit introduce the "literal" atom, which comes with
a few different "arities". The unary "literal" contains an
integer that is the ID of a spec literal. Other "literals"
contain information on the requests made by literal ID. For
instance zlib@1.2.11 generates the following facts:
```
literal(0,"root","zlib").
literal(0,"node","zlib").
literal(0,"node_version_satisfies","zlib","1.2.11").
```
This should help with solving large environments "together
where possible" since later literals can be now solved
together in batches.
* Add a mechanism to relax the number of literals being solved
* Modify spack solve to display the new criteria
Since the new criteria is above all the build criteria,
we need to modify the way we display the output.
Originally done by Greg in #27964 and cherry-picked
to this branch by the co-author of the commit.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Inject reusable specs into the solve
Instead of coupling the PyclingoDriver() object with
spack.config, inject the concrete specs that can be
reused.
A method level function takes care of reading from
the store and the buildcache.
* spack solve: show output of multi-rounds
* add tests for best-effort coconcretization
* Enforce having at least a literal being solved
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Py-x21 now works, needs dependencies
* Added dependencies to py-x21
* Making flake style check happy
* [py-x21] flake8
* [py-x21]
- added homepage
- added placeholder description
- added comment about checksums
* [py-x21] added darwin support and fixed issue with python 3.7 wheel name
* [py-x21] adding checksum hash
* [py-x21] removed duplicate py-pynacl
* [py-x21]
- updated description
- updated version listing to have a different version for each version
of python. Also, versions dependent on sys.platform
- updated url_for_version to not require post concretized information so
that spack checksum works
* [py-x21] isort
Co-authored-by: vehrc <vehrc@rit.edu>
Previously the regex was only checking for presence of quotes as a beginning
or end character and not a matching set. This erroneously identified the
following *single* argument as being quoted:
source bashenvfile &> /dev/null && python3 -c "import os, json; print(json.dumps(dict(os.environ)))"
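A sketch of the stricter check, requiring a matching pair rather than a quote at either end:
```python
import re

MATCHED_QUOTES = re.compile(r"""^('[^']*'|"[^"]*")$""")

# The argument above ends with a double quote but does not start with
# one, so it is no longer treated as quoted.
arg = 'source bashenvfile &> /dev/null && python3 -c "..."'
assert MATCHED_QUOTES.match(arg) is None
```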
rocm-5.1.0 removed librocrand.so from the ROCM_DIR/rocrand/lib location
(but the includes are still at this location):
```
/opt/rocm-5.0.2/lib/librocrand.so
/opt/rocm-5.0.2/rocrand/lib/librocrand.so
/opt/rocm-5.1.0/lib/librocrand.so
drwxr-xr-x 2 root root 617 Mar  8 08:20 /opt/rocm-5.0.2/rocrand/include
drwxr-xr-x 2 root root 617 Mar 31 09:48 /opt/rocm-5.1.0/rocrand/include
```
Add a config option to strip `-Werror*` or `-Werror=*` from compile lines everywhere.
```yaml
config:
keep_werror: false
```
By default, we strip all `-Werror` arguments out of compile lines, to avoid unwanted
failures when upgrading compilers. You can re-enable `-Werror` in your builds if
you really want to, with either:
```yaml
config:
keep_werror: all
```
or to keep *just* specific `-Werror=XXX` args:
```yaml
config:
keep_werror: specific
```
This should make swapping in newer versions of compilers much smoother when
maintainers have decided to enable `-Werror` by default.
Parse error information is kept for specs, but it doesn't seem like we propagate it
to the user when we encounter an error. This fixes that.
e.g., for this error in a package:
```python
depends_on("python@:3.8", when="0.900:")
```
Before, with no context and no clue that it's even from a particular spec:
```
==> Error: Unexpected token: ':'
```
With this PR:
```
==> Error: Unexpected token: ':'
Encountered when parsing spec:
0.900:
^
```
* Added autotools configure flags to ensure that hwloc finds the correct
version of CUDA that it was concretized against, rather than the first
one that pkg-config finds.
* Added support for finding the correct version of ROCm libraries. Fixed Flake8.
* Fixed guard on finding ROCm library
* [py-h2] py-wheel is implied by PythonPackage
* [py-h2] python dependencies should be type=('build', 'run')
* [py-h2] fixed dependencies for py-h2@4.0.0
* [py-h2] added version 3.2.0
* [py-h2] added version 4.1.0
* [py-h2] Older version requires py-enum34 for older versions of python
Add two new cloud pipelines for E4S on Amazon Linux, include arm and x86 (v3 + v4) stacks.
Notes:
- Updated mpark-variant to remove conflict that no longer exists in Amazon Linux
- The `which` command on Amazon Linux prefixes all results when padded_length is too high. In this case, padded_length<=503 works as expected, so a conservative length of 384 was chosen.
* Introduce concretizer:unify option to replace spack:concretization
* Deprecate concretization
* Make spack:concretization overrule concretize:unify for now
* Add environment update logic to move from spack:concretization to spack:concretizer:reuse
* Migrate spack:concretization to spack:concretize:unify in all locations
* For new environments make concretizer:unify explicit, so that defaults can be changed in 0.19
* Update h5bench maintainers and versions
* Include version 1.1 for h5bench
* Correct release hash and set default version
* Update .tar.gz version
* Include new version and update runtime
* Update year
* Update package.py
* Update package.py
fixes #30700
To avoid clingo adding penalties for not using the
default value for a variant, it's better to model
the variant as conditional where possible.
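In a recipe that looks roughly like the following (values borrowed from the Boost example earlier in this log):
```python
# The variant simply does not exist before 1.65.0, so clingo has no
# default value to penalize on older versions.
variant(
    "context-impl",
    default="fcontext",
    values=("fcontext", "ucontext", "winfib"),
    when="@1.65.0:",
)
```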
* This commit replaces Boost.with_default_variants with the variants that packages are precisely dependent upon. This is the third batch of 16 packages with modified boost dependencies.
* style fix
* Update var/spack/repos/builtin/packages/sympol/package.py
Co-authored-by: Tim Haines <thaines.astro@gmail.com>
* fix style
* Apply suggestions from code review
Co-authored-by: Tim Haines <thaines.astro@gmail.com>
* Fix Trilinos boost deps
* Fix style
Co-authored-by: Tim Haines <thaines.astro@gmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Add a `build_type` variant, which allows building optimized compilers,
as well as target libraries (libstdc++ and friends).
The default is `build_type=RelWithDebInfo`, which corresponds to GCC's
default of -O2 -g.
When building with `+bootstrap %gcc`, also add Spack's arch specific
flags using the common denominator between host and new GCC.
It is done by creating a config/spack.mk file in def patch, that looks
as follows:
```
BOOT_CFLAGS := $(filter-out -O% -g%, $(BOOT_CFLAGS)) -O2 -g -march=znver2 -mtune=znver2
CFLAGS_FOR_TARGET := $(filter-out -O% -g%, $(CFLAGS_FOR_TARGET)) -O2 -g -march=znver2 -mtune=znver2
CXXFLAGS_FOR_TARGET := $(filter-out -O% -g%, $(CXXFLAGS_FOR_TARGET)) -O2 -g -march=znver2 -mtune=znver2
```
The oneapi and dpcpp compilers are essentially the same except for which
binary is used for CXX. Spack will detect them as "mixed toolchain" and
not inject compiler optimization flags. This will be needed once
archspec has entries for the oneapi and dpcpp compilers. This PR detects
when dpcpp and oneapi are in the toolchains list and explicitly sets
`is_mixed_toolchain` to `False`.
* [py-openslide-python] added version 1.1.2 and set max py-setuptools version for 1.1.1
* [py-openslide-python]
- setuptools required for all possible newer versions
- python is type build run
* [py-openslide-python] use pil provider
* Add version 3.0 and 3.1 and prelim OpenMP support
* Fix flag handler missing spec variable
* Use self.compiler.openmp_flag instead of -fopenmp
* Fix whitespace
Fixes qt configure errors with external openssl on older systems (rhel7)
See
efc02f9cc3/dist/changes-5.15.0 (L346)
This means from now on, `qt ^openssl@1.0` gets you `qt@5.15.4 ~ssl`:
clingo chooses latest qt version **but disables ssl support**.
Error messages for the clingo concretizer have proven challenging. The current messages are incredibly vague and often don't help users at all. Unsat cores in clingo are not guaranteed to be minimal, and lead to cores that are either not useful or need to be post-processed for hours to reach a minimal core.
Following up on an idea from a Slack conversation with kwryankrattiger, this PR takes a new approach. We eliminate most integrity constraints and minima/maxima on choice rules in clingo, and instead force invalid states to imply an error predicate. The error predicate can include context on the cause of the error (Package, Version, etc.). These error predicates are then heavily optimized against, to ensure that we do not include error facts in the solution when a solution with no error facts could be generated. When post-processing the clingo solution to construct specs, any error facts cause the program to raise an exception. This leads to much more legible error messages. Each error predicate includes a priority and an error message; the message is formatted with the remaining arguments. The priority is used to ensure that when clingo has a choice of which rules to violate, it chooses the one which will be most informative to the user.
Performance:
"fresh" concretizations appear to suffer a ~20% performance penalty under this branch, while "reuse" concretizations see a speedup of around 33%.
Possible optimizations if users still see unhelpful messages:
There are currently 3 levels of priority of the error messages. Additional priorities are possible, and can allow us finer granularity to ensure more informative error messages are provided in lieu of less informative ones.
Future work:
Improve tests to ensure that every possible rule implying an error message is exercised
A non-existent upstream should not be fatal: it could simply mean it is
not deployed yet. In the meantime, it should not block the user from
rebuilding anything they need.
A warning is still emitted, to let the user decide if this is ok or not.
* Fix for xtensor-xsimd
* Add sha256 for all new releases
* renamed ufcx package
* Update sha for ffcx
* fixed hashes and modified fenics-dolfinx to depend on ufcx
* cleaned and fixed dependency types
* use spec.satisfies in cmake_args
* bumped to ufcx@0.4.1
* address PR comments
* fix hashes
* update parmetis in cmake_args to reflect default setting
* update versions
* Add dependency fix
* bump basix to 0.4.2 and address PR comments
* Versioning fixes
* Use xtensor-0.24: and loosen pybind11
* Add conflicts for partitioners
* Updates on partitioners
* use define_from_variant
* Tidy up some dependencies
* Work on multi-variants for graph partitioners
* Fix KaHIP issue.
KaHIP changed the name of its library from 'interface' to 'kahip'. Pin earlier versions of DOLFINx to earlier versions of KaHIP for proper detection.
Co-authored-by: Chris Richardson <chris@bpi.cam.ac.uk>
Co-authored-by: Garth N. Wells <gnw20@cam.ac.uk>
Fixes missing chgrp on symlinks in package installations, and errors on
symlinks referencing non-existent or non-writable locations.
Note: `os.chown(..., follow_symlinks=False)` is Python 3 only, but
`os.lchown` exists in both versions.
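A sketch of the symlink-safe ownership change:
```python
import os


def chgrp_symlink(path, gid):
    # os.lchown operates on the link itself; -1 leaves the uid unchanged.
    os.lchown(path, -1, gid)
```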
* Change license dir from hard-coded to a configurable item
* Change config item to be a string not an array
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Trying to compute `dag_hash()` or `package_hash()` on a concrete spec that doesn't have
a `_package_hash` attribute would attempt to recompute the package hash.
This most commonly manifests as a failed lookup of a namespace if you attempt to uninstall
or compute the hashes of packages in external repositories that aren't registered, e.g.:
```console
> spack spec --json c/htno
==> Error: Unknown namespace: myrepo
```
While it wouldn't change the already-assigned `dag_hash` value, this behavior is
incorrect, since the package file for a previously concrete spec:
1. might have changed since concretization,
2. might not exist anymore, or
3. might just not be findable by Spack.
This PR ensures that the package hash can't be computed on older concrete specs. Instead
of calling `package_hash()` from within `to_node_dict()`, we now check for the `_package_hash`
attribute and only add the package_hash to the spec record if it's there.
This PR also handles the tricky semantics of computing `package_hash()` at concretization
time. We have to compute it *before* marking the spec concrete so that `to_node_dict` can
use it. But this means that the logic for `package_hash()` can't rely on `spec.concrete`,
as it is called *during* concretization. Instead of checking for concreteness, `package_hash()`
now checks `_patches_assigned()` to determine whether it should add them to the package
hash.
- [x] Add an assert to `package_hash()` so it can't be called on specs for which it
would be wrong.
- [x] Add an `_assign_hash()` method to handle tricky semantics of `package_hash`
and `dag_hash`.
- [x] Rework concretization to call `_assign_hash()` before and after marking specs
concrete.
- [x] Rework content hash part of package hash to check for `_patches_assigned()`
instead of `spec.concrete`.
- [x] regression test
* [py-tensorflow-hub] applied patch for newer version of zlib
* [py-tensorflow-hub] patch also applies to 0.11.0
* [py-tensorflow-hub] Audit fix
1. patch URL in package py-tensorflow-hub must end with ?full_index=1
Newer versions of gobject-introspection require Meson to build. Convert
the package into a hybrid one that still supports older versions using
Autotools.
* arm-forge: Download via HTTPS
Update download URL to use HTTPS (rather than HTTP)
* arm-forge: Allow +probe to depend on python3
Allow python dependency required for arm-forge+probe to be python3 as
well as 2.7.x
* arm-forge: Add versions up to 22.0.1
By default, libfuse installs helper programs like `fusermount3`, which
are mostly useless if not installed with setuid (that is, `+useroot`).
However, their presence makes it complicated to use globally installed
versions, which can be combined with a Spack-installed FUSE library.
In particular, on systems that have a setuid fusermount3 binary, but no
libfuse-dev installed, it is nice to be able to build libfuse with Spack, and
have it call the system setuid executable.
* Correcting include and library paths using a patch file so that RVS builds
the following library files in Spack:
libperf.so.0.0
libpebb.so.0.0
libiet.so.0.0
libgst.so.0.0
libpqt.so.0.0
libmem.so.0.0
libbabel.so.0.0
* Replacing ROCM_PATH with RPATH in the deviceid.sh before installing in Spack build.
* Reducing multiple environment variables for the HIP and HSA paths
- Removed gl dependency.
- Specify clang as cmake compiler as gcc was being
improperly picked up. As a result, ffi include
path was needed in C/CXX flags.
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Previously we sorted by hash values for `spack graph`, but changing hashes can make the
test brittle and the node order seem nondeterministic to users.
- [x] Sort nodes in `spack graph` by the default edge order, which takes into account
parent and child names as well as dependency types.
- [x] Update ASCII test output for new order.
The dependency check currently checks whether there are only build
dependencies left for a particular package. However, the database also
contains uninstalled packages, which can cause the check to fail.
For instance, with `bison` and `flex` having already been uninstalled,
`m4` will have the following dependents:
```
bison ('build', 'run')--> m4
flex ('build',)--> m4
libnl ('build',)--> m4
```
`bison` and `flex` should be ignored in this case because they are not
installed anymore.
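A small self-contained sketch of the corrected check (plain data structures, not Spack's actual database API): a dependent only blocks uninstallation if it is itself installed and reaches the package through more than build edges.
```python
def blocking_dependents(dependents):
    # dependents: name -> (installed, deptypes)
    return [
        name
        for name, (installed, deptypes) in dependents.items()
        if installed and set(deptypes) - {"build"}
    ]


m4_dependents = {
    "bison": (False, ("build", "run")),  # uninstalled: ignore
    "flex": (False, ("build",)),         # uninstalled: ignore
    "libnl": (True, ("build",)),         # installed, but build-only
}
assert blocking_dependents(m4_dependents) == []  # safe to uninstall m4
```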
Fixes #30673
* ceed50: add ceed 5.0.0 and pumi 2.2.7
* libceed-0.10
* ceed50: add omegah
* omega-h: mpi and cuda builds work
* omega-h: fix style
* New package: libfms
* New version: gslib@1.0.7
CEED: add some TODO items for the 5.0 release
* ceed: variant name consistent with package name
* LAGHOS: allow newer versions of MFEM to be used with v3.1
* LIBCEED: add missing 'install' target in 'install_targets'
* CEED: address some TODO items + some tweaks
* MFEM: add new variant for FMS (libfms)
* CEED: v5.0.0 depends on 'libfms' and 'mfem+fms'
* RATEL: add missing 'install' target in 'install_targets'
* CEED: add dependency for v5.0.0 on Ratel v0.1.2
* CEED: add Nek-related dependencies for ceed@5.0.0
* CEED: v5.0.0 depends on MAGMA v2.6.2
* libCEED: set the `CUDA_ARCH` makefile parameter
* libCEED: set the `HIP_ARCH` makefile parameter
Co-authored-by: Jed Brown <jed@jedbrown.org>
Co-authored-by: Veselin Dobrev <dobrev@llnl.gov>
Co-authored-by: Veselin Dobrev <v-dobrev@users.noreply.github.com>
#24556 merged in support for extracting .zip files via Python's ZipFile.
However as per #30200 ZipFile does not preserve file permissions of
the extracted contents. This PR returns to using the `unzip`
executable on non-Windows systems (as was the case before #24556)
and now uses `tar` on Windows to extract .zip files.
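A sketch of the resulting extractor choice (command lines illustrative; `tar` on Windows 10+ is bsdtar, which reads .zip archives):
```python
import platform


def zip_extraction_command(archive):
    # ZipFile drops permission bits, so shell out to a native tool instead.
    if platform.system() == "Windows":
        return ["tar", "-xf", archive]
    return ["unzip", "-q", archive]
```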
We previously had checks in `directory_layout` to check for build-dependency
conflicts when we weren't storing build dependencies. We don't need
those anymore; we can just rely on the DAG hash now that it includes everything
we know about each spec.
- [x] Remove vestigial code for checking installed spec against concrete spec
in `ensure_installed()`
- [x] Remove `SpecHashCollisionError` -- if specs have the same hash now, they're
the same as far as `DirectoryLayout` should be concerned.
- [x] Convert spec comparison to `dag_hash()` comparison when adding extensions.
The database now stores full hashes, so we need to adjust the criteria we use to
determine if something can be uninstalled. Specifically, it's ok to uninstall things that
have remaining build-only dependents.
With the original DAG hash, we did not store build dependencies in the database, but
with the full DAG hash, we do. Previously, we'd never tell the concretizer about build
dependencies of things used by hash, because we never had them. Now, we have to avoid
telling the concretizer about them, or they'll unnecessarily constrain build
dependencies for new concretizations.
- [x] Make database track all dependencies included in the `dag_hash`
- [x] Modify spec_clauses so that build dependency information is optional
and off by default.
- [x] `spack diff` asks `spec_clauses` for build dependencies for completeness
- [x] Modify `concretize.lp` so that reuse optimization doesn't affect fresh
installations.
- [x] Modify concretizer setup so that it does *not* prioritize installed versions
over package versions. We don't need this with reuse, so they're low priority.
- [x] Fix `test_installed_deps` for full hash and new concretizer (does not work
for old concretizer with full hash -- leave this for later if we need it)
- [x] Move `test_installed_deps` mock packages to `builtin.mock` for easier debugging
with `spack -m`.
- [x] Fix `test_reuse_installed_packages_when_package_def_changes` for full hash
- [x] update test to use `build_hash` instead of `dag_hash`, as we're testing for
graph structure, and specifically NOT testing for package changes.
- [x] make hash descriptors callable on specs to simplify syntax for invoking them
- [x] make `Spec.spec_hash()` public
This removes all but one usage of runtime hash. The runtime hash was being used to write
historical lockfiles for tests, but we don't need it for that; we can just save those
lockfiles.
- [x] add legacy lockfiles for v1, v2, v3
- [x] fix bugs with v1 lockfile tests (the dummy lockfile we were writing was not actually
a v1 lockfile because it used the new spec file format).
- [x] remove all but one runtime_hash usage -- that one needs a small rework of the
concretizer to really fix, as it's about separate concretization of build
dependencies.
- [x] Document the history of the lockfile format in `environment/__init__.py`
Some test cases had to be modified in a kludgy way so that abstract specs made
concrete would have versions on them. We shouldn't *need* to do this, as the
only reason we care is because the content hash has to be able to get an archive
for a version.
This modifies the content hash so that it can be called on abstract specs,
including only relevant content.
This does NOT add a partial content hash to the DAG hash, as we do not really
want that -- computing in-memory spec hashes should not require loading package files.
It just makes `Package.content_hash()` less prickly and tests easier to
understand.
`spack monitor` expects a field called `spec_full_hash`, so we shouldn't change that.
Instead, we can pass a `dag_hash` (which is now the full hash) but not change the field
name.
`hashes_final` was used to indicate when a spec was concrete but possibly lacked
`full_hash` or `build_hash` fields. This was only necessary because older Spacks
didn't generate them, and we want to avoid recomputing them, as we likely do not
have the same package files as existed at concretization time.
Now, we don't need to do that -- there is only the DAG hash and specs are either
concrete and have a `dag_hash`, or not concrete and have no `dag_hash`. There's
no middle ground.
Without some enforcement of spec ordering, python 2 produced
different results in the affected test than did python 3. This
change makes the arbitrary but reproducible decision to sort
the specs by their lockfile key alphabetically.
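The fix amounts to iterating in a fixed order, e.g.:
```python
roots = {"zzzhash": "spec-b", "aaahash": "spec-a"}
# Sort by lockfile key so iteration order is identical on Python 2 and 3.
for key in sorted(roots):
    print(key, roots[key])
```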
The full hash now appears twice in the spec dict, so replacing just
the value would replace it under both "hash" and "full_hash". Only replace
the one that appears after "full_hash".
I'm actually not sure what purpose this test served, so maybe it
could be removed, as it may be testing some distinction between
full and dag hash which no longer exists.
For a long time, Spack has used a coarser hash to identify packages
than it likely should. Packages are identified by `dag_hash()`, which
includes only link and run dependencies. Build dependencies are
stripped before hashing, and we have not included hashes of build
artifacts or the `package.py` files used to build. This means the
DAG hash actually doesn't represent all the things Spack can build,
and it reduces reproducibility.
We did this because, in the early days, users were (rightly) annoyed
when a new version of CMake, autotools, or some other build dependency
would necessitate a rebuild of their entire stack. Coarsening the hash
avoided this issue and enabled a modicum of stability when only reusing
packages by hash match.
Now that we have `--reuse`, we don't need to be so careful. Users can
avoid unnecessary rebuilds much more easily, and we can add more
provenance to the spec without worrying that frequent hash changes
will cause too many rebuilds.
This commit starts the refactor with the following major change:
- [x] Make `Spec.dag_hash()` include build, run, and link
dependencies and the package hash (it is now equivalent to
`full_hash()`).
It also adds a couple of bugfixes for problems discovered during
the switch:
- [x] Don't add a `package_hash()` in `to_node_dict()` unless
the spec is concrete (fixes breaks on abstract specs)
- [x] Don't add source ids to the package hash for packages without
a known fetch strategy (many mock packages are like this)
- [x] Change how `Spec.patches` is memoized. Using
`llnl.util.lang.memoized` on `Spec` objects causes specs to
be stored in a `dict`, which means they need a hash. But,
`dag_hash()` now includes patch `sha256`'s via the package
hash, which can lead to infinite recursion
For tutorial builds, we should continue to allow deprecated builds to be installed. We
can update them as needed when we update the tutorial, but we don't need to correct them
immediately on deprecation in CI.
- [x] add `deprecated:true` to tutorial `spack.yaml` config.
* updating googletest version to 1.11 to avoid GTEST_DISALLOW_ASSIGN_ error
* limiting the version scope
* modified the version limit
Co-authored-by: mohan babu <mohbabul@amd.com>
Upstream neovim builds with luajit-openresty or luajit in almost all
cases. To support the current usage, a user can specify that they want
lua, but this will allow the use of the normal (faster, better tested
and better maintained) setup.
* Add checksum for py-pylint@2.13.5
* Update dependencies
* Add checksum for py-astroid@2.11.4
* Correct py-toml addition and add py-tomli dependency
* Remove py-pytoml dependency for versions @2.13:
* Modify py-astroid version range
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Discontinue py-astroid dependency @2.8.0:2.8 for new versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Discontinue py-mccabe dependency @0.6.0:0.6 for new versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Remove mccabe and setuptools-scm dependencies
* Update astroid dependencies
* Extend py-typed-ast version range to future releases
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-dill only required for version 2.13.5 and above
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add mccabe dependency and correct setuptools run dependency
* Setuptools fix
* Add setuptools as run dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pyarrow: Add version 7.0.0
* Add version constraints on dependencies
* Add version 8.0.0
* arrow: Add version 8.0.0
* py-pyarrow: Allow version 8.0.0 of arrow
* Bump up rocm release version to rocm-5.1.0
* update rocm-opencl for rocm-5.1.0 release
* update the migraphx,miopen(hip,opencl),mivisionx,rocm-tensile
* update the mlirmiopen checksum version
- [x] Add `mkdir -p` and `chmod` to ensure `/home/spack-test` exists and
has correct permissions.
- [x] Remove version comments from dependabot-managed action commits
- [x] Don't duplicate comment describing required fixes for distros with
patched git
`spack pkg list` tests were broken by #29593 for cases when your `builtin.mock` repo
still has stale backup files (or, really, stale directories) sitting around. This
happens if you switch branches a lot. In this case, things like this were causing
erroneous packages in the mock listing:
```
var/spack/repos/builtin.mock/packages/
    foo/
        package.py~
```
- [x] make `list_packages` consider only directories with one-deep `package.py` files.
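A sketch of the stricter listing (assuming the repo layout shown above; function shape is illustrative):
```python
import os


def list_packages(repo_path):
    """List only directories that directly contain a package.py file."""
    pkg_dir = os.path.join(repo_path, "packages")
    return sorted(
        name
        for name in os.listdir(pkg_dir)
        if os.path.isfile(os.path.join(pkg_dir, name, "package.py"))
    )
```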
Reworking lua to allow easier substitution of the base lua implementation.
Also adding in a maintained version of luajit and re-factoring the entire stack
to use a custom build-system to centralize functionality like environment
variable management and luarocks installation.
The `lua-lang` virtual is now versioned so that a package that requires
Lua 5.1 semantics can get any lua, but one that requires 5.2 will only
get upstream lua.
The luaposix package requires lua-bit32, but only when built with a
lua conforming to version 5.1. This adds the package, and the
dependencies, but exposed a problem with luarocks dependency
detection. Since we're installing each package in its own "tree" and
there's no environment variable to list extra trees, spack now
generates a luarocks config file that lists all the trees of all the
dependencies, and references it by setting `LUAROCKS_CONFIG`
in the build environment of every LuaPackage. This allows luarocks
to find the spack installed dependencies correctly rather than
trying (and failing) to download them.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
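A sketch of the generated configuration described above, assuming a helper along these lines (the helper name and file layout are illustrative):
```python
import os


def write_luarocks_config(dep_prefixes, config_path):
    """Emit a luarocks config listing every dependency prefix as a rocks
    tree, then point LUAROCKS_CONFIG at it."""
    trees = ",\n    ".join(
        '{{ name = "{0}", root = "{1}" }}'.format(os.path.basename(p), p)
        for p in dep_prefixes
    )
    with open(config_path, "w") as f:
        f.write("rocks_trees = {{\n    {0}\n}}\n".format(trees))
    os.environ["LUAROCKS_CONFIG"] = config_path
```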
Some of our `git` tests still fail when `init.defaultBranch` is set to something other
than `master`.
- [x] get rid of all hard-coded `master` refs
- [x] Use `'default'` to key tests that use the default branch
When running on Windows, Spack may generate files in the stage/install
prefixes that do not have write permissions, which prevents the
removal of those directories (e.g. when cleaning stages or uninstalling).
There should be a refactoring to avoid this in the first place, but that
is assumed to be longer term, so the temporary fix is to make such files
writable if they are not. This PR:
* Automatically handles these permissions errors when uninstalling
packages from the Spack root (makes then writable)
* Updates similar already-existing logic when removing Spack-managed
stage directories (the error-handling was assuming all errors were
permissions errors and was therefore handling other errors
inappropriately)
Note: these permissions issues only appear on Windows so this logic is
only applied there (permissions are not modified for this purpose on
Linux etc.).
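The usual shape of that fix is an `onerror` handler that clears the read-only bit and retries; a minimal sketch (error filtering simplified):
```python
import os
import stat


def _retry_after_making_writable(func, path, exc_info):
    # On Windows, removal fails on read-only files: clear the bit, retry once.
    os.chmod(path, stat.S_IWRITE)
    func(path)


# usage: shutil.rmtree(stage_path, onerror=_retry_after_making_writable)
```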
This also adds special handling for a case where calling `isdir`
on an `os.DirEntry` object would fail for improperly-created symlinks
(e.g. on Windows, using `os.symlink` without `target_is_directory=True`).
Note this specific issue only came up when enabling link_tree tests
(specifically `source_merge_visitor_cant_be_cyclical`).
* create function for translating compiler names on specs/compiler entries in manifest
* add tests for translating compiler names on spec/compiler entries
* use higher-level function in test and add comment to prefer testing via higher-level function
* opensuse clingo check should not fail on account of this pr, but I cannot get it to pass by restarting via CI UI
* Addition of 1.1.9dev version.
* Small style fix -- extra blank line.
* Update var/spack/repos/builtin/packages/py-maestrowf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-maestrowf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-maestrowf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-maestrowf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Additional dependencies and version constraints.
* Revert to py-poetry.
* Remove run from cryptography (build only).
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Force GCC to always provide a C++14 flag
Updated gnu logic so that the c++14 flag for g++ is always propagated.
This fixes issues with build systems that error out if passed an empty
string for a flag.
Engaging in the best kind of software engineering by updating the unit
test to pass with the value it is now passed. This should better match
the expected flag for g++ compiling with the C++14 standard
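A sketch of the updated gnu logic, modeled on Spack's GCC compiler class (version cutoffs abbreviated):
```python
from spack.compiler import Compiler, UnsupportedCompilerFlag
from spack.version import Version


class Gcc(Compiler):  # excerpt-style sketch
    @property
    def cxx14_flag(self):
        if self.real_version < Version("4.8"):
            raise UnsupportedCompilerFlag(
                self, "the C++14 standard", "cxx14_flag", "< 4.8"
            )
        elif self.real_version < Version("4.9"):
            return "-std=c++1y"
        # Previously this returned "" for GCC 6+, which defaults to C++14;
        # now an explicit flag is always propagated.
        return "-std=c++14"
```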
* Add py-docutils@0.16
* Add sphinx-tabs package
* Update var/spack/repos/builtin/packages/py-sphinx-tabs/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-sphinx-tabs/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-sphinx-tabs/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This ensures that multiple spack instances called from `make` will respect the maximum number of jobs in the POSIX jobserver across packages.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Problem: GCC 9.4.0 catches a bad integer comparison in
resource/hlapi/bindings/c++/reapi_cli_impl.hpp in flux-sched@0.22.0
and current master.
Add a patch to work around the problem until an upstream fix is
available.
* use the init.defaultBranch name, not master
* make tcl and modules/common independent
Both used to use not just the same directory, but the same *file* for
their outputs. In parallel this can cause problems, but it can also
accidentally allow expected failures to pass if the file is left around
by mistake.
* use a non-global misc_cache in tests
* make pkg tests resilient to gitignore
* make source cache and module directories non-global
`make` solves a lot of headaches that would otherwise have to be implemented in Spack:
1. Parallelism over packages through multiple `spack install` processes
2. Orderly output of parallel package installs thanks to `make --output-sync=recurse` or `make -Orecurse` (works well in GNU Make 4.3; macOS is unfortunately on a 16-year-old 3.x version, but it's one `spack install gmake` away...)
3. Shared jobserver across packages, which means a single `-j` to rule them all, instead of manually finding a balance between `#spack install processes` & `#jobs per package` (See #30302).
This PR adds the `spack env depfile` command that generates a Makefile with dag hashes as
targets, and dag hashes of dependencies as prerequisites, and a command
along the lines of `spack install --only=package /hash` to just install
a single package.
It exposes two convenient phony targets: `all`, `fetch-all`. The former installs the environment, the latter just fetches all sources. So one can either use `make all -j16` directly or run `make fetch-all -j16` on a login node and `make all -j16` on a compute node.
Example:
```yaml
spack:
  specs: [perl]
  view: false
```
running
```
$ spack -e . env depfile --make-target-prefix env | tee Makefile
```
generates
```Makefile
SPACK ?= spack
.PHONY: env/all env/fetch-all env/clean
env/all: env/env
env/fetch-all: env/fetch
env/env: env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww
	@touch $@
env/fetch: env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc
	@touch $@
env/dirs:
	@mkdir -p env/.fetch env/.install
env/.fetch/%: | env/dirs
	$(info Fetching $(SPEC))
	$(SPACK) -e '/tmp/tmp.7PHPSIRACv' fetch $(SPACK_FETCH_FLAGS) /$(notdir $@) && touch $@
env/.install/%: env/.fetch/%
	$(info Installing $(SPEC))
	+$(SPACK) -e '/tmp/tmp.7PHPSIRACv' install $(SPACK_INSTALL_FLAGS) --only-concrete --only=package --no-add /$(notdir $@) && touch $@
# Set the human-readable spec for each target
env/%/cdqldivylyxocqymwnfzmzc5sx2zwvww: SPEC = perl@5.34.1%gcc@10.3.0+cpanm+shared+threads arch=linux-ubuntu20.04-zen2
env/%/gv5kin2xnn33uxyfte6k4a3bynhmtxze: SPEC = berkeley-db@18.1.40%gcc@10.3.0+cxx~docs+stl patches=b231fcc arch=linux-ubuntu20.04-zen2
env/%/cuymc7e5gupwyu7vza5d4vrbuslk277p: SPEC = bzip2@1.0.8%gcc@10.3.0~debug~pic+shared arch=linux-ubuntu20.04-zen2
env/%/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: SPEC = diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws: SPEC = libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
env/%/yfz2agazed7ohevqvnrmm7jfkmsgwjao: SPEC = gdbm@1.19%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/73t7ndb5w72hrat5hsax4caox2sgumzu: SPEC = readline@8.1%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/trvdyncxzfozxofpm3cwgq4vecpxixzs: SPEC = ncurses@6.2%gcc@10.3.0~symlinks+termlib abi=none arch=linux-ubuntu20.04-zen2
env/%/sbzszb7v557ohyd6c2ekirx2t3ctxfxp: SPEC = pkgconf@1.8.0%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/c4go4gxlcznh5p5nklpjm644epuh3pzc: SPEC = zlib@1.2.12%gcc@10.3.0+optimize+pic+shared patches=0d38234 arch=linux-ubuntu20.04-zen2
# Install dependencies
env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww: env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p: env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk
env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao: env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu
env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu: env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs
env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs: env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp
env/clean:
	rm -f -- env/env env/fetch env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
```
Then with `make -O` you get very nice orderly output when packages are built in parallel:
```console
$ make -Orecurse -j16
spack -e . install --only-concrete --only=package /c4go4gxlcznh5p5nklpjm644epuh3pzc && touch c4go4gxlcznh5p5nklpjm644epuh3pzc
==> Installing zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
...
Fetch: 0.00s. Build: 0.88s. Total: 0.88s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
spack -e . install --only-concrete --only=package /sbzszb7v557ohyd6c2ekirx2t3ctxfxp && touch sbzszb7v557ohyd6c2ekirx2t3ctxfxp
==> Installing pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
...
Fetch: 0.00s. Build: 3.96s. Total: 3.96s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
```
For Perl, at least for me, using `make -j16` versus `spack -e . install -j16` speeds up the builds from 3m32.623s to 2m22.775s, as some configure scripts run in parallel.
Another nice feature is that you can do Makefile "metaprogramming" and depend on packages built by Spack. This example fetches all sources (in parallel) first, prints a message, and only then builds packages (in parallel).
```Makefile
SPACK ?= spack
.PHONY: env
all: env
spack.lock: spack.yaml
	$(SPACK) -e . concretize -f
env.mk: spack.lock
	$(SPACK) -e . env depfile -o $@ --make-target-prefix spack
fetch: spack/fetch
	@echo Fetched all packages && touch $@
env: fetch spack/env
	@echo This executes after the environment has been installed
clean:
	rm -rf spack/ env.mk spack.lock
ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
```
* Use patches from IBM's Open CE project to enable PyTorch to build on
Power systems.
Cherry-pick a patch to allow earlier versions of PyTorch to build with
CUDA 11.4.
* Update var/spack/repos/builtin/packages/py-torch/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* octopus: adding versions up to 11.4
* octopus: add smoke tests
* octopus: add necessary flags for gcc@10
* octopus: update to compilation and dependencies
* octopus: adding new variants
* octopus: remove 'poke' (as this poke is not in spack [yet])
* octopus: allow compilation from git repo develop branch
* octopus: adapt to spack style requirements
* octopus: add maintainer
* octopus: make tests after install optional
Thank you @tldahlgren
* octopus: follow recommended practice for test input data
Move the two configuration files we use for smoke tests into `test`
subdirectory. Thanks @tldahlgren.
* Adding maintainer
with their agreement by email
* octopus: reduce duplication of flags
- part of code review
* octopus: https is preferred over http
* octopus: remove .99 from versioning information
Thanks to https://github.com/spack/spack/pull/26402, we can drop the
"2:3.99" notation when we mean all versions 2.x and 3.x
Examples: b9e72557e8 (diff-b8373d30b3a141c495c2281273ee6184fc513413142afaf2adac1f406cd6b0d7)
(from review)
* octopus: args.extend([x]) -> args.append(x)
(hint from review)
libassimp has been a dependency for all of 5.x but expressing that has
varied significantly throughout the 5.x lifecycle:
v5.0: qt3d uses internal-only libassimp
v5.5: external-only libassimp
v5.6: either internal or external libassimp via autodetection
v5.9: user-selectable internal-vs-external via -assimp
v5.14: additional qtquick3d module uses -assimp
v5.15: qtquick3d switches to the -quick3d-assimp option
* Fix current bug where the incorrect target is set up
* Add mimalloc package
* Add mimalloc as allocator option to pika
* Add mimalloc as allocator option to hpx
* Set git property globally instead of per-version in pika, hpx, and mimalloc packages
Co-authored-by: Mikael Simberg <mikael.simberg@iki.if>
Strictly, `sed` is a `build` and `run` dependency in all gpi-2
versions, whereas `gawk` is a `run` dependency for gpi-2 versions 1.4.0
and newer, and both a `build` and `run` dependency for older versions.
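In package terms that is roughly (version bounds illustrative):
```python
from spack.package import *


class Gpi2(AutotoolsPackage):  # excerpt sketch
    depends_on("sed", type=("build", "run"))
    depends_on("gawk", type="run", when="@1.4.0:")
    depends_on("gawk", type=("build", "run"), when="@:1.3")
```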
Gitlab pipelines run for spack already have other S3 storage locations
configured for storage of binaries, so this PR removes the redundant
per-pipeline mirror. As a result, the "cleanup" jobs will no longer be
generated at the end of each pipeline, removing one possible point of
pipeline failure.
The go-bootstrap package doesn't work on aarch64 platforms, so the only way
to build Go is to use gccgo.
Also, some versions of gccgo have a bug that prevents them from compiling
go (see golang/go#47771), so this patch limits gcc to versions newer than
10.4.0 or 11.3.0.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* new package: pytaridx
* fixed copyright year
* Update git link
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added type in python depends
* added pypi link
* Update package.py
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Metall package: add dependency to GCC for build test
* Package Metall: add v0.17
* Package Metall: update the package file
* Update var/spack/repos/builtin/packages/metall/package.py
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
* Metall package: add v0.18 and v0.19
* Metall Package: add v0.20
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
Starting with MPICH 3.4, we offer different datatype engine options
(dataloop or yaksa). The default is 'auto', which will choose based on
the device configuration. Starting with MPICH 4.0, building against an
external yaksa library is supported.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
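A sketch of how this can surface in the package (variant name and constraints follow the description above but are illustrative):
```python
from spack.package import *


class Mpich(AutotoolsPackage):  # excerpt sketch
    variant(
        "datatype-engine",
        default="auto",
        values=("auto", "dataloop", "yaksa"),
        multi=False,
        description="controls the datatype engine to use",
    )
    # Building against an external yaksa is supported from MPICH 4.0 on.
    depends_on("yaksa", when="@4.0: datatype-engine=yaksa")
```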
Added support for finding the OpenCV package via the find external
command. Included support for identifying variants based on available
shared libraries.
Added support to finding the OpenBLAS package via the find external
command.
Enabled packages to show that they can be discovered via the find
external command in the info message.
Updated the OpenCV and OpenBLAS packages to use the extensible search
mechanism for library extensions on multiple OS platforms.
Corrected how find externals works on Darwin for OpenCV and OpenBLAS
to accommodate that the version numbers are placed before the file
extension instead of after it, as on Linux.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
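A sketch of the detection hooks this enables (the regex and class body are illustrative, not the real package):
```python
import re

from spack.package import *


class Openblas(MakefilePackage):  # excerpt sketch
    # Library names `spack external find` should search for.
    libraries = ["libopenblas"]

    @classmethod
    def determine_version(cls, lib):
        # Linux: libopenblas.so.0.3.20 -- Darwin: libopenblas.0.3.20.dylib
        match = re.search(r"libopenblas[^\d]*([\d]+\.[\d]+\.[\d]+)", lib)
        return match.group(1) if match else None
```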
* librsb: added v1.2.0.10 (#26043)
* librsb: add v1.2.0.11/v1.3.0.0 (#28636)
* librsb: add v1.3.0.1 (#30424)
* unconflict clang
* address apparent style issues
given
https://github.com/spack/spack/runs/6248126997?check_suite_focus=true
and its excerpt
var/spack/repos/builtin/packages/librsb/package.py:27: [E265] block comment should start with '# '
var/spack/repos/builtin/packages/librsb/package.py:52: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:53: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:53: [E501] line too long (89 > 88 characters)
var/spack/repos/builtin/packages/librsb/package.py:54: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:55: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:56: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:57: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:59: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:60: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:62: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:63: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:64: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:66: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:68: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:70: [E211] whitespace before '('
var/spack/repos/builtin/packages/librsb/package.py:71: [E211] whitespace before '('
let these changes flow in.
* +asan+native: mark as conflict; thanks @tldahlgren
* +asan conflict grouped with other conflicts
As suggested as good Spack style by @tldahlgren .
- Keep long lists in alphabetical order for easier reading
- Add a placeholder for Exa.TrkX plugin since we're missing a dep on the
Spack side
- Add support for the ONNX plugin since Spack now has an ONNX runtime
package
- Use spack's pybind11 package now that we're given the option to do so
This adds the newest stable version (and removes old development
versions), a few missing dependencies and workarounds for build
failures. Without the environment variables, sysstat will try creating
directories in `/var/log`, and without `--disable-file-attr`, sysstat
will try to change file ownership.
* gdal: changing behavior of configure for +xml2 with 3.0+
* Update var/spack/repos/builtin/packages/gdal/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add checksum for py-more-itertools@8.12.0 and fix python dependency
* Add checksum for py-prettytable@3.2.0
* Package version 8.11.0 is the only version that requires python 3.6+
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add reference to python@3.6 support when 8.11
* Revert "Add reference to python@3.6 support when 8.11"
This reverts commit 0ba0002193.
* Add python 3.7: requirement
* Revert range for python 3.6
* Revert py-more-itertools modifications
Co-authored-by: aandvalenzuela <andrea.valenzuela.ramirez@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This is an amended version of https://github.com/spack/spack/pull/24894 (reverted in https://github.com/spack/spack/pull/29603). https://github.com/spack/spack/pull/24894
broke all instances of `spack external find` (namely when it is invoked without arguments/options)
because it was mandating the presence of a file which most systems would not have.
This allows `spack external find` to proceed if that file is not present and adds tests for this.
- [x] Add a test which confirms that `spack external find` successfully reads a manifest file
if present in the default manifest path
--- Original commit message ---
Adds `spack external read-cray-manifest`, which reads a json file that describes a
set of package DAGs. The parsed results are stored directly in the database. A user
can see these installed specs with `spack find` (like any installed spec). The easiest
way to use them right now as dependencies is to run
`spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described
in the file to Spack's installation DB and will also install described compilers to the
compilers configuration (the expected format of the file is described in this PR, including examples)
* Database records now may include an "origin" (the command added in this PR
registers the origin as "external-db"). In the future, it is assumed users may want
to be able to treat installs registered with this command differently (e.g. they may
want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec
is concrete
* I don't think the hashes of installed-and-concrete specs should change and this
was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied
(external specs may mention compilers that are not registered, and without this
change they would fail in `normalize` when calling `validate_or_raise`)
* it might be this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do
something like "uninstall all packages added with `spack read-external-db`")
* This is now possible with `spack uninstall --all --origin=external-db` (this will
remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
* ASP-based solver: discard unknown packages from reuse
This is an add-on to #28259 that cover for the case of
a single package.py being removed from a repository,
rather than an entire custom repository being removed.
* Add unit test
CTest determines whether to enable tests using the BUILD_TESTING variable.
This should be used by projects to conditionally enable the compilation of tests.
Spack knows which packages have to run tests and can thus automatically define this variable.
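For a CMake-based package this boils down to something like the following (hypothetical example package, but `self.define` and `self.run_tests` are the standard helpers):
```python
from spack.package import *


class MyProject(CMakePackage):  # hypothetical example
    def cmake_args(self):
        # Enable the project's test suite only when Spack will run tests.
        return [self.define("BUILD_TESTING", self.run_tests)]
```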
I tried to use --overwrite on nvhpc, but nvhpc's install size is 16GB. Seems
better to do os.rename in the same directory than to move the directory to
`/tmp`.
- [x] install --overwrite: use rename instead of tmpdir
- [x] use tempfile
By default `openmpi` needs `rsh` from `openssh`, which is a somewhat
redundant dependency for clusters using slurm. This PR adds a toggle to
allow users to disable the ssh/rsh plm altogether.
This package was not setting FFTW when +mklfft was used with +cuda.
Since both were set to 'True', the default build was not linked to
any FFTW, leading to a run time error. It seems MKL support was
conflated with alternative CPU acceleration support. This PR does the
following:
- adds the altcpu variant to specify non-GPU/CPU acceleration
- sets a conflict between +altcpu and +cuda
- sets an FFTW implementation
- sets fltk+xft when +gui to get a decent looking GUI interface
- sets tbb dependency only when +altcpu
- adds dependency on ctffind
- adds variant and dependency on motioncor2
- sets defaults for
- qsub template location
- ctffind location
- motioncor2 location
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
fixes #28259
This commit discards specs from unknown namespaces from the
ones that can be "reused" during concretization. Previously
Spack would just error out when encountering them.
1. Add version 2022.04.17 (new numbering scheme) and update mbuild resource.
2. Branch 'master' is now 'main'.
3. Old rev 10.2019.03 needs a patch for python vs python3.
The parent thread in the process stdout redirection logic on Windows
was closing a file that was being read in a child thread, which led to
error-based termination of the reader thread. This updates the
interaction to avoid the error.
* Add checksum for py-ipywidgets@7.7.0
* Correct py-widgetsnbextension and py-ipython dependencies
* Update widgetsnbextension dependency to 3.6
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Allow requirement to next versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Revert ipython dependencies
* Add widgetsnbextension@3.6.0 checksum
Co-authored-by: aandvalenzuela <andrea.valenzuela.ramirez@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* ASP-based solver: allow configuring target selection
This commit adds a new "concretizer:targets" configuration
section, and two options under it.
- "concretizer:targets:granularity" allows switching from
considering only generic targets to consider all possible
microarchitectures.
- "concretizer:targets:host_compatible" instead controls
whether we can concretize for microarchitectures that
are incompatible with the current host.
* Add documentation
* Add unit-tests
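The resulting configuration looks roughly like this (option values per the description above; treat the exact spelling as illustrative):
```yaml
concretizer:
  targets:
    # consider every known microarchitecture, not only generic families
    granularity: microarchitectures
    # permit targets the current host cannot run
    host_compatible: false
```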
* MAINT: Add a debug flag
* MAINT: Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* openmpi: always require pmix for 4:
`~pmix` is not applicable to version 4+, since it always builds a vendored
copy of pmix (currently 3.2.3).
* pmix: relax version requirements
When the version range was specified, newer versions didn't exist.
Also use normalized spack versions rather than artificial .9.9 /.0.0.
* openmpi: restrict pmix versions
pmix option isn't available for OpenMPI@1, and according to
https://github.com/open-mpi/ompi/issues/7988 , OpenMPI 4.0.1 will not
build with pmix@3.1.5.
* pmix: add newer versions
* OpenMPI: re-express conflicts/configure logic as conditional variants
This relies partly on `self.enable_or_disable` and its ilk to emit an
empty list when the variant isn't applicable.
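Sketched in package terms (the variant's version bounds are illustrative):
```python
from spack.package import *


class Openmpi(AutotoolsPackage):  # excerpt sketch
    # The variant only exists for versions where the option applies...
    variant("pmix", default=True, when="@2:3", description="Build PMIx support")

    def configure_args(self):
        # ...and with_or_without() emits an empty list when it does not.
        return self.with_or_without("pmix")
```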
* ASP-based solver: always consider version of installed packages
fixes #29201
Explicitly add facts for versions of installed software when
using the --reuse option, so that we can consider versions
that are not declared in package.py
The parser is already committing a crime of querying the database for
specs when it encounters a `/hash`. It's helpful, but unfortunately not
helpful when trying to install a specific spec in an environment by
hash. Therefore, consider the environment first, then the database.
This allows the following:
```console
$ spack -e . concretize
==> Starting concretization
==> Environment concretized in 0.27 seconds.
==> Concretized diffutils
- 7vangk4 diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
- hyb7ehx ^libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
$ spack -e . install /hyb7ehx
==> Installing libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
...
==> libiconv: Successfully installed libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
Fetch: 0.01s. Build: 17.54s. Total: 17.55s.
[+] /tmp/tmp.VpvYApofVm/store/linux-ubuntu20.04-zen2/gcc-10.3.0/libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
```
1. update for rocm 4.5 and drop support for earlier rocm.
2. no longer use mbedtls or gotcha, they are only for old revs.
3. update version requirements for dyninst and libmonitor
4. begin to deprecate old versions
Fix bug introduced in #30191. `Spec.installed` and `Spec.installed_upstream` should just return
`False` for abstract specs, as they can be called in that context.
- [x] `Spec.installed` returns `False` now instead of asserting that the `Spec`
is concrete.
- [x] `Spec.installed_upstream` returns `False` now instead of asserting that the `Spec`
is concrete.
- [x] `Spec.installed_upstream` no longer caches its result, as install status seems
like a bad thing to cache -- it can easily be invalidated. Calling code should
use transactions if there are performance issues, as with other places in Spack.
- [x] add tests for `Spec.installed` and `Spec.installed_upstream`
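A sketch of the new behavior (plain stand-in class, not Spack's real `Spec`):
```python
class SpecSketch:
    def __init__(self, concrete, record=None):
        self.concrete = concrete
        self._record = record  # stand-in for a database record

    @property
    def installed(self):
        # Abstract specs now simply report False instead of asserting.
        if not self.concrete:
            return False
        return self._record is not None and self._record.get("installed", False)


assert SpecSketch(concrete=False).installed is False
```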
This PR moves the `installed` and `installed_upstream` properties from `PackageBase` to `Spec` and is a step towards being able to reuse specs for which we don't have a `package.py` available. It _should_ be sufficient to complete the concretization step and see the spec in the concretized DAG.
To fully reuse a spec without a package.py though we need a way to serialize enough data to reconstruct the results of calls to:
- `Spec.libs`, `Spec.headers` and `Spec.command`
- `Package.setup_dependent_*_environment` and `Package.setup_run_environment`
- [x] Add stub methods to packages with warnings
- [x] Add a missing "root=False" in cmd/fetch.py
- [x] Assert that a spec is concrete before checking installation status
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror
To register the mirror on the platform where it's supposed to be used run the following command(s):
Notice that this function, if you define it, requires a result object (generated by
``run()``), a monitor (if you want to send), and a boolean ``overwrite`` to be used
to check if a result exists first, and not write to it if the result exists and
overwrite is False. Also notice that since we already saved these files to the analyzer metadata folder, we return early if a monitor isn't defined, because this function serves to send results to the monitor. If you haven't saved anything to the analyzer metadata folder
yet, you might want to do that here. You should also use ``tty.info`` to give
the user a message of "Writing result to $DIRNAME."
.. _writing-commands:
@@ -699,23 +448,6 @@ with a hook, and this is the purpose of this particular hook. Akin to
``on_phase_success`` we require the same variables - the package that failed,
the name of the phase, and the log file where we might find errors.
"""""""""""""""""""""""""""""""""
``on_analyzer_save(pkg, result)``
"""""""""""""""""""""""""""""""""
After an analyzer has saved some result for a package, this hook is called,
and it provides the package that we just ran the analysis for, along with
the loaded result. Typically, a result is structured to have the name
of the analyzer as key, and the result object that is defined in detail in
:ref:`analyzer_run_function`.
.. code-block:: python

   def on_analyzer_save(pkg, result):
       """given a package and a result..."""
       print('Do something extra with a package analysis result here')
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse2"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse2"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse2"
}
]
}
},
@@ -1192,6 +1439,20 @@
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
]
}
},
@@ -1246,6 +1507,20 @@
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse3"
}
]
}
},
@@ -1301,6 +1576,20 @@
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse4.2"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse4.2"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags":"-msse4.2"
}
]
}
},
@@ -1360,6 +1649,22 @@
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
]
}
},
@@ -1422,6 +1727,22 @@
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
]
}
},
@@ -1485,6 +1806,22 @@
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
]
}
},
@@ -1543,6 +1880,30 @@
"name":"znver3",
"flags":"-march={name} -mtune={name}"
}
],
"intel":[
{
"versions":"16.0:",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"oneapi":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"name":"core-avx2",
"flags":"-march={name} -mtune={name}"
}
],
"dpcpp":[
{
"versions":":",
"warnings":"Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
# We want to present them to the user as simple key: values
intersect = sorted(a_facts.intersection(b_facts))