* fix remaining flake8 errors
* imports: sort imports everywhere in Spack
We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.
This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
* Fix compiler test
Use `self.spec.satisfies` on compiler to determine if a flag should be
applied or not. This approach avoids issues with the strings `gcc`
or `clang` appearing in the full path to the compiler executables, as
happens with spack-installed compilers (e.g. `nvhpc%gcc`).
* Limit compiler name search to last path component
@skosukhin pointed out that the cflag modification should happen for any
clang or gcc compiler, regardless of what compiler spec provides them.
This commit reverts to searching for a compiler name containing "gcc"
or "clang", but limits the search to the last path component, which
avoids matching spack-installed compilers built with gcc (e.g.
`nvhpc%gcc`), which will have "gcc" in the compiler path.
* Use `os.path` rather than `pathlib`
Co-authored-by: Paul Henning <phenning@lanl.gov>
`dateutil.parser` was an optional dependency for CVS tests. It was failing on macOS
because the dateutil types were not being installed, and mypy was failing *even when the
CVS tests were skipped*. This seems like it was an oversight on macOS --
`types-dateutil-parser` was not installed there, though it was on Linux unit tests.
It takes 6 lines of YAML and some weird test-skipping logic to get `python-dateutil` and
`types-python-dateutil` installed in all the tests where we need them, but it only takes
4 lines of code to write the date parser we need for CVS, so I just did that instead.
Note that CVS date format can vary from system to system, but it seems like it's always
pretty similar for the parts we care about.
- [x] Replace dateutil.parser with a simpler date regex
- [x] Lose the dependency on `dateutil.parser`
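For reference, a minimal sketch of the kind of regex-based parser that can stand in for `dateutil.parser` here (the exact pattern used may differ):
```python
import re

def parse_cvs_date(line):
    """Return (year, month, day) from a CVS log date like '2021-04-22 09:15:27 +0000'."""
    match = re.search(r'(\d{4})-(\d{2})-(\d{2})', line)
    return tuple(int(part) for part in match.groups()) if match else None
```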
Previous tests of `spack style` didn't really run the tools --
they just ensured that the commands worked enough to get coverage.
This adds several real tests and ensures that we hit the corner
cases in `spack style`. This also tests success as well as failure
cases.
This consolidates code across tools in `spack style` so that each
`run_<tool>` function can be called indirectly through a dictionary
of handlers, and so that checks like finding the executable for the
tool can be shared across commands.
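A hedged sketch of the registration pattern (names and flags here are illustrative, not the exact `spack style` implementation):
```python
import shutil
import subprocess

tools = {}  # tool name -> run_<tool> handler; registration order defines tool order

def tool(name):
    """Decorator that registers a run_<tool> function under a common interface."""
    def register(fun):
        tools[name] = fun
        return fun
    return register

@tool("isort")
def run_isort(isort_cmd, file_list):
    # --check-only reports problems but does not rewrite files
    return subprocess.call([isort_cmd, "--check-only"] + list(file_list))

def run_style_checks(file_list):
    """Shared driver: executable lookup and dispatch live in one place."""
    returncode = 0
    for name, run_fn in tools.items():
        cmd = shutil.which(name)
        if cmd is None:
            print("tool not found: %s" % name)
            continue
        returncode |= run_fn(cmd, file_list)
    return returncode
```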
- [x] rework `spack style` to use decorators to register tools
- [x] define tool order in one place in `spack style`
- [x] fix python 2/3 issues to get `isort` checks working
- [x] make isort error regex more robust across versions
- [x] remove unused output option
- [x] change vestigial `TRAVIS_BRANCH` to `GITHUB_BASE_REF`
- [x] update completion
This PR configures the spack docbook packages
- docbook-xsl
- docbook-xml
The public entities are now mapped to the locally installed files of the
respective packages. The example catalogs are left in place and
XML_CATALOG_FILES points to the newly created catalogs.
Perl keeps copies of the bzip2 and zlib source code in its own source
tree and by default uses them in favor of outside libraries. Instead,
put these dependencies under control of spack and tell perl to use the
spack-built versions.
* py-keyring: fix installation on linux
* Update var/spack/repos/builtin/packages/py-keyring/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-keyring/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We should not fail the generate stage simply due to the presence of
a broken spec somewhere in the DAG. Only fail if the known broken
spec needs to be rebuilt.
This PR adds a context manager that permits grouping the common part of a `when=` argument and adding it to the context:
```python
class Gcc(AutotoolsPackage):
with when('+nvptx'):
depends_on('cuda')
conflicts('@:6', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada')
conflicts('languages=brig')
conflicts('languages=go')
```
The above snippet is equivalent to:
```python
class Gcc(AutotoolsPackage):
depends_on('cuda', when='+nvptx')
conflicts('@:6', when='+nvptx', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada', when='+nvptx')
conflicts('languages=brig', when='+nvptx')
conflicts('languages=go', when='+nvptx')
```
which requires repeating the `when='+nvptx'` argument. The context manager might help improve readability and permits grouping together directives related to the same semantic aspect (e.g. all the directives needed to model the behavior of `gcc` when `+nvptx` is active).
Modifications:
- [x] Added a `when` context manager to be used with package directives
- [x] Add unit tests and documentation for the new feature
- [x] Modified `cp2k` and `gcc` to show the use of the context manager
I installed curl on my mac and it picked up a homebrew (I think?)
installation of gsasl. A later system update broke git because of the
implicitly added dependency. Explicitly disabling libraries that *might*
exist on the system is the safe approach here.
```
dyld: Library not loaded: /usr/local/opt/gsasl/lib/libgsasl.7.dylib
Referenced from: /rnsdhpc/code/spack/opt/spack/apple-clang/curl/gag5v3c/lib/libcurl.4.dylib
Reason: image not found
error: git-remote-https died of signal 6
```
* Added Perl workaround for CUDA <= 8
* Re-wrapped comment
* Proofreading corrections
* Added a reference
* Do not override Perl include path
* Retrieve shell once
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* trilinos: add teko conflict
* trilinos: improve gotype variant
Instead of 'none' and 'long' typically being the same (but not for older
trilinos versions), add an explicit 'all' variant that only works for
older trilinos which supports multiple simultaneous tpetra
instantiations.
* trilinos: add self as maintainer
* trilinos: disable vendored gtest by default
This changes several conflicting variants to a single
multi-value variant, and uses conflicts instead of raising InstallError.
(With clingo, requesting +gui automatically selects features=huge!)
I have also rearranged the dependencies for clarity and simplified the
configure args.
ci: only write to broken-specs list on SpackError
Only write to the broken-specs list when `spack install` raises a SpackError,
instead of writing to this list unnecessarily when infrastructure-related problems
prevent a develop job from completing successfully.
If two Specs have the same hash (and prefix) but are not equal, Spack
originally had logic to detect this and raise an error (since both
cannot be installed in the same place). Recently this has eroded and
the check no longer works; moreover, when defining projections (which
may truncate the hash or other distinguishing properties from the
prefix) Spack was also failing to detect collisions (in both of these
cases, Spack would overwrite the old prefix with the new Spec).
This PR maintains a list of all "taken" prefixes: if a hash is not
registered (i.e. recorded as installed in the database) but the prefix
is occupied, that is a collision. This can detect collisions created
by defining projections (specifically when they omit the hash).
The PR does not detect collisions where specs have the same hash
(and prefix) but are not equal.
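A minimal sketch of the idea; all names here are illustrative:
```python
taken_prefixes = set()

def detect_collision(spec, installed_hashes):
    """A hash not recorded in the DB whose prefix is occupied is a collision."""
    if spec.dag_hash() not in installed_hashes and spec.prefix in taken_prefixes:
        raise Exception("prefix collision at {0}".format(spec.prefix))
    taken_prefixes.add(spec.prefix)
```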
Fix the syntax of the conflict between numpy 1.21.0 and gcc11 so that the clingo
concretizer recognizes it.
In addition the upstream master branch was renamed to main.
* Switch hdf5 package from autotools to cmake.
* Add variant for building with zlib, default to ON.
* Update for format requirements.
* Format change.
* Fix breakage from last merge from develop.
Switch szip to use libaec (unrestricted encryption).
Remove 'static' variant: static libs will only be installed when
~shared.
* Improve args based on suggestions from pull request.
* Update code URL to github.com
Add/modify 4 depends_on lines to fix running "spack graph --deptype=link hdf5".
* Remove trailing whitespace.
* Remove dependencies added solely to make "spack graph --deptype=link" work.
* Add new version HDF5 1.8.22.
* Remove unnecessary java_check.
* Fix whitespace for style checks.
* Reverted zlib version dependency to 1.1.2:.
zlib variant removed.
api version default renamed "default".
* Remove blank line.
* Whitespace corrections.
* Removed unnecessary 'debug' variant.
* Fix typo in version number in conflict for '+szip'.
* Set default for tools variant to True.
Remove patch functions dependent on 'libtool' file that cmake doesn't
produce.
* Remove line to set ONLY_SHARED_LIBS to true.
Add post_install code to install only one version of tools with shared
linkage and original tool names.
* Remove trailing white space and import of glob package not used.
* Leave BUILD_TESTING set to default which is ON.
* Remove post_install code to install only one version of tools because
some dependent packages running tests in e4s testing are using
h5diff-shared. Keep both tools versions for now.
* No longer need to import os.
Instead of refusing to build +mpi with gcc10, add what I guess is now
the standard workaround, ie., `-fallow-argument-mismatch`.
Getting this into pfunit's cmake-based but kinda non-standard build is
a bit ugly, but you gotta do what you gotta do...
Version 1.17 of DD4hep was renamed from "01-17-00" to "01-17", in line
with the naming conventions of previous releases. Since release archives
contain a subdirectory with the version string in it, this changes the contents
of the tarball ever so slightly, so the SHA-256 checksum must change as well.
Fix url to find newer versions, add newest version 4.0.2 and add
variants for
- cxxstd: To use a specific C++ standard
- static: Enable or disable build of static libraries
- boost: Boost support
- sqlite: SQLite support
- postgresql: PostgreSQL support
When a few packages are loaded, installing go-bootstrap will fail
because the `PATH` variable is truncated at 4096 bytes. Increase the
limit to 128 KiB to make longer paths fit.
1. "+simplex" conflicts with "dealii@:9.2" [The interface to simplex is supported from version 9.3.0 onwards. Please explicitly disable this variant via ~simplex]
2. "+arborx" conflicts with "dealii@:9.2" [The interface to arborx is supported from version 9.3.0 onwards. Please explicitly disable this variant via ~arborx]
Prior to any Spack build, Spack modifies PATH etc. to help the build
find the dependencies it needs. It also allows any package to define
custom environment modifications (and furthermore a package can
specify environment modifications to apply when it is used as a
dependency). If an external package defines custom environment
modifications that alter PATH, and the external package is in a merged
or system prefix, then that prefix could "override" the Spack-built
packages.
This commit reorders environment modifications so that PrependPath
actions which expose Spack-built packages override PrependPath actions
for custom environment modifications of external packages.
In more detail, the original order of environment modifications is:
* Modules
* Compiler flag variables
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH for dependencies
* Custom package.py modifications in the following order:
  * dependencies
  * root
This commit changes the order:
* Modules
* Compiler flag variables
* For each external dependency
  * PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
  * Custom modifications
* For each Spack-built dependency
  * PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
  * Custom modifications
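A toy illustration of why the ordering matters: with PrependPath, the last modification applied ends up first on the path, so applying the Spack-built prepends after the external ones lets Spack-built packages win:
```python
import os

def prepend_path(env, var, value):
    env[var] = value + os.pathsep + env.get(var, "")

env = {"PATH": "/usr/bin"}
prepend_path(env, "PATH", "/opt/external/bin")       # external package
prepend_path(env, "PATH", "/spack/opt/pkg-1.0/bin")  # Spack-built package
print(env["PATH"])  # /spack/opt/pkg-1.0/bin:/opt/external/bin:/usr/bin
```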
Spack pipelines need to take specific actions internally that depend
on whether the pipeline is being run on a PR to spack or a merge to
the develop branch. Pipelines can also run in other repositories,
which represents other possible use cases than just the two mentioned
above. This PR creates a "SPACK_PIPELINE_TYPE" gitlab variable which
is propagated to rebuild jobs, and is also used internally to determine
which pipeline-specific tasks to run.
One goal of the PR is to fix an issue where rebuild jobs which failed on
develop pipelines did not properly report the broken full hash to the
"broken-specs-url".
* Add Externally Findable section to info command
* Use comma delimited detection attributes in addition to boolean value
* Unit test externally detectable part of spack info
yes I know this name isn't popular but that's the way it is right now.
master and the upcoming v5.0.x release branch use git submodules.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
* [py-xxhash] created template
* [py-xxhash] working on dependencies
* [py-xxhash] set version for xxhash
* [py-xxhash] Final cleanup
- added homepage
- added description
- removed fixmes
* Force the Python interpreter with an env variable
This commit forces the Python interpreter with an
environment variable, to ensure that the Python set
by the "setup-python" action is the one being used.
Due to the policy adopted by Spack to prefer python3
over python we may end up picking a Python 3.X
interpreter where Python 2.7 was meant to be used.
* Revert "Update conftest.py (#24473)"
This reverts commit 477c8ce820.
* Make python-dateutil a soft dependency for unit tests
Before #23212 people could clone spack and run
```
spack unit-tests
```
while now this is not possible, since python-dateutil is
a required but not vendored dependency. This change makes
it not a hard requirement, i.e. it will be used if found
in the current interpreter.
* Workaround mypy complaint
This commit fixes a subtle bug that may occur when
a package is a "possible_provider" of a virtual but
no "provides_virtual" can be deduced. In that case
the cardinality constraint on "provides_virtual"
may arbitrarily assign a package the role of provider
even if the constraints for it to be one are not fulfilled.
The fix reworks the logic around three concepts:
- "possible_provider": a package may provide a virtual if some constraints are met
- "provides_virtual": a package meet the constraints to provide a virtual
- "provider": a package selected to provide a virtual
Spack packages can now fetch versions from CVS repositories. Note
this fetch mechanism is unsafe unless using :extssh:. Most public
CVS repositories use an insecure protocol implemented as part of CVS.
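A hedged sketch of what a CVS-fetched version might look like in a package (the repository URL and module are made up; the `%module=` convention follows Spack's fetching docs):
```python
from spack import *

class Example(Package):
    """Hypothetical package fetched over CVS."""
    version('2021.4.22',
            cvs=':extssh:user@cvs.example.com:/cvsroot%module=example',
            date='2021-04-22')
```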
Here we are adding an install_times.json into the spack install metadata folder.
We record a total, global time, along with the times for each phase. The type
of phase or install start/end is included (e.g., build or fail).
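A plausible shape for the recorded data; the field names below are illustrative guesses, not the PR's actual schema:
```python
# Illustrative only: the real keys in install_times.json may differ.
install_times = {
    "total": {"seconds": 84.2},
    "phases": [
        {"name": "configure", "seconds": 30.5, "status": "build"},
        {"name": "build", "seconds": 45.0, "status": "build"},
        {"name": "install", "seconds": 8.7, "status": "build"},
    ],
}
```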
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The original implementation of `flag_handler` searched the
`self.compiler.cc` string for `clang` or `gcc` in order to add a flag
for those compilers. This approach fails when using a spack-installed
compiler that was itself built with gcc or clang, as those strings will
appear in the fully-qualified compiler executable paths. This commit
switches to searching for `%gcc` or `%clang` in `self.spec`.
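A hedged sketch of the new check (the package context and the flag itself are illustrative):
```python
def flag_handler(self, name, flags):
    # apply the workaround flag only when the compiler spec says gcc/clang,
    # not when 'gcc' merely appears in the compiler's install path
    if name == 'cflags' and (self.spec.satisfies('%gcc') or
                             self.spec.satisfies('%clang')):
        flags.append('-fcommon')  # hypothetical compiler-specific flag
    return (flags, None, None)
```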
Co-authored-by: Paul Henning <phenning@lanl.gov>
the 4.1.1 release has fixes for problems that kept 4.1.0 from
being the default open mpi version to build using spack.
related to #24396
Signed-off-by: Howard Pritchard <hppritcha@gmail.com>
* remove blueos check on cuda variant, fix typo
* restore necessary compiler guard
* remove axom+cuda from testing because it only partially works outside ppc systems
This PR does the following:
- adds version corresponding to commit at 08/03/2020
- adds missing get_DE_events.py script
- adds dependencies needed by get_DE_events.py
- removes REDItoolDenovo.py.patch and python2to3.patch in favor of
running 2to3 and reindent pre-build
- adds batch_sort.patch to handle differences in string/char handling
between python2 and python3
- adds a variant for the Nature Protocol
- adds dependencies for the nature_protocol variant
- adds myself as maintainer
This PR adds a new version of reditools from git.
This PR fixes a couple of issues with the opencv package, mostly in
relation to cuda. This is only focused on cuda, not any of the other
variants.
- Added versions to the contrib_vers list. Added for all that can be
retrieved from github. The one for the latest version was missing.
- Added a cmake patch for v3.2.0.
- Deprecated versions 3.1.0 and 3.2.0 as neither of those could be
built, with or without cuda.
- Adjusted constraints on applying initial cmake patch.
- Added cudnn dependency when +cuda.
- Set constraints for cudnn and cuda for older versions of opencv.
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build.
In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).
Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
Building magma has been failing consistently and is currently
blocking PRs from being merged. Disable that spec while we
investigate the failure and work on a fix.
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:
### Docker
Docker works by way of BuildKit and being able to specify --secret. What this means is that you can prefix a line with a mount of type secret as follows:
```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Where the id for one or more secrets corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined I would export them on my host and do:
```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
```
And when we add `env` to the secret definition that tells the build to look for the secret with id "st" in the environment variable `SPACKMON_TOKEN` for example.
If the user is building locally with a local spack monitor, we also need to set the `--network` to be the host, otherwise you can't connect to it (a la isolation of course.)
### Singularity
Singularity doesn't have as nice an ability to clearly specify secrets, so (hoping this eventually gets implemented) what I'm doing now is providing the user instructions to write the credentials to a file, add it to the container to source, and remove when done.
### Tags
Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):
```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update the PR here to not check if the attribute is there (it will be) or open another one in the case this PR is already merged.
Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing so I had to add it here).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Inline codecov annotations make the code hard to read, and they add annotations
in files that seemingly have nothing to do with the PR. Sadly, they add a whole
lot of noise and not a lot of benefit over looking at the PR on codecov. We
should just have people look at the coverage on codecov itself.
* New package: py-pyusb
Change-Id: I606127858b961b5841c60befc5a8353df0f9f38c
* fixup dependencies
Change-Id: I0c9b0ccee693d2c4e847717950d4ce64cb319794
* fixup 2
Change-Id: Ibaccbdafd865e363564f491054e4e4ceb778727b
* Update var/spack/repos/builtin/packages/py-pyusb/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
A patch no longer applies cleanly as it's fixed in v4.0.6 - fix it here
==> Installing openmpi-4.0.6-in47f6rxspbnyibkdx6x4ekg6piujobd
==> No binary for openmpi-4.0.6-in47f6rxspbnyibkdx6x4ekg6piujobd found: installing from source
==> Fetching https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.6.tar.bz2
Reversed (or previously applied) patch detected! Assume -R? [n]
Apply anyway? [n]
2 out of 2 hunks ignored -- saving rejects to file opal/include/opal/sys/gcc_builtin/atomic.h.rej
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
When running executables from build dependencies, we want to avoid
`LD_PRELOAD` and `DYLD_INSERT_LIBRARIES` mixing any of their shared
libs built by Spack with system libraries.
The Z3 solver provides a Z3Config.cmake file when built using the CMake build
system. This submission changes the package build system to inherit the
CMakePackage type. In addition to changing the build system, this submission:
- Adds the GMP variant
- Removes v4.4.0 and v4.4.1 as CMake was implemented starting with v4.5.0
This adds a package for `irep`, a tool for reading `lua` input decks from
Fortran, C, and C++.
`irep` can be built with either `lua` or `luajit`. To address this, we also add
a virtual package for lua called `lua-lang`. `luajit` isn't, by default, a drop-in
replacement for `lua`, but we add a `+lualinks` variant to it that adds symlinks
that make it behave like `lua@5.1`. With this variant enabled, it provides the
`lua-lang` virtual. `lua` always provides `lua-lang`.
- [x] add `irep` package
- [x] add `+lualinks` variant to `lua-luajit`
- [x] create `lua-lang` virtual, provided by `lua` and `luajit+lualinks`
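A hedged sketch of the virtual-package wiring (the directives are standard Spack API; the details are simplified from the real packages):
```python
from spack import *

class Lua(Package):
    provides('lua-lang')  # lua always provides the virtual

class LuaLuajit(Package):
    variant('lualinks', default=False,
            description='add symlinks to make luajit a drop-in lua replacement')
    provides('lua-lang', when='+lualinks')
```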
Co-authored-by: Kayla Richarda Butler <butler59@quartz1148.llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* libdrm: fix one configure error and require libpciaccess
Failure with `LIBS`: the linker can't find `-lrt` so configure fails on
darwin-bigsur %apple-clang@12.0.5
```
>> 22 configure: error: in `/private/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-s
tage/spack-stage-libdrm-2.4.100-ofhk6m25n2pi427ihnxmvjkfmgyzlrqc/spack-src':
>> 23 configure: error: C compiler cannot create executables
24 See `config.log' for more details
See build log for details:
/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-stage/spack-stage-libdrm-2.4.100-ofhk6m25n2pi427ihnxmvjkfmgyzlrqc/spack-build-out.txt
```
* libpciaccess: Mark conflict with darwin
```
make[2]: *** [common_init.lo] Error 1
make[2]: *** Waiting for unfinished jobs....
common_interface.c:75:10: fatal error: 'sys/endian.h' file not found
^~~~~~~~~~~~~~
```
and
```
common_init.c:73:3: error: "Unsupported OS"
```
and others
* extending example for buildcaches
I was attempting to create a local build cache from a directory, and I found the
docs for both buildcaches and mirrors, but did not gather from them that the
url variable could be a local filesystem path. I am extending the docs for
buildcaches with an example of creating and interacting with one on the filesystem
because I suspect other users will run into this need and possibly not find what
they are looking for.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* adding as follows to spack mirror list
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* update url, add all new versions and fix installation
* add wxparaver package and set the old paraver package as deprecated
* remove update of deprecated package
* remove old version from new wxparaver
* Update url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
It is currently kind of confusing to the reader to distinguish spack buildcache install
and spack install, and it is not clear how to use a build cache once a mirror is added.
Hopefully this little bit of description can help (and I hope I got it right!)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Use the 'version_yearlike' attribute instead of 'version' to
check if the SPACK_COMPILER_EXTRA_RPATHS should be set to include
the built-in 'libfabrics'.
When using the bare 'version', the comparison is wrong when
building with 'intel-parallel-studio', which has the version
format '<edition>.YYYY.Nupdate', due to the leading '<edition>'.
xfsprogs currently fails to install with the error message:
FATAL ERROR: could not find a valid ini.h header.
Adding this package, libinih, and including it as
a dependency for xfsprogs seems to fix the issue. It could be
that we only need to add it for newer versions (if it worked before)
and maybe a maintainer can comment on that.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Pagination on GitHub prevents Spack from easily parsing all available
versions. Also, due to the recent migration to GitHub, tarballs for
versions up to 3.12.13 have been regenerated, changing the hash.
The current URL will apparently be supported, so we keep it, and give
the alternative one as a comment.
This should fix #24278
$INSTALLDIR/lib/python3.7/site-packages/IPython/core/events.py contains an
import from backcall even in @7.3.0, so the dependency on py-backcall needs
to start earlier.
Restrict poppler version for texlive to poppler@:0.84
Should fix #19946
See also https://github.com/NixOS/nixpkgs/issues/79170
Looks like poppler@0.84 upgraded their header files to use the C++ cstdio
instead of the C stdio.h. Since TeX is using C, not C++, this causes problems.
* zfp: several package improvements
- add variants for build targets, language bindings, backends
- ensure selected variants are compatible with zfp version
- point to GitHub (not LLNL) tarballs
- add dependencies
- update link to homepage
- add maintainers
* zfp: address suggestions by Spack team
- use conflicts() instead of raising exceptions
- use define() and define_from_variant() where applicable
* Apply suggestions from code review
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Fix ZFP OpenMP build.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* [py-keyboard] created template
* [py-keyboard]
- updated homepage
- added dependency for OSX
- added description
- removed fixmes
* [py-keyboard] Until py-pyobjc can be created, specifying conflict with platform=darwin
* [py-keyboard] is verb
* Update of Flecsi Spackage
Update of flecsi spackage to reconcile differences between flecsi@1:1.9
and flecsi@2: for future support purposes
* Removing Unnecessary Conditional
Removing unused conditional. Initially the plan was to switch based on
version in `cmake_args` but this was not necessary as build system
variable names remained mostly the same and conflicts prevent the rest.
For the most part, if a variant is there it does not need to check
against what version of the code is being built.
* Updated CI To Reconcile Flecsi Changes
Updated CI to target flecsi@1.4.2 which best matches the previous
release version and reconciled change in variant name
The common.inc script in TBB uses the environment variable 'OS' to determine
the platform it's on. On Linux, this is normally empty and TBB falls
back to uname. But some systems set this to 'CentOS Linux 8' which is
descriptive, but not exactly what common.inc is looking for.
Instead, take the value from python and explicitly set OS to what TBB
expects to avoid this problem.
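A hedged sketch of the approach (assuming TBB expects the same value `uname` would report):
```python
import platform

# inside the tbb package class
def setup_build_environment(self, env):
    # common.inc falls back to `uname` when OS is empty, so pin OS to the
    # same value instead of whatever the distro may have put there
    env.set('OS', platform.system())
```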
Since the two packages share a common history, the installation
procedure has been factored into a common base class.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* Tcl: fix TCLLIBPATH
* Fix TCL|TK|TIX_LIBRARY paths
* Fix TCL_LIBRARY, no tcl8.6 subdir
* Don't rely on os.listdir sorting
For tcl and tk, we also install the source directory, so there are
two init.tcl and tk.tcl locations. We want the one in lib/lib64,
which should come before the one in share.
* Add more patches
* Fix dylib on macOS
* Tk: add smoke tests
* Tix: add smoke test
Extracting specs from the result of a solve has been factored
out into a method on the asp.Result class. The method accounts for
virtual specs being passed as initial requests.
Minimizing compiler mismatches in the DAG and preferring newer
versions of packages are now higher priority than trying to use as
many default values as possible in multi-valued variants.
According to the docs, r is needed for plotting, but plotting is
untested. In addition, the specific version requirement of java for gatk
could lead to multiple installations of r being triggered in an
environment. That might cause people to have to be deliberate about
java in a deployment. All in all, it seems that r is better as a
variant for gatk.
* Set job_id for SGE in darshan-runtime package
* Use a multi value variant for scheduler
Only one scheduler can be selected at a time, so define scheduler as a
multi-value variant with multi=False.
* hdf-eos5: Fix issue when linking against hdf5+szip (#23411)
Should fix issue #23411 when linking against hdf5+szip
Also fix bug if hdf5 does not depend on zlib
Reluctantly added payerle as a maintainer
Added version 1.1.13
Fixed versions for dependencies based on README.md for package
In particular:
* versions 1.1.x require python@3, at least 3.4 and for 1.1.13 at least 3.6
* py-osqp had been pinned to version 0.4.1, but README.md either shows
no version restriction, or 0.4.1 and higher
* @1.1.13 requires at least 1.1.6 of py-scs
* I am assuming since 1.1.x is python@3 only, py-six is no longer required
(it was not explicitly showing up in README.md for these versions)
Since the module roots were removed from the config file,
`--print-shell-vars` cannot find the module roots anymore. Fix it by
using the new `root_path` function. Moreover, the roots for lmod and
modules seem to have been flipped by accident.
* add versions 2.2.0.2 and 2.2.1.1
* Add maintainer
Added Ishaan as additional maintainer as he is also maintainer of the Python bindings
* add new major precice version as dependency
The VALID_VERSION regex didn't check that the version string was
completely valid, only that a prefix of it was. This change ensures
the entire string represents a valid version.
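The essence of the fix, illustrated with a simplified character class (Spack's real pattern is more elaborate):
```python
import re

# Unanchored: accepts '1.2.3 oops' because a valid *prefix* matches.
re.match(r'[a-zA-Z0-9_.-]+', '1.2.3 oops')    # -> match object

# Anchored (or use re.fullmatch): the entire string must be a version.
re.match(r'[a-zA-Z0-9_.-]+\Z', '1.2.3 oops')  # -> None
```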
This makes a few related changes.
1. Make the SEGMENT_REGEX identify *which* arm it matches by what groups
are populated, including whether it's a string or int component or a
separator all at once.
2. Use the updated regex to parse the input once with a findall rather
than twice, once with findall and once with split, since the version
components and separators can be distinguished by their group status.
3. Rather than "convert to int, on exception stay string," if the int
group is set then convert to int, if not then construct an instance
of the VersionStrComponent class
4. VersionStrComponent now implements all of the special string
comparison logic as part of its __lt__ and __eq__ methods to deal
with infinity versions and also overloads comparison with integers.
5. Version now uses direct tuple comparison since it has no per-element
special logic outside the VersionStrComponent class.
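A simplified illustration of the group-based parse (Spack's real regex and VersionStrComponent are more involved):
```python
import re

SEGMENT_REGEX = re.compile(
    r'(?:(?P<num>[0-9]+)|(?P<str>[a-zA-Z]+))(?P<sep>[_.-]*)')

def parse(version_string):
    segments = []
    for m in SEGMENT_REGEX.finditer(version_string):
        # which group matched tells int from string; no try/except needed
        segments.append(int(m.group('num')) if m.group('num') else m.group('str'))
    return tuple(segments)

parse('1.2-alpha3')  # -> (1, 2, 'alpha', 3)
```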
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* New Package:py-haphpipe@1.0.3
* removed llvm restrict. & changed freebayes
* Style fix
* Removed pip, wheel, added url for deps list
* used proper gsutil naming
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* url src for deps, samtools fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* petsc: add hip variant
* libceed: add 0.8, disable occa by default, and let autodetect AVX
Disabling OCCA because backend updates did not make this release and
there are some known bugs so most users won't have reason to use OCCA.
https://github.com/CEED/libCEED/pull/688
* WIP: ceed: 4.0 release
* MFEM package updates (#19748)
* MFEM package updates
* mfem: flake8
* [mfem] Various fixes and tweaks.
[arpack-ng] Add a patch to fix building with IBM XL Fortran.
[libceed] Fix building with IBM XL C/C++.
[pumi] Add C++11 flag for version 2.2.3.
* [mfem] Fix the shared CUDA build.
Reported by: @MPhysXDev
* [mfem] Fix a TODO item
* [mfem] Tweak the AmgX dependencies
* [suite-sparse] Fix the version of the mpfr dependency
* MFEM: add initial HIP support using the ROCmPackage.
* MFEM: add 'slepc' variant.
* MFEM: update the patch for v4.2 for SLEPc.
* mfem: apply 'mfem-4.2-slepc.patch' just to v4.2.
* ceed: apply 'spack style'
* [mfem] Add a patch for mfem v4.2 to work with petsc v3.15.0.
[laghos] Add laghos version 3.1 based on the latest commit in
the repository; this version works with mfem v4.2.
[ceed] For ceed v4.0 use laghos v3.1.
* [libceed] Explicitly set 'CC_VENDOR=icc' when using 'intel'
compiler.
* [mfem] Allow pumi >= 2.2.3 with mfem >= 4.2.0.
[ceed] Use pumi v2.2.5 with ceed v4.0.0.
* [ceed] Explicitly use occa v1.1.0 with ceed v4.0.0.
Use mfem@4.2.0+rocm with ceed@4.0.0+mfem+hip.
* [ceed] Add NekRS v21 as a dependency for ceed v4.0.0.
* [ceed] Fix NekRS version: 21 --> 21.0
* [ceed] Propagate +cuda variant to petsc for ceed v4.0.
* [mfem] Propagate '+rocm' variant to some other packages.
* [ceed] Use +rocm variant of nekrs instead of +hip.
* [ceed] Do not enable magma with ceed@4.0.0+hip.
* [libceed] Fix hip build with libceed@0.8.
* [laghos] For v3.1, use the release .tar.gz file instead of commit.
* Remove cuda & hip variants as they are inherited
* [ceed] Remove comments and FIXMEs about 'magma+hip'.
* [ceed] [libceed] Remove TODOs about occa + hip.
* libceed: use ROCmPackage and +rocm
* petsc: use ROCmPackage for HIP
* libceed, petsc: use CudaPackage
* ceed: forward cuda_arch and amdgpu_target
* [mfem] Use Spack's CudaPackage as a base class; as a result,
'cuda_arch' values should not include the 'sm_' prefix.
Also, propagate 'cuda_arch' and 'amdgpu_target' variants
to enabled dependencies.
* petsc: variant is +rocm, package name is hip
Co-authored-by: Jed Brown <jed@jedbrown.org>
Co-authored-by: Thilina Rathnayake <thilinarmtb@gmail.com>
Passing absolute paths from pipeline generate job to downstream rebuild jobs
causes problems when the CI_PROJECT_DIR is not the same for the generate and
rebuild jobs. This has happened, for example, when gitlab checks out the
project into a runner-specific directory and different runners are chosen
for the generate and rebuild jobs.
* ensure that the stage root exists for `spack stage -p <PATH>`
* add test to verify `spack stage -p <PATH>` works!
* move out shared tmp staging path setup to a fixture to fix the test
* Simplified the spack.util.gpg implementation
All the classes defined in this Python module,
which were previously used to construct singleton
instances, have been removed in favor of four
global variables. These variables are initialized
lazily, like before.
The API of the module is unchanged for the
most part. A few tests have been modified to use
the new global names.
1. add version 2021.05.15.
2. add patch to build old revs with gcc 11.x; version 2021.05.15
already has the patch integrated, fixes #23667.
3. add variant +debug to build unoptimized, debug version.
4. add variant +viewer to include hpcviewer and add viewer path to
hpctoolkit module.
5. add dependency on memkind to workaround a glibc problem found on
some Cray platforms.
For me the buildcache force overwrite option does not work. It tries to
delete a file, but errors with a `KeyError`, apparently because the
leading / has to be removed.
* util.tty.log: read up to 100 lines if ready
Rework to read up to 100 lines from the captured stdin as long as data
is ready to be read immediately. Adds a helper function to poll with
`select` for ready data. This showed a roughly 5-10x perf improvement
for high-rate writes through the logger with relatively short lines.
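A hedged sketch of the readiness helper (name and default timeout are assumptions):
```python
import select

def _input_available(fd, timeout=0):
    """True if fd has data that can be read right now, without blocking."""
    readable, _, _ = select.select([fd], [], [], timeout)
    return bool(readable)
```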
* util.tty.log: Defer flushes to end of ready reads
Rather than flush per line, flush per set of reads. Since this is a
non-blocking loop, the total perceived wait is short.
* util.tty.log: only scan each line once, usually
Rather than always find all control characters then substitute them all,
use `subn` to count the number of control characters replaced. Only if
control characters exist find out what they are. This could be made
truly single pass with sub with a function, but it's a more intrusive
change and this got 99%ish of the performance improvement (roughly
another 2x in some cases).
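A hedged sketch of the `subn`-based single scan (the pattern is simplified):
```python
import re

CONTROL = re.compile(r'[\x00-\x08\x0b-\x1f]')

def strip_control(line):
    # subn substitutes and counts in one pass
    cleaned, count = CONTROL.subn('', line)
    if count:
        pass  # only when control characters existed, inspect which ones
    return cleaned
```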
* util.tty.log: remove check for `readable`
Python < 3 does not support a readable check on streams; it should not be
necessary here since we control the only use and it's explicitly a
stream to be read.
* e4s ci: enable full e4s
* add llvm-amdgpu to list of specs needing an xlarge tagged runner
* comment out qt and qwt because of intermittent build failures
* remove +rocm specs because rocblas job consistently fails due to infrastructure
* qt: skip multimedia when ~opengl
On 5.9 on macOS the multimedia option causes build errors; on other
platforms and versions it should probably be assumed inoperative anyway.
* qt: Omit flags when disabling multimedia
```
ERROR: Unknown command line option '-no-pulseaudio'.
```
* Work around another qt@5.9 error
* qt: Fix build error on darwin
This PR allows users to `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated that include a warning that this usually does not
need to be done.
This addresses an issue brought up in slack, and also represented in #14721.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the config section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
* New Package:py-ucsf-pyem
* Dep additions, run env deletion
* extraction step change
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
### Overview
The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes here which aim to improve reproducibility are:
1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts. This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.
To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:
- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build
#### Some highlights
One consequence of this change will be much smaller pipeline yaml files. By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.
Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace. With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows. If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.
There are some changes to be aware of in how pipelines should be set up after this PR:
#### Pipeline generation
Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile. This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts. If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:
```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
+ --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+ - "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags: ["spack", "public", "medium", "x86_64"]
interruptible: true
```
Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`. This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.
#### Rebuild jobs
Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts. When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs. The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`. The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.
When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment. If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate. No other changes to existing custom rebuild scripts should be required as a result of this PR.
As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI. The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`. If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:
```
To reproduce this build locally, run:
spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```
When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you can copy-paste to run a local reproducer of the CI job.
This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.
This PR relies on
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
* embree: allow for compiling with gcc 7.3
strip out unsupported -mprefer-vector-width=256
* embree: fix build on AMD CPUs
The ISAs that embree is compiled for have to match the CPU
features enabled by the compiler, as embree derives the ISA
that it compiles for from the latter.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.
CloudFront is 16x faster (or more) than the old bucket.
- [x] change mirror to https://mirror.spack.io
* New package:py-coveralls
* dep fixes
* added python constraint
* pyyaml version constraint
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- [x] add `in_buildcache` field to DB records to indicate what parts of an index,
which includes roots and dependencies, are in the buildcache.
- [x] add `mark()` method to DB for setting values on single nodes of the DAG.
This also fixes the build with %gcc@11:. According to upstream, the
proper solution is to disable -Werror=array-bounds since the stable
branch will not receive a patch for newer compilers.
* Update py-pint and fix runtime dependency on setuptools
Without the runtime dependency on setuptools, importing pint yields:
0.11:
ModuleNotFoundError: No module named 'pkg_resources'
0.17:
ModuleNotFoundError: No module named 'packaging'
* Fix
* Address comments
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a spack blame --json argument,
along with developer docs for it
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
This work will come in two phases. The first here is to allow saving of a local result
with spack monitor, and the second will add a spack monitor command so the user can
do spack monitor upload.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently if one package does `depends_on('pkg default_library=shared')`
and another does `depends_on('pkg default_library=both')`, you'd get a
concretization error.
With this PR one package can do `depends_on('pkg default_library=shared')`
and another `depends_on('pkg default_library=static')`, and it would concretize to
`pkg default_library=shared,static`.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Package update to version 1.0.2
* switched submodule boolean to string
* switched from string to bools
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- Changed to cmake package with backward compatibility with older
makefile
- Removed unused cmake variable 'blas_blas_libs'
- Added new version 5.2.2, which changes to an external blas variable
- Remove unused tcsh dependency
- Change URL to use git repository for current and future versions
- Add older 4.2 version
- Add conflict for older versions with apple-clang
This adds RHEL8's `/usr/libexec/platform-python` to Spack's list of preferred
pythons. It will only be used if no other `python` is available in the `PATH`.
We have been testing with this python for a while now, and it seems to do all
that we need. If Spack one day isn't able to work with it, we'll take it out,
but for now it is useful to allow Spack to be used on RHEL8 without a dedicated
`python` installation.
Spack doesn't require users to manually index their repos; it reindexes the indexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.
When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`). That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations. It's a win now to make `Repo.exists()` check files directly.
**Fix:**
This PR does a number of things to speed up `spack load`, `spack info`, and other commands:
- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one; see the sketch after this list)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
(avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
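A minimal sketch of the direct check referenced in the first item (attribute and file names are assumptions):
```python
import os

def exists(self, pkg_name):
    """True iff this repo has a package.py for pkg_name."""
    path = os.path.join(self.packages_path, pkg_name, 'package.py')
    return os.path.exists(path)
```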
The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).
Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, it is unnecessary to use such functions, since
we don't care to bind a custom name to a module nor do we have to load
it from an unknown location.
This PR thus modifies spack.hook in the following ways:
- Use __import__ instead of spack.util.imp.load_source (this
addresses #20005)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Marked with a leading underscore all the names that are supposed
to stay local
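A hedged sketch of the `__import__`-based loading (the module path is illustrative):
```python
import sys

def load_hook_module(name):
    module_name = 'spack.hooks.' + name
    __import__(module_name)          # bare __import__ returns the top-level package
    return sys.modules[module_name]  # so fetch the submodule from sys.modules
```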
fixes #22786
Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.
We remove system paths from search variables like PATH and
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages
may be installed to prefixes that are not actually system paths
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.
This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.
This makes a similar change to spack cd.
If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).
Co-authored-by: Gregory Becker <becker33@llnl.gov>
fixes #22294
A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
the custom install tree not being taken into account if clingo
had to be bootstrapped.
This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.
This led to unexpected results, like preferring to build a
new version of an external if the external version was
older.
* clingo: modify recipe for bootstrapping
Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping
* Remove option that breaks on linux
* Give more hints for the current Python
* Disable CLINGO_BUILD_PY_SHARED for bootstrapping
* bootstrapping: try to detect the current python from std library
This is much faster than calling external executables
* Fix compatibility with Python 2.6
* Give hints on which compiler and OS to use when bootstrapping
This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.
* Use spec_for_current_python to constrain module requirement
(cherry picked from commit d5fa509b07)
* ASP-based solver: avoid adding values to variants when they're set
fixes #22533 fixes #21911
Added a rule that prevents any value from slipping into a variant when the
variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.
* Ensure disjoint sets have a clear semantics for external packages
fixes #22547
SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.
In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.
This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.
We add one special case for this: dependencies of externals.
We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.
- [x] change asp.py so that this can't happen -- we now only generate
dependency types for possible dependencies.
This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.
Given some directives in a package like:
```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```
We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:
```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
  attr(Name, Arg1) : required_dependency_condition(ID, Name, Arg1);
  attr(Name, Arg1, Arg2) : required_dependency_condition(ID, Name, Arg1, Arg2);
  attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
  dependency_condition(ID, Parent, Dependency);
  node(Parent).
```
And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:
```prolog
attr(Name, Arg1, Arg2) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2).

attr(Name, Arg1, Arg2, Arg3) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```
These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.
Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:
```prolog
condition_holds(ID) :-
  condition(ID);
  attr(Name, A1) : condition_requirement(ID, Name, A1);
  attr(Name, A1, A2) : condition_requirement(ID, Name, A1, A2);
  attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).

attr(Name, A1) :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```
This allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:
```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```
The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:
```prolog
dependency_holds(Package, Dependency, Type) :-
  dependency_condition(ID, Package, Dependency),
  dependency_type(ID, Type),
  condition_holds(ID).
```
Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:
```python
def package_dependencies_rules(self, pkg, tests):
    """Translate 'depends_on' directives into ASP logic."""
    for _, conditions in sorted(pkg.dependencies.items()):
        for cond, dep in sorted(conditions.items()):
            # create a condition and get its id
            condition_id = self.condition(cond, dep.spec, pkg.name)
            # associate specifics about the dependency with the id
            self.gen.fact(fn.dependency_condition(
                condition_id, pkg.name, dep.spec.name
            ))
            # etc.
```
- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals
This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.
(cherry picked from commit 413c422e53)
* Allow the bootstrapping of clingo from sources
Allow python builds with system python as external
for MacOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users from setting the install tree root to the bootstrap store
* clingo: documented how to bootstrap from sources
Co-authored-by: Gregory Becker <becker33@llnl.gov>
(cherry picked from commit 10e9e142b7)
Bash has a builtin `fc` that will override the compiler if you use "fc",
so it's better to use the full spack-supplied compiler path.
Additionally, the filter regex in the docs was wrong: it replaced the
entire assignment operation with the RHS.
* py-kubernetes: add new package
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-kubernetes: remove alpha/beta versions, fix dependency types
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR updates the abinit package. The underlying build system has
several changes from previous versions, which are reflected in the
package recipe.
- added version 9.4.2
- removed commented out code
- add new libxml2 variant, with dependency and conflicts
- add dependency on atompaw
- depend on fftw-api when ~openmp
This allows other fftw implementations to be used. This PR adds MKL.
- depend on netcdf explicitly
- remove hdf5 variant as hdf5 is required
- only use wannier90 if +mpi as the wannier90 spack package is MPI only
- allow newer versions of libxc for abinit 9
- split configure options for versions before and after abinit 9
- always use MPI compiler wrappers
- add patch to remove march settings for version 9
- Set conflict for fftw~openmp if abinit+openmp
This allows the virtual fftw-api to be used for the dependency. If fftw
is the fftw-api provider then bail if fftw~openmp is set when
abinit+openmp is used.
- Set conflicts for +openmp and mkl
- Be explicit about +mkl for intel-parallel-studio
- Add TODO entry for switching conflicts/depends_on logic
* clingo/clingo-bootstrap: added a package with an option for bootstrapping clingo
  - builds in Release mode
  - uses GCC options to link libstdc++ and libgcc statically
* clingo-bootstrap: apple-clang options to bootstrap statically on darwin
* clingo: fix the path of the Python interpreter
In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.
This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.
* clingo: the commit for "spack" version has been updated.
Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.
- [x] make `master` the default so we don't have to keep telling people
to install `clingo@master`. We'll update the preferred version when
there's a new release.
* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
the specs to be fetched, even when in an environment
* now: if no specs are provided to `spack fetch`, we check
whether an environment is active and, if so, we fetch all
uninstalled specs.
* Update pylint to 2.8.2
* Update var/spack/repos/builtin/packages/py-pylint/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Address comments
* Update
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.
This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated
(cherry picked from commit 4558dc06e2)
The context manager can be used to swap the current
configuration temporarily, for any use case that may need it.
(cherry picked from commit 553d37a6d6)
The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.
Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.
Make a few adjustments to MockPackageMultiRepo, since it was
stating in the docstring that it was supposed to mock
spack.repo.Repo and was instead mocking spack.repo.RepoPath.
(cherry picked from commit 1a8963b0f4)
The clingo-cffi job has two issues to be solved:
1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/
The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".
The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).
For the time being, run the tests in a container. Switch back to
PyPI whenever a new official version of clingo is released.
* Support clingo when used with cffi
Clingo recently merged in a new Python module option based on cffi.
Compatibility with this module requires a few changes to Spack: it does not
automatically convert strings/ints/etc. to Symbol, and clingo.Symbol.string
throws on failure.
- manually convert str/int to clingo.Symbol types
- catch stringify exceptions
- add a job for clingo-cffi to Spack CI
- switch to the potassco-vendored wheel for clingo-cffi CI
- pass the on_unsat argument when using cffi
(cherry picked from commit 93ed1a410c)
* Improve error message for inconsistencies in package.py
Sometimes directives refer to variants that do not exist.
Make it such that:
1. The name of the variant
2. The name of the package which is supposed to have
such a variant
3. The name of the package making this assumption
are all printed in the error message for easier debugging.
* Add unit tests
(cherry picked from commit 7226bd64dc)
The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.
Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.
(cherry picked from commit ba42c36f00)
* Modification to R environment
This PR modifies how the R environment is presented, and fixes
installing the standalone Rmath library.
- The Rmath build and install methods are combined into one
- Set parallel=False when installing Rmath
- remove the run environment that set up variables for libraries and
headers that are not really needed and pollute the environment.
* Add setup_run_environment back
- Add back the setup_run_environment with LD_LIBRARY_PATH and
PKG_CONFIG_PATH.
- Adjust documentation to reflect the current code.
The previous `gasnet` spack package was not vetted/approved by the GASNet library maintainers. This one is.
Notably adds build-time testing and smoke-testing.
Converting network variants into a multi-valued `conduits` variant has the minor advantage of enabling a concise `conduits=none` spec, but the major drawback that it degrades the `spack info gasnet` output.
* py-lazyarray: add new version 0.3.2
Change-Id: Ie8a40f3ff1fe7477e27f6085b9ad6673395258b2
* fixup dependencies
Change-Id: I4b2fb7a0abb462f8df74c383c67517065cd95b67
* Update var/spack/repos/builtin/packages/py-lazyarray/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-batchspawner
Change-Id: I508bad7ba7f1fc32c2f6c0bfccf35d864cf47ced
* fixup
Change-Id: If183933ce40a8d12214ea24acc683cb046fcfbcb
* fix broken version
Change-Id: Ie4dd8d18465877cd8f9cb862112af37d85b1c30f
* fixup license
Change-Id: I51d92a6d229f6a6b56eea6e53c65ed31fe59f6af
* Update var/spack/repos/builtin/packages/py-batchspawner/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Example replacement:
```
'-D(\w+)(:BOOL)?=\{0\}'\.\s*format\s*\(\s*'(ON|YES|true|TRUE)' if '\+(\w+)' in (self\.)?spec else '(OFF|NO|false|FALSE)'\)
```
with
```
self.define_from_variant('\1', '\4')
```
This will cause failures if any variants were misspelled: I have already caught two packages with nonexistent variants.
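Concretely, the rewrite looks like this in package code (a sketch with made-up `ENABLE_FOO`/`foo` names; `define_from_variant` is the existing CMakePackage helper the replacement targets):
```python
# before: hand-rolled CMake option string
args.append('-DENABLE_FOO:BOOL={0}'.format(
    'ON' if '+foo' in self.spec else 'OFF'))

# after: let Spack derive the value from the variant
args.append(self.define_from_variant('ENABLE_FOO', 'foo'))
```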
Spack uses curl to fetch URL resources. For locally-stored resources
it uses curl's file protocol; when using this protocol, curl expects
that the URL encoding conforms to RFC 3986 (which reserves characters
like '?' and '=' for special use).
We were not performing this encoding, and found a resource where
curl was interpreting this in an unfavorable way (succeeding, but
producing an empty file). This commit properly encodes URLs when
using curl's file protocol.
This error likely did not come up before because in most contexts
Spack was either fetching via http or it was using URLs without
offending characters (for example, the sha-based URLs in mirrors
never contain these characters).
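A minimal sketch of the encoding step (not the exact Spack code):
```python
# percent-encode the path portion of a file:// URL per RFC 3986 so curl
# does not treat reserved characters like '?' and '=' specially
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote        # Python 2


def file_url(path):
    # quote() leaves '/' unescaped by default, so directory structure survives
    return "file://" + quote(path)


print(file_url("/mirror/odd?name=x.tar.gz"))
# file:///mirror/odd%3Fname%3Dx.tar.gz
```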
* Add versions 1.9.4 and 1.9.4.1 for cbtf-* packages
* Add versions 2.4.2 and 2.4.2.1 for openspeedshop packages
* Remove older versions
* Switch from generic dependency on elf to a dependency on the
elfutils implementation for cbtf-* and openspeedshop packages
* For llvm-openmp-ompt, relax the dependency on libelf to elf (cbtf-krell
now depends on elfutils and on llvm-openmp-ompt, so unless this
dependency is relaxed there would be a conflict)
* Update CMake build_type to support Debug, Release, RelWithDebInfo
in cbtf-* and openspeedshop packages
* Update libmonitor patches when building as a dependency of
cbtf-krell
Pass -ef to the cce fortran compiler, fix the build system to use the correct openmp flag for CCE
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
This also changes the checksum for 1.22.1 because I switched the package
to use the proper upstream tarballs to get rid of the autotools
dependencies. Moreover, a few dependencies were missing. netdata also
requires a few directories to be created in its prefix to actually work.
Spack doesn't require users to manually index their repos; it reindexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.
When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`). That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations. It's a win now to make `Repo.exists()` check files directly.
**Fix:**
This PR does a number of things to speed up `spack load`, `spack info`, and other commands:
- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one; see the sketch after this list)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
(avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
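A minimal sketch of the direct check (illustrative names; the real `Repo` keeps more state):
```python
import os


def exists(repo_root, pkg_name):
    # a package exists iff its package.py is on disk; no need to stat
    # every package file or rebuild the index just to answer this
    filename = os.path.join(repo_root, "packages", pkg_name, "package.py")
    return os.path.isfile(filename)
```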
- [x] `analyze` isn't commonly used; move it to long help
(`spack -H` vs `spack -h`). Give it its own section.
- [x] make it clear from `spack -h` that `spack module` can generate
module files
- [x] shorten help for `spack style`
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
Simplify logic by just enabling or disabling fsync as the user specified
(defaults to off currently). Also remove the 4.1 version check, since
that version isn't actually supported here.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
The implementation of __str__ has been simplified to traverse the spec directly,
and no longer calls the flat_dependencies method. Dead code has been
removed.
For configure (e.g. for hdf5) to pass, this option needs to be pulled out when invoked in ccld mode.
I thought it had fixed the issue, but I still saw it after that. After some digging, my guess is that I was able
to get hdf5 to build with ifort instead of ifx. Lots of overlapping changes were occurring at the time, as it were.
There are still outstanding issues building hdf5 with ifx, and Intel is looking into what appears to be a
compiler bug, but that manifests during the build and is likely a separate issue.
I have verified that making the edit in 'ccld' mode removes the -loopopt=0 and enables hdf5 to pass
configure. It should be fine to make the edit in 'ld' mode as well, but I have not tested that and didn't
include an -or- condition for it.
Add new release of SEACAS.
Update netcdf-c version to recent release which fixes some issues that have caused problems in past
Use release version of CGNS instead of develop
* Update Nalu-Wind to remove SuperLU from Trilinos requirement. Also simplify Nalu-Wind package.
* Leave boost option in nalu-wind.
* Add git branches into TPL requirements. Update OpenFAST for change to main branch.
Currently, environment views blink out of existence during the view regeneration, and are slowly built back up to their new and improved state. This is not good if other processes attempt to access the view -- they can see it in an inconsistent state.
This PR makes environment view updates atomic. This requires a level of indirection (via symlink, similar to nix or guix) from the view root to the underlying implementation on the filesystem.
Now, an environment view at `/path/to/foo` is a symlink to `/path/to/._foo/<hash>`, where `<hash>` is a hash of the contents of the view. We construct the view in its content-keyed hash directory, create a new symlink to this directory, and atomically replace the symlink with one to the new view.
This PR has a couple of other benefits:
* It future-proofs environment views so that we can implement rollback.
* It ensures that we don't leave users in an inconsistent state if building a new view fails for some reason.
For background:
* there is no atomic operation in POSIX that allows a non-empty directory to be replaced.
* There is an atomic `renameat2` in the Linux kernel starting in version 3.15, but many filesystems don't support the system call, including NFS3 and NFS4, which makes it a poor implementation choice for an HPC tool, so we use the symlink approach that other tools like nix and guix have used successfully.
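In essence, the swap can be done safely with a scratch symlink and `rename(2)`, which is atomic on POSIX (a minimal sketch, not the exact Spack code):
```python
import os


def atomic_view_update(new_view_dir, view_root):
    # build the new symlink under a temporary name next to the real one...
    tmp = view_root + ".tmp"
    if os.path.lexists(tmp):
        os.unlink(tmp)
    os.symlink(new_view_dir, tmp)
    # ...then rename() it over the old link; rename is atomic on POSIX,
    # so readers always see either the old view or the new one
    os.rename(tmp, view_root)
```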
* Added the option to use high performance linkers: gold and lld, for
LBANN. Including them as build flags causes unnecessary propagation
to all dependent packages, reducing package reuse.
fixes #22351
The ASP-based solver now accounts for the presence
in the DAG of deprecated versions and tries to minimize
their number at highest priority.
* gobject-introspection: fix for Python 3.9.
* Fixes the too long line formatting issue.
* gobject-introspection: limits the scope of the patch
Co-authored-by: Robert Mijakovic <robert.mijakovic@lxp.lu>
Variants explicitly set in an abstract root spec are considered
as defaults for the package they refer to, and they override
what is in packages.yaml and in package.py. This is relevant
only for multi-valued variants, where a constraint may extend
an already default value.
* Fixes to flex
- Prefer the version that doesn't need all the patches and extra build
tools
- Make dependency on gettext optional under the nls variant (off by
default)
- Drop the dependency on help2man if we don't have to regenerate the man
pages (when no patches are necessary)
* Bring back gettext dep as it is used during autoconf
The code for guessing CPU architecture based on craype module names got confused,
at least on LLNL RZ prototype systems. In particular, a (L) or (D) at the end of a craype-x86-xxx or other
cpu architecture module was getting the logic confused.
With this patch, any whitespace plus the remaining characters in the module name are removed.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
There have been a lot of questions and some confusion recently surrounding Spack installation test capabilities so this PR is intended to clean up and refine the documentation for "Checking an installation".
It aims to better distinguish between checks that are performed during an installation (i.e., build-time tests) and those that can be done days and weeks after the software has been installed (i.e., install (or smoke) tests).
* Enhancing package gmsh to more options, new version
* Enhancing package gmsh, url from https
* Enhancing package gmsh, following reviewer 1
* Improving package gmsh from reviewer
* Adding MED dependency
* Removing env variables and unused dependency (netgen/tetgen)
`flag_handler` currently passes all flags via injection. This makes it
impossible to override the default flags provided by autotools (for
instance, `binutils cflags='-O2'` will still build with `-O2 -g`).
Instead, use injection for our workaround flags and pass other flags to
the build system.
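As a sketch of the pattern (hypothetical package and flag; the `(injected, env, build_system)` triple is the return convention for Spack flag handlers):
```python
def flag_handler(self, name, flags):
    # return value is (injected_flags, env_flags, build_system_flags)
    injected = []
    if name == "cflags":
        # keep only our own workaround flag in the compiler wrappers
        injected.append("-Wno-error")  # illustrative workaround flag
    # everything the user supplied goes to the build system (e.g.
    # configure's CFLAGS), where it can override defaults like "-O2 -g"
    return (injected, None, flags)
```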
When we first merged the ASP-based solver, unit tests
were run in a Docker container with root permissions,
and that was preventing a few tests from succeeding.
For some time now, though, clingo has been tested as a regular
user within GitHub Actions VMs, so we should start
running those checks again.
* geos: Fix config issues with python bindings using python3 (#23479)
This should fix some config issues when building geos with python
bindings and using python3 --- the geos configuration scripts had
a few python2-isms.
I only tested (lightly; geos built and I can import geos in python3)
on 3.8.1, but I did check that the patch can at least be applied
in 3.5.
I belatedly discovered that geos dropped all the SWIG bindings
in @3.9, so I also added some conflicts on the +python and +ruby
options to note that they are not supported in 3.9.
* geos: adding omitted patch file
In an active concretized environment, support installing one or more
cli specs only if they are already present in the environment. The
`--no-add` option is the default for root specs, but optional for
dependency specs. I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed. In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
Added the checksums for 4.1.2 and 4.2.0
The `parallel` variant did the exact same thing as the `mpi` variant, but they had different default values. Both variants set the value of `-DCGNS_ENABLE_PARALLEL`, so it was unclear which variant was "winning" and could definitely result in a non-intuitive build. Did a grep of the spack packages and none of them were using the `parallel` variant to control the cgns options. Retained the `mpi` variant as that one is being used by multiple packages.
One issue that remains to be solved is that the default integer size has changed from 32-bit to 64-bit for the 4.2.0 release. This is controlled by the `int64` variant which currently defaults to `OFF`. There should maybe be some thought about changing the default to match the default of the current release, or maybe having a version-specific default... For now, left the behavior as it has been for previous versions.
The patch available in spack does not apply
cleanly to the 4.1.1 and presumably later releases.
See Open MPI commit b8a8096a3f153380f95af8f285f48e926eb18bf1
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
SILO has optional support for compression libraries that require
C++ (hzip and fpzip). This patch exposes those options as variants
to enable configuration of SILO without the C++ libraries for C
applications. hzip and fpzip are enabled by default to preserve
current behavior.
Like compilers, targets now try to minimize
mismatches instead of maximizing matches.
Deduction of mismatches is reworked to be
the opposite of a match, since computing
that is faster.
The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).
Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.
* py-keras: new version
* Adds missing dependencies.
* Removes the newline which is against formatting rules.
* py-keras: limits some dependencies to older versions
* py-keras: restricts dependencies
* pykeras: fixes dependency ranges :)
Co-authored-by: Robert Mijakovic <robert.mijakovic@lrz.de>
Co-authored-by: Robert Mijakovic <robert.mijakovic@lxp.lu>
The loading protocol mandates that the module we are going
to import needs to be already in sys.modules before its code is
executed, to prevent unbounded recursion and multiple loading.
Loading a module from file now exits early if the module is already
in sys.modules.
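A minimal sketch of the protocol (illustrative names, not Spack's exact loader):
```python
import sys
import types


def load_module(name, path):
    # exit early if the module was already loaded
    if name in sys.modules:
        return sys.modules[name]
    module = types.ModuleType(name)
    sys.modules[name] = module  # register *before* executing the code...
    with open(path) as f:
        code = compile(f.read(), path, "exec")
    exec(code, module.__dict__)  # ...so recursive imports find it here
    return module
```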
When installing OneAPI packages as root (e.g. in a container), the
installer places cache files in /var/intel/installercache that
interfere with future Spack installs. This ensures that when
running an installation as a root user, this cache is removed.
* Adding hip support
* Added new blaspp version and rocm support. Fixed error in mesa18 package.
* Correcting variant name.
* Code style fixes
* Change of name of library
* Change "make check" to correctly run from the build directory.
* Upgraded version to fix testing errors
* Fixed testing directory
* Removed unnecessary variant entry (already inherited from CudaPackage)
* Generalization of version matching logic
* Code style
* Corrected version requirement
SCR moved to a component version some time ago, but never had a
release associated with these changes. SCR v2 is a legacy version
that is no longer being developed/supported. In preparation for an
upcoming SCR v3 release, there is now a 3.0rc1 release available to
users.
This adds the 3.0rc1 release to the spack package and deprecates the
older versions.
Additional changes include:
- Enforce using the main branches of the components when installing
scr@develop
- Enforce SCR v3 uses at least the recently released versions of each
of the components
- Use a simple `detect_scheduler()` function in an attempt to be
smarter about setting the default resource manager and not require
users to always manually provide the variant
- Add/update variants that were recently added to AXL and KVTree
components
- Fix cmake arg naming bug of setting `SCR_CONFIG_FILE`
- `SCR_ASYNC_API` is now being handled by a component and is only
needed by the legacy versions.
* Added checksum for recently released 4.8.0
* Added `enable-fsync` variant. The `fsync` flag was added to the configuration as of version 4.1.0. Originally it defaulted to `on`, but at version 4.3.0 it was changed to default to `off`, and an `enable-fsync` configuration flag was added to enable it.
The spack package specified `--enable-fsync` with no way to disable it for all builds of netcdf-c 4.1.0 and later. This can cause horrendously slow I/O for certain use cases (e.g. 7 seconds with no-fsync versus 2300 seconds with fsync enabled). With the new variant, the default build behavior matches the default of non-spack netCDF.
* Metall: add version 0.2
* Add Metall v0.3
* Update Metall package to v0.4 and v0.5.
* Metall package: add v0.6
* Metall package: add v0.7
* Metall package: add v0.8 and v0.9
* Add Metall package v0.10
* Metall package: set run_environment METALL_ROOT
* Metall package: removed blanks
* Metall package: add v0.11 and v0.12
* Metall package: change required cmake version
* Metall package: support build test
* Metall package: add v0.13
* Metall package: change to use setup_build_environment
gettext uses a test with <libxml2/libxml/someheader.h> to locate a header,
and libxml2 itself includes <libxml/otherheader.h>, so both have to be
in the include path.
* Building binutils with gold implies building ld
* add +ld to llvm to make the old concretizer happy and add +gas to gcc since that's used in the package.py
* Remove sys
* qt: update versions and URLs
- Add LTS releases of 5.12.10, 5.9.9, 5.6.3
- Mark other minor versions of 5 as deprecated
- Use https
- The URL for older QT versions changed recently to "new_archive"
- Prefer xz instead of gz for >=5.6 because 5.6.3 isn't available as
gz. This invalidates the SHA of 5.7-5.8.
* mxnet: new version 1.8.0
  - use submodules on master
  - introduce constraints on cuda versions supported
  - handle USE_MKLDNN -> USE_ONEDNN conversion
* use define for USE_CUTENSOR
* fix up dependencies for 2.0.0+
libtirpc puts its header files under prefix/include/tirpc, but
spack was returning just prefix/include as the location of headers.
This will cause spack to return both prefix/include and
prefix/include/tirpc for headers, so both
`#include <rpc/xdr.h>` and `#include <tirpc/rpc/xdr.h>`
should work.
Help dependents find libraries/headers. Like intel-oneapi-mkl, this
package offers several different versions of libraries that conflict.
This PR chooses one of those versions. When
https://github.com/spack/spack/discussions/22749 is resolved, this
package should be updated to choose which libraries to use.
Previously the tau package got the cxx and cc names from
os.path.basename(self.compiler.cxx), however if the path to the compiler
looks like "/usr/bin/g++-10.2.0" then tau's custom build system doesn't
recognize it. What we want instead is something that looks like "g++"
which is exactly what cxx_names[0] gives us. We already did this for
fortran, so I am not sure why we didn't do it here. Not doing this
causes a build failure when tau tries to use a polyfill (vector.h,
iostream.h) that doesn't seem to be packaged with tau.
Additionally, tau needs some help finding mpi include directories when
building with MPI, so we provide them. Unfortunately, we can't just say
that the compilers are mpicc and mpicxx in the previous fix to have
these things found automatically. This is because tau assumes we always
need the polyfill when the compilers are set to these values which again
causes a build failure.
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, it is unnecessary to use such functions, since
we don't care to bind a custom name to a module, nor do we have to load
it from an unknown location.
This PR thus modifies spack.hook in the following ways:
- Use __import__ instead of spack.util.imp.load_source (this
addresses #20005; see the sketch after this list)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Marked with a leading underscore all the names that are supposed
to stay local
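A minimal sketch of the import style this enables (the hook module name is illustrative):
```python
def _import_hook(name):
    # hooks are ordinary modules in the spack.hooks package, so plain
    # __import__ is enough: it consults sys.modules, binds no custom
    # module name, and needs no file path
    return __import__("spack.hooks.{0}".format(name), fromlist=[name])
```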
Complete overhaul of the Legion package to better capture a more
up-to-date set of configuration options and variants. This update
adds additional flexibility and features that were requested by
users.
* Add version 21.03.0 and "stable" branch
* Remove all older numeric versions
* Add support for CUDA, Python, PAPI support and more
* Add maintainer
* This no longer uses the Spack `gasnet` package: it defaults to
using an embedded gasnet or can be pointed to an external
* MUMPS: Use GEMMT BLAS extension when possible.
This should improve the performance and is recommended by the developers.
* MUMPS: Add a new "openmp" variant.
* MUMPS: Add a "blr_mt" variant.
This improves performance when using OpenMP but might not be compatible with all multithreaded BLAS.
Set the path to javah via the JAVAH environment variable. If it is
a version of java that does not have javah it will fall back to `javac
-h`. Without specifying this the build could pick up a javah from the
system.
- add version 3.4.0
- add patch for bam2wig when version 3.4.0
- url format changed again, hopefully stable now
- added missing python dependency when version >3.3.1
- have older versions compile with htslib, samtools, bcftools
- new dependencies for version 3.4.0
- sqlite
- mysql-client
- mysqlpp
- lp-solve
- suite-sparse
- refactored filtering code
- set python interpreter in scripts
This is as much a question as it is a minor fine-tuning of the docs. I've been known to add things to an environment by editing the `spack.yaml` file directly. When I read the previous version of this sentence, I was afraid that `spack add` was actually doing *two* things, modifying the `spack.yaml` and updating something else that defined the roots of the Environment. A bit of experimentation suggests that editing the `spack.yaml` file is sufficient to change the roots.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This adds that new version to the package, updates the url, and
updates the hash of v0.0.3 for the new url.
This also updates the KVTree dependency as MPI is required to be
enabled in KVTree for er to work.
rankstr is now also required by er for recently added tests.
PR #22864 added a patch to hpctoolkit to fix an issue with gcc 10.x, and the patch was applied to all revs unconditionally. But this was fixed in hpctoolkit master on Aug 11, 2020, so the patch should only apply to old revs.
Fixes #22951.
Update package with 4.1 sha keys.
Use variant to disable openmp in the build of llvm-amdgpu.
Set CPATH, LIBRARY_PATH so that clang knows to look in the rocm-openmp-extras for headers/libraries.
Disable flang warnings as Spack thinks they are errors.
In ROCm 4.1, the plugin changed names from hsa -> amdgpu.
Update HSA_INCLUDE for 4.1.0.
Clingo has been released on PyPI, so there
are no more concerns about our CI depending
on test.pypi.org for installing the wheel.
Apparently we have parts of Spack which
are not compatible with kcov > 3.4
UnifyFS has been integrated with updated versions of its mochi-margo
dependency (and mochi-margo's mercury and libfabric dependencies).
This removes support for version 0.9.0
fixes #22786
Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.
These were deprecated when the custom cuda_arch list was
removed. Also fixed up the Aluminum dependencies for Hydrogen and
DiHydrogen. Turns out that Aluminum v0.6.0 didn't have a correct
version in CMake and thus the interaction with older versions of
Hydrogen and DiHydrogen needed to be corrected.
This isn't a significant issue, but I noticed that the docstring incorrectly references "tty.fail", and I wanted to quickly fix it to reflect the correct command, tty.die. I also fixed the docstrings to not be large clumps, following what @tgamblin suggested after I wrote this: one quick summary line at the top, and more verbose detail after that.
* New package: py-pymumps
Python bindings for MUMPS, a parallel sparse direct solver
* py-pymumps: fixing flake issues
* py-pymumps: fix dependency types
Following suggestion of @adamjstewart
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pymumps/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This has been checked with gcc on ubuntu 16.04, which ships binutils 2.26 by
default, using spack's binutils 2.36. Only the combination +gas and ~ld
seems to trigger this incompatibility with debug symbols (gcc -g -O2
main.c fails with the error in the comment above the conflict)
- Add dependency on eigen package
- Add last version known to work with ROOT 6.16.00. Until recently GenFit lacked
any tagged versions; therefore, we use a commit hash.
FFTW:
(1) Condition to ensure Quad precision is not supported in MPI under FFTW base class
AMDFFTW:
(1) Support for debug and quad precision for aocc compiler
(2) Dedicated variant for threads for enabling SMP threads
(3) Restricted simd features to 'sse2', 'avx' and 'avx2'
(4) Removed float simd features
(5) If debug option is enabled, configure option will be appended with --enable-debug option
(6) Condition to ensure amd-fast-planner is supported from 3.0 onwards under amdfftw derived class
(7) New variant amd-fast-planner - This option will reduce the planning time without much tradeoff in the performance. It is supported for single and double precisions.
(8) Removed following flags for amdfftw - '--enable-threads', '--enable-fma' and '--enable-sse'
MDSplus is a set of software tools for data acquisition and storage and
a methodology for management of complex scientific data.
https://www.mdsplus.org
Co-authored-by: Marijn van Vliet <marijn.vanvliet@aalto.fi>
This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations. Spack can now contact a monitor server and upload analysis -- even after a build is already done.
Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
- [x] config args
- [x] environment variables
- [x] installed files
- [x] libabigail
There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.
Additional tests will be added in a future PR.
In debug mode, processes taking an exclusive lock write out their node name to
the lock file. We were using `getfqdn()` for this, but it seems to produce
inconsistent results when used from within some github actions containers.
We get this error because getfqdn() seems to return a short name in one place
and a fully qualified name in another:
```
File "/home/runner/work/spack/spack/lib/spack/spack/test/llnl/util/lock.py", line 1211, in p1
assert lock.host == self.host
AssertionError: assert 'fv-az290-764....cloudapp.net' == 'fv-az290-764'
- fv-az290-764.internal.cloudapp.net
+ fv-az290-764
!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!
== 1 failed, 2547 passed, 7 skipped, 22 xfailed, 2 xpassed in 1238.67 seconds ==
```
This seems to stem from https://bugs.python.org/issue5004.
We don't really need to get a fully qualified hostname for debugging, so use
`gethostname()` because its results are more consistent. This seems to fix the
issue.
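The change itself is tiny (a sketch):
```python
import socket

# gethostname() returns the same short name everywhere, while getfqdn()
# may or may not append the domain depending on resolver configuration
host = socket.gethostname()
```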
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
New version has new dependencies (which are also added here as new
packages):
* perl-mce
* perl-threads
* perl-thread-queue
The new version of genemark-et also has a different URL scheme.
* Add a +gui variant (default off) which adds dependencies on
qt, paraview, and qwt
* Backport upstream patch when installing version 8.4 (this patch
is already applied for versions >= 9.0)
Both binary packages would otherwise require X11 and Mesa libraries to
be installed on the host to run. Make sure they use the Spack-provided
libraries by patching the `rpath` via `patchelf`.
* Clarify stub compiler definition in compilers.yaml
* Update explanation of why stub compiler definition is needed
* Add note about required module definition when using Spack-installed
intel-parallel-studio as intel-compiler
* Add suggestion about updating package config preferences based on
choice of variants when installing intel-parallel-studio to avoid
reinstallation
On multilib distros with lib/lib64 (rather than lib32/lib), the library ends up in lib64/ instead of lib/, breaking the libs property (and the cp2k+spglib build).
We remove system paths from search variables like PATH and
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages
may be installed to prefixes that are not actually system paths
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
See #17270.
```
make[2]: Entering directory `/tmp/vavolkl/spack-stage/spack-stage-qt-5.14.2-63dapppjbq6vqh3le7pazsprijls7cfl/spack-src/qtwebengine/src'
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `echo Modules will not be built. Python version 2 (2.7.5 or later) is required to build QtWebEngine.'
make[2]: *** [errorbuild] Error 1
```
We set LC_ALL=C to encourage a build process to generate ASCII
output (so our logger daemon can decode it). Most packages
respect this but it appears that intel-oneapi-compilers does
not in some cases (see #22813). This reads the output of the build
process as UTF-8, which still works if the build process respects
LC_ALL=C but also works if the process generates UTF-8 output.
For Python >= 3.7 all files are opened with UTF-8 encoding by
default. Python 2 does not support the encoding argument on
'open', so to support Python 2 the files would have to be
opened in byte mode and explicitly decoded (as a side note,
this would be the only way to handle other encodings without
being informed of them in advance).
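A minimal sketch of that Python-2-compatible approach (path name illustrative):
```python
def log_lines(path):
    # byte mode works on both Python 2 and 3; decoding as UTF-8 also
    # accepts the pure-ASCII output that LC_ALL=C should produce
    with open(path, "rb") as f:
        for raw in f:
            yield raw.decode("utf-8", "replace")
```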
* bugfix: fix representation of null in spack_yaml output
Nulls were previously printed differently by `spack config blame config`
and `spack config get config`. Fix this in the `spack_yaml` dumpers.
* bugfix: `spack config blame` should print all lines of config
`spack config blame` was not printing all lines of configuration because
there were no annotations for empty lines in the YAML dump output. Fix
this by removing empty lines.
Fixed previously unspecified python dependency and ensured that spack's
python is what exodus@v2016 uses. Also, in the process, identified many
missing versions
* new package: gatetools
This PR adds the gatetools package and dependencies. The gatetools
package is a set of command line tools for gate. Since it is primarily a
CLI, although python modules can be loaded, it is named gatetools as
opposed to py-gatetools.
* Fix quote characters to avoid test error
* Found another UTF8 character that was tripping up tests
* Another UTF-8 character to replace
* Remove py-python-box dependency and package file
* Make numpy a variant
- py-setuptools needs to be a run dependency
This was masked by py-numpy having py-setuptools as a run dependency.
* Add missing build dependency on py-pytest-runner
- set constraint for geant4 to version 10.6 as gate does not work with
geant-10.7+
- set GATE_USE_ITK: Although RTK is built under ITK, there are some ITK
macros that need to be set explicitly.
As pointed out in https://github.com/STEllAR-GROUP/hpx/issues/5239,
there is an issue in OTF2 <= 2.2 where a variable is not properly
initialized. As no release of OTF2 fixing this is currently available,
the patch should be applied.
* [py-scikit-image] Added py-setuptools back into depends_on. Otherwise it is putting skimage in scikit_image-version-pyX.Y-arch.egg dir under site-packages
* [py-scikit-image] Added latest version
* [py-scikit-image] Added py-numpy version dependency when package version greater than 0.18
* [py-scikit-image] Updates to python dependency
* Fix typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- Use debugoptimized as default build type, just like RelWithDebInfo for cmake
- Do not strip by default, and add a default_library variant which conveniently supports both shared and static
- Add a maintainer
- Help libtool to find the correct paths to libraries
- Handle externals from system directories
- Enable eccodes for older versions
* The fltk package can build libraries with opengl support. By default, the configure script looks for opengl headers in the system include paths. If 'devel' packages have not been installed on the system, it omits the 'fltk_gl.so' library. This can break packages like 'octave' which expect 'fltk' to have opengl support and look for the library 'fltk_gl'.
Make opengl support explicit in fltk by adding a dependency on 'gl' and adding a new variant of the same name 'gl' (default On).
With these modifications 'fltk_gl' and 'octave' build successfully on CentOS 8.
The default behavior is to always enable opengl.
https://www.fltk.org/doc-1.3/intro.html
* Add patch for latest hwloc@:1 to locate ncurses
This way we don't have to depend on ncurses~termlib, which may run into
issues when another package explicitly depends on ncurses+termlib
* Move termcap to the back, because it's a system symlink on macOS and isn't set by Spack
- add new version, 4.09.1
- use github url
- convert to autotools package
- deprecate version 4.07b: This version requires manual download and is
a binary only installation.
- version 4.0.7 was not building
- version 4.0.9 was not setting search correctly due to an extra "return"
in config
- added version 4.1.2-p1
- new version needs py-h5py
- new version does not need utf8 patch
- url format changed
Add a conflict for CUDA and shared libraries in Ascent.
The new concretizer will automatically change the default for
Ascent in that case. Until then, dependencies like WarpX need
to hint the `~shared` wish explicitly.
This initial package recipe uses a custom-built wrapper to drive an internal CMake file. Since nekRS also includes built-in copies of several dependencies such as BLAS and HYPRE, it cannot be linked with other such dependencies. However, to work with the `ceed` metapackage, we cannot add `^blas` conflicts to nekRS.
See https://github.com/spack/spack/pull/22519 for discussion.
By default, clingo doesn't show any optimization criteria (maximized or
minimized sums) if the set they aggregate is empty. Per the clingo
mailing list, we can get around that by adding, e.g.:
```
#minimize{ 0@2 : #true }.
```
for the 2nd criterion. This forces clingo to print out the criterion but
does not affect the optimization.
This PR adds directives as above for all of our optimization criteria, as
well as facts with descriptions of each criterion, like this:
```
opt_criterion(2, "number of non-default variants")
```
We use facts in `concretize.lp` rather than hard-coding these in `asp.py`
so that the names can be maintained in the same place as the other
optimization criteria.
The now-displayed weights and the names are used to display optimization
output like this:
```console
(spackle):solver> spack solve --show opt zlib
==> Best of 0 answers.
==> Optimization Criteria:
Priority Criterion Value
1 version weight 0
2 number of non-default variants (roots) 0
3 multi-valued variants + preferred providers for roots 0
4 number of non-default variants (non-roots) 0
5 number of non-default providers (non-roots) 0
6 count of non-root multi-valued variants 0
7 compiler matches + number of nodes 1
8 version badness 0
9 non-preferred compilers 0
10 target matches 0
11 non-preferred targets 0
zlib@1.2.11%apple-clang@12.0.0+optimize+pic+shared arch=darwin-catalina-skylake
```
Note that this is all hidden behind a `--show opt` option to `spack
solve`. Optimization weights are no longer shown by default, but you can
at least inspect them and more easily understand what is going on.
- [x] always show optimization criteria in `clingo` output
- [x] add `opt_criterion()` facts for all optimization criteria
- [x] make display of opt criteria optional in `spack solve`
- [x] rework how optimization criteria are displayed, and add a `--show opt`
option to `spack solve`
CachedCMakePackage is a CMakePackage subclass for using CMake initial
cache. This feature of CMake allows packages to increase reproducibility,
especially between spack builds and manual builds. It also allows
packages to sidestep certain parsing bugs in extremely long cmake
commands, and to avoid system limits on the length of the command line.
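As a hedged illustration of the idea (a made-up package; `initconfig_package_entries` and the `cmake_cache_*` helpers are part of the cached-CMake support, but the names and options below are invented):
```python
from spack import *  # package-file preamble of the era


class Mylib(CachedCMakePackage):
    """Hypothetical package illustrating CMake initial-cache entries."""

    variant("openmp", default=True, description="Enable OpenMP")

    def initconfig_package_entries(self):
        # these entries are written to an initial cache file that is
        # passed to cmake via -C, rather than appended to an
        # ever-longer command line
        return [
            cmake_cache_option("ENABLE_OPENMP", "+openmp" in self.spec),
            cmake_cache_string("MYLIB_DATA_DIR", self.prefix.share),
        ]
```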
Co-authored by: Chris White <white238@llnl.gov>
* Add patch for Intel C++ compiler
- On some machines (in particular MacOSX Catalina), the icpc in some way
utilizes the preprocessor of the associated "developer tools" used by
icpc. This leads to, in some cases, a preprocessor claiming support for
__tuple_element_packs, even though icpc (as of version 21.1) can't
actually parse such code. Just use the MPARK_TUPLE_ELEMENT_PACK impl
with __icc until icpc supports it, to avoid issues with developer tools
that are untested.
- The same patch has been PRed against mpark-variant
In the face of two consecutive spaces in the command line, the compiler wrapper would skip all remaining arguments, causing problems building py-scipy with the Intel compiler. This PR solves the problem.
* Fixed compiler wrapper in the face of extra spaces between arguments
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Backwards incompatible cleanup to target single-tarball-per-arch builds
going forwards.
* Replace per-distro versions with new per-arch builds, and add
url_for_version to avoid specifying per tarball.
* Customise environment setup to avoid adding lib to LD_LIBRARY_PATH.
* Update homepage and licensing URLs.
* Avoid shell interpretation when running textinstall.sh.
* Added NickRF as maintainer.
Use `conflicts` directive whenever possible.
This allows failing early when conflicting variants are used.
Do not silently ignore `+parmetis` variant when `~metis`.
Instead throw an error during concretization.
Simplify the "Makefile.inc" generation.
This will make it easier to add new variants in the future.
* Added version patch for 1.4.0 tag on mpark-variant
Redirected urls to git and github tags.
* Updated to commit hashes
* Update var/spack/repos/builtin/packages/mpark-variant/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Update var/spack/repos/builtin/packages/mpark-variant/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Update var/spack/repos/builtin/packages/mpark-variant/package.py
Co-authored-by: Anthony J Zukaitis <zukaitis@lanl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Original commit message:
This feature of CMake allows packages to increase reproducibility, especially between
Spack- and manual builds. It also allows packages to sidestep certain parsing bugs in
extremely long ``cmake`` commands, and to avoid system limits on the length of the
command line.
Adding:
Co-authored by: Chris White <white238@llnl.gov>
This reverts commit c4f0a3cf6c.
CachedCMakePackage is a specialized class for packages built using CMake initial cache.
This feature of CMake allows packages to increase reproducibility, especially between
Spack- and manual builds. It also allows packages to sidestep certain parsing bugs in
extremely long ``cmake`` commands, and to avoid system limits on the length of the
command line.
Autoconf before 2.70 will erroneously pass ifx's -loopopt argument to the
linker, requiring all packages to use autoconf 2.70 or newer to use ifx.
This is a hotfix enabling ifx to be used in Spack. Instead of bothering
to upgrade autoconf for every package, we'll just strip out the
problematic flag if we're in `ld` mode.
- [x] Add a conditional to the `cc` wrapper to skip `-loopopt` in `ld`
mode. This can probably be generalized in the future to strip more
things (e.g., via an environment variable we can control from
Spack) but it's good enough for now.
- [x] Add a test ensuring that `-loopopt` arguments are stripped in link
mode, but not in compile mode.
Since `lazy_lexicographic_ordering` handles `None` comparison for us, we
don't need to adjust the spec comparators to return empty strings or
other type-specific empty types. We can just leverage the None-awareness
of `lazy_lexicographic_ordering`.
- [x] remove "or ''" from `_cmp_iter` in `Spec`
- [x] remove setting of `self.namespace` to `''` in `MockPackage`
We have been using the `@llnl.util.lang.key_ordering` decorator for specs
and most of their components. This leverages the fact that in Python,
tuple comparison is lexicographic. It allows you to implement a
`_cmp_key` method on your class, and have `__eq__`, `__lt__`, etc.
implemented automatically using that key. For example, you might use
tuple keys to implement comparison, e.g.:
```python
class Widget:
    # author implements this
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e,
        )

    # operators are generated by @key_ordering
    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self, other):
        return self._cmp_key() < other._cmp_key()

    # etc.
```
The issue with simple comparators like this is that we have to build the
tuples *and* generate all the values in them up front. When
implementing comparisons for large data structures, this can be costly.
This PR replaces `@key_ordering` with a new decorator,
`@lazy_lexicographic_ordering`. Lazy lexicographic comparison maps the
tuple comparison shown above to generator functions. Instead of comparing
based on pre-constructed tuple keys, users of this decorator can compare
using elements from a generator. So, you'd write:
```python
@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield self.a
        yield self.b

        def cd_fun():
            yield self.c
            yield self.d

        yield cd_fun
        yield self.e

    # operators are added by the decorator (but are a bit more complex)
```
There are no tuples that have to be pre-constructed, and the generator
does not have to complete. Instead of tuples, we simply make functions
that lazily yield what would've been in the tuple. If a yielded value is
a `callable`, the comparison functions will call it and recursively
compare it. The comparator just walks the data structure like you'd expect
it to.
The ``@lazy_lexicographic_ordering`` decorator handles the details of
implementing comparison operators, and the ``Widget`` implementor only
has to worry about writing ``_cmp_iter``, and making sure the elements in
it are also comparable.
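Conceptually, the equality walk looks something like this simplified sketch (not Spack's exact implementation):
```python
import itertools

_sentinel = object()

def lazy_eq(a_iter, b_iter):
    # Walk both generators in lockstep, stopping at the first difference;
    # neither generator is ever fully materialized into a tuple.
    for a, b in itertools.zip_longest(a_iter, b_iter, fillvalue=_sentinel):
        if callable(a) and callable(b):
            # Nested structure: recurse into the sub-generators lazily.
            if not lazy_eq(a(), b()):
                return False
        elif a != b:
            return False
    return True
```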
Using this PR shaves another 1.5 sec off the runtime of `spack buildcache
list`, and it also speeds up Spec comparison by about 30%. The runtime
improvement comes mostly from *not* calling `hash()` on the results of `_cmp_iter()`.
* New package py-argh
* Fixed deps
* Changed setuptools type
* Update var/spack/repos/builtin/packages/py-argh/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Make -j flag less exceptional
The -j flag in spack behaves differently from make, ctest, ninja, etc.,
because it caps the number of jobs at an arbitrary number, 16.
Spack will behave like other tools if `spack install` uses a reasonable
default, and `spack install -j <num>` *overrides* that default.
This will be particularly useful for Spack usage outside of a traditional
HPC context and for HPC centers that encourage users to compile on
login nodes with many cores instead of on compute nodes, which has
become increasingly common as individual nodes have more cores.
This maintains the existing default value of min(num_cpus, 16). However,
as things stand, Spack does a poor job of determining the number of
CPUs on Linux, since it doesn't take cgroups into account. This is
particularly problematic when using distributed builds with Slurm. This PR
also introduces `spack.util.cpus.cpus_available()` to consolidate
knowledge on determining the number of available cores, and improves
core detection for Linux. This should also improve core detection for
Docker/Kubernetes, which also use cgroups.
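The helper boils down to something like the following sketch (the real `spack.util.cpus.cpus_available()` may differ in detail):
```python
import multiprocessing
import os

def cpus_available():
    """CPUs usable by this process, honoring affinity masks and cpuset
    cgroups on Linux, unlike a bare multiprocessing.cpu_count()."""
    try:
        return len(os.sched_getaffinity(0))
    except (AttributeError, NotImplementedError):
        # Platforms without sched_getaffinity (e.g., macOS).
        return multiprocessing.cpu_count()
```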
This commit extends the API of the `__call__` method of the
`SpackCommand` class to permit passing global arguments
like those interposed between the main "spack" command
and the subsequent subcommand.
The functionality is used to fix an issue where running
`spack -e . location -b some_package`
ends up printing the name of the environment instead of
the build directory of the package, because the location arg
parser also stores this value as `arg.env`.
fixes #22294
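With the extended API, a test can inject those global arguments; a sketch, assuming the new keyword is named `global_args`:
```python
from spack.main import SpackCommand

location = SpackCommand("location")

# Simulates `spack -e . location -b some_package`: the -e flag is a
# global argument interposed before the subcommand.
output = location("-b", "some_package", global_args=["-e", "."])
```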
A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
the custom install tree not being taken into account if clingo
had to be bootstrapped.
This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.
* Fixed a bug in the DiHydrogen package where the `legacy` variant was
renamed to `distconv` but the change wasn't fully propagated. Cleaned up
the openmp variants on the BLAS library packages in DiHydrogen and
Elemental. Extended support for Aluminum v1.0 in LBANN, Hydrogen, and
DiHydrogen. Fixed a when clause in the LBANN dependencies.
* Removed the upper range limit for the Aluminum library dependence
* Update var/spack/repos/builtin/packages/dihydrogen/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Remote buildcache indices need to be stored in a place that does not
require writing to the Spack prefix. Move them from the install_tree to
the misc_cache.
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it could happen that, for the same node,
two different version_weight/2 atoms were in the answer set,
each referring to a different spec with the same version,
and their weights would sum up.
This led to unexpected results, like preferring to build a
new version of an external package when the external version
was older.
* Make stage use concrete specs from environment
Same as in https://github.com/spack/spack/pull/21642, the idea is that
we want to easily stage a package that fails to build in a complex
environment. Instead of making the user create a spec by hand (basically
transforming all the rules in the environment manifest into a spec,
defeating the purpose of the environment...), use the provided spec as a
filter for the already concretized specs. This also speeds things up,
since we don't have to reconcretize.
This adds MPICC=/path/to/intel-oneapi/mpicc etc. to the dependent's build stage, enabling the use of the compiler wrappers.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
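A simplified sketch of the hook involved; the class body is heavily reduced and the exact bin directory inside the oneAPI prefix is an assumption:
```python
from spack import *

class IntelOneapiMpi(Package):  # heavily simplified
    def setup_dependent_build_environment(self, env, dependent_spec):
        # Expose the MPI compiler wrappers to dependents' build stages.
        bindir = self.prefix.bin  # actual oneAPI layout may differ
        env.set("MPICC", join_path(bindir, "mpicc"))
        env.set("MPICXX", join_path(bindir, "mpicxx"))
        env.set("MPIF77", join_path(bindir, "mpif77"))
        env.set("MPIF90", join_path(bindir, "mpif90"))
```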
* clingo: modify recipe for bootstrapping
Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping
* Remove option that breaks on linux
* Give more hints for the current Python
* Disable CLINGO_BUILD_PY_SHARED for bootstrapping
* bootstrapping: try to detect the current Python from the standard library
This is much faster than calling external executables (see the sketch after this list)
* Fix compatibility with Python 2.6
* Give hints on which compiler and OS to use when bootstrapping
This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on macOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.
* Use spec_for_current_python to constrain module requirement
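The detection mentioned above can avoid spawning any external process; a minimal sketch of the idea (the real `spec_for_current_python` helper may differ):
```python
import sys

def spec_for_current_python():
    # Bootstrapping only cares about major.minor of the interpreter
    # Spack itself is running under, so that is all we encode.
    version = ".".join(str(part) for part in sys.version_info[:2])
    return "python@{0}".format(version)
```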
* ASP-based solver: avoid adding values to variants when they're set
fixes #22533, fixes #21911
Added a rule that prevents any value from slipping into a variant when
the variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.
* Ensure disjoint sets have clear semantics for external packages
fixes#22547
SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.
* py-dask-glm: Push again for testing with git.
* py-dask-glm: Fixed the previously flagged dependency to type='build'.
* py-dask-glm: Set depends_on to type='build' for packages used only when building the documentation.
* py-dask-glm: Fix type of depends_on (py-scikit-learn)
Co-authored-by: miura <miura@fx7-pg01.cm.cluster>
Notice that this function, if you define it, requires a result object (generated by
``run()``), a monitor (if you want to send results), and a boolean ``overwrite`` used
to check whether a result exists first, and to skip writing if the result exists and
``overwrite`` is False. Also notice that since we already saved these files to the
analyzer metadata folder, we return early if a monitor isn't defined, because this
function serves to send results to the monitor. If you haven't saved anything to the
analyzer metadata folder yet, you might want to do that here. You should also use
``tty.info`` to give the user a message of "Writing result to $DIRNAME."
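A hedged sketch of such a function; the signature, the ``output_dir`` attribute, and the monitor call are assumptions based on the description above, not Spack's verbatim API:
```python
import os

import llnl.util.tty as tty

def save_result(self, result, monitor=None, overwrite=False):
    outfile = os.path.join(self.output_dir, "result.json")  # hypothetical path
    if os.path.exists(outfile) and not overwrite:
        return
    if not monitor:
        # Files were already written to the analyzer metadata folder by
        # run(); this hook only forwards results to a monitor.
        return
    tty.info("Writing result to {0}".format(self.output_dir))
    monitor.send_analyze_metadata(self, result)  # hypothetical call
```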
.. _writing-commands:

----------------
Writing commands
----------------

Whenever you add/remove/rename a command or flags for an existing command,
make sure to update Spack's bash tab completion script.