Revamp the timer so we always have a designated begin and end.
Fix a bug where the phase timer was stopped before the phase started,
resulting in incorrect timing reports in timers.json.
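As a rough sketch of the intended shape (hypothetical names, not Spack's actual timer API), a phase timer with a designated begin and end can be modeled as a context manager, so the stop can never precede the start:
```python
import time
from contextlib import contextmanager


class PhaseTimer:
    """Minimal sketch: every phase gets an explicit begin and end."""

    def __init__(self):
        self.durations = {}

    @contextmanager
    def phase(self, name):
        begin = time.time()  # designated begin
        try:
            yield
        finally:
            # designated end: always recorded, even if the phase raises
            self.durations[name] = time.time() - begin


timer = PhaseTimer()
with timer.phase("install"):
    pass  # ... run the phase ...
```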
The `intel` compiler at versions > 20 is provided by the `intel-oneapi-compilers-classic`
package (a thin wrapper around the `intel-oneapi-compilers` package), and the `oneapi`
compiler is provided by the `intel-oneapi-compilers` package.
Prior to this work, neither of these compilers could be bootstrapped by Spack as part of
an install with `install_missing_compilers: True`.
Changes made to make these two packages bootstrappable:
1. The `intel-oneapi-compilers-classic` package now includes a `bin` directory with symlinks
to the compiler executables, not just logical pointers in Spack.
2. Spack can look for bootstrapped compilers in directories other than `$prefix/bin`,
defined on a per-package basis (see the sketch after the checklist below).
3. `intel-oneapi-compilers` specifies a non-default search directory for the
compiler executables.
4. The `spack.compilers` module can now make more advanced associations between
packages and compilers, not just simple name translations.
5. Spack support for lmod hierarchies accounts for differences between package
names and the associated compiler names for `intel-oneapi-compilers/oneapi`,
`intel-oneapi-compilers-classic/intel@20:`, `llvm+clang/clang`, and
`llvm-amdgpu/rocmcc`.
- [x] full end-to-end testing
- [x] add unit tests
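To illustrate items 2 and 3, here is a minimal sketch of what a package might declare (the attribute name is hypothetical; the real hook in Spack may differ):
```python
from spack.package import *


class IntelOneapiCompilers(Package):
    """Sketch of a package whose compiler executables do not live in
    $prefix/bin, so it advertises where Spack should search instead."""

    # Hypothetical attribute: prefix-relative directories to search
    # when bootstrapping this package as a compiler.
    compiler_search_dirs = ["compiler/latest/linux/bin",
                            "compiler/latest/linux/bin/intel64"]
```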
Compilers and linkers optimize string constants for space by aliasing
them when one is a suffix of another. For gcc / binutils this happens
already at -O1, due to -fmerge-constants. This means that we have
to take care during relocation to always preserve a certain length
of the suffix of those prefixes that are C-strings.
In this commit we pick length 7 as a safe suffix length, assuming the
suffix is typically the 7 characters from the hash (i.e. random), so
it's unlikely to alias with any string constant used in the sources.
In general we now pad shortened strings from the left with leading
directory separators, but in the case of C-strings that are much shorter
and don't share a common suffix (due to projections), we do allow
shrinking the C-string, appending a null, and retaining the old part
of the prefix.
Also when rewiring, we ensure that the new hash preserves the last
7 bytes of the old hash.
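To make the scheme concrete, here is a minimal sketch (hypothetical paths; not the actual relocation code) of replacing a long prefix with a shorter one, padding from the left so both the length and the 7-byte suffix are preserved:
```python
def pad_replacement(old_prefix: bytes, new_prefix: bytes) -> bytes:
    """Left-pad a shorter replacement prefix with '/' so it has the
    same length as the original, leaving the trailing bytes (e.g. the
    last 7 hash characters) intact."""
    pad = len(old_prefix) - len(new_prefix)
    assert pad >= 0, "replacement must not be longer than the original"
    return b"/" * pad + new_prefix


old = b"/spack/opt/linux-x86_64/gcc-12.1.0/zlib-1.2.12-abcdefg"
new = b"/opt/view/zlib-abcdefg"
replacement = pad_replacement(old, new)
assert len(replacement) == len(old)
assert replacement[-7:] == old[-7:]  # hash suffix preserved
```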
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Currently, compiler flags and variants are inconsistent: compiler flags set for a
package are inherited by its dependencies, while variants are not. We should have these
be consistent by allowing propagation to be enabled or disabled for both variants
and compiler flags.
- [x] Make new (spec language) operators
- [x] Apply operators to variants and compiler flags
- [x] Conflicts currently result in an unsatisfiable spec
(i.e., you can't propagate two conflicting values)
What I propose is using two of the currently used sigils to signal that the variant
or compiler flag will be propagated to dependencies:
Example syntax:
- `package ++variant`
enabled variant that will be propagated to dependencies
- `package +variant`
enabled variant that will NOT be propagated to dependencies
- `package ~~variant`
disabled variant that will be propagated to dependencies
- `package ~variant`
disabled variant that will NOT be propagated to dependencies
- `package cflags==True`
`cflags` will be propagated to dependencies
- `package cflags=True`
`cflags` will NOT be propagated to dependencies
The syntax for string-valued variants is similar to that for compiler flags.
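For example (hypothetical package and variant, just to show the intent):
```console
$ spack install hdf5 ++debug   # debug is propagated to all dependencies
$ spack install hdf5 +debug    # debug applies to hdf5 only
```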
This PR introduces breadth-first traversal, and moves depth-first traversal
logic out of Spec's member functions, into `traverse.py`.
It introduces a high-level API with three main methods:
```python
spack.traverse.traverse_edges(specs, kwargs...)
spack.traverse.traverse_nodes(specs, kwargs...)
spack.traverse.traverse_tree(specs, kwargs...)
```
with the usual `root`, `order`, `cover`, `direction`, `deptype`, `depth`, `key`,
`visited` kwargs for the first two.
What's new is that `order="breadth"` is added for breadth-first traversal.
The lower level API is not exported, but is certainly useful for advanced use
cases. The lower level API includes visitor classes for direction reversal and
edge pruning, which can be used to create more advanced traversal methods,
especially useful when the `deptype` is not constant but depends on the node
or depth.
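A sketch of typical usage (assuming `specs` is a list of concrete `Spec` objects):
```python
import spack.traverse as traverse

# Breadth-first over unique nodes, yielding (depth, spec) pairs.
for depth, spec in traverse.traverse_nodes(
    specs, order="breadth", depth=True, deptype=("link", "run")
):
    print("  " * depth + spec.name)
```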
---
There are a couple of nice use cases for breadth-first traversal:
- Sometimes roots have to be handled differently (e.g. follow build edges of
roots but not of deps). BFS ensures that root nodes are always discovered at
depth 0, instead of at some depth >= 1 as a dep of another root.
- When printing a tree, it would be nice to reduce indent levels so it fits in the
terminal, and ensure that e.g. `zlib` is not printed at indent level 10 as a
dependency of a build dep of a build dep -- rather if it's a direct dep of my
package, I want to see it at depth 1. This basically requires one breadth-first
traversal to construct a tree, which can then be printed with depth-first traversal.
- In environments in general, it's sometimes inconvenient to have a double
loop: first over the roots then over each root's deps, and maintain your own
`visited` set outside. With BFS, you can simply init the queue with the
environment root specs and it Just Works. [Example here](3ec7304699/lib/spack/spack/environment/environment.py (L1815-L1816))
Currently, many tests hardcode to older versions of gcc for comparisons of
concretization among compiler versions. Those versions are too old to concretize for
`aarch64`-family targets, which leads to failing tests on `aarch64`.
This PR fixes those tests by updating the compiler versions used for testing.
Currently, many tests hardcode the expected architecture result in concretization to the
`x86_64` family of architectures.
This PR generalizes the tests that can be generalized, to cover multiple architecture
families. For those that test specific relationships among `x86_64`-family targets, it
ensures that concretization uses the `x86_64`-family targets in those cases.
Currently, many tests rely on the fact that `AutotoolsPackage` imposes no dependencies
on the inheriting package. That is not true on `aarch64`-family architectures.
This PR ensures that the fact that `AutotoolsPackage` on `aarch64` pulls in a dependency
on `gnuconfig` is ignored when testing for the appropriate relationships among dependencies.
Additionally, 5 tests currently prompt the user for input when `gpg` is available in the
user's path; this PR fixes that issue. And 7 tests currently fail when the user has a
Yubikey available; this PR fixes the incorrect `gpg` argument causing those failures.
This commit extends the DSL that can be used in packages
to allow declaring that a package uses different build-systems
under different conditions.
It requires each spec to have a `build_system` single-valued
variant. The variant can be used in many contexts to query, manipulate
or select the build system associated with a concrete spec.
The knowledge to build a package has been moved out of the
PackageBase hierarchy, into a new Builder hierarchy. Customization
of the default behavior for a given builder can be obtained by
coding a new derived builder in package.py.
The "run_after" and "run_before" decorators are now applied to
methods on the builder. They can also incorporate a "when="
argument to specify that a method is run only when certain
conditions apply.
For packages that do not define their own builder, forwarding logic
is added between the builder and package (methods not found in one
will be retrieved from the other); this PR is expected to be fully
backwards compatible with unmodified packages that use a single
build system.
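For illustration, a package.py can customize the default builder roughly like this (a sketch following the new Builder hierarchy; details such as the module path may differ):
```python
from spack.package import *
import spack.build_systems.autotools


class Example(AutotoolsPackage):
    """Package declaration: metadata, versions, variants."""

    variant("shared", default=True, description="Build shared libraries")


class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
    """Builder declaration: how to build the package."""

    def configure_args(self):
        return self.pkg.enable_or_disable("shared")

    # Runs only when the condition applies to the concrete spec.
    @run_after("install", when="+shared")
    def check_shared_libraries(self):
        pass  # e.g. sanity-check the installed shared libraries
```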
Scan the text files for relocatable prefixes *before* creating a tarball,
to reduce the amount of work to be done during install from binary
cache.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Currently, module changes from `setup_dependent_package` are applied only to the module of the package class, but not to any parent classes' modules between the package class module and `spack.package_base`.
In this PR, we create a custom class to accumulate module changes, and apply those changes to each class that requires it. This design allows us to code for a single module, while applying the changes to multiple modules as needed under the hood, without requiring the user to reason about package inheritance.
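A minimal sketch of the idea (names are illustrative): record module-level assignments once, then replay them onto every module that needs them.
```python
class ModuleChangePropagator:
    """Sketch: accumulate module-level changes made by
    setup_dependent_package and apply them to the package class's
    module and its parent classes' modules."""

    def __init__(self, modules):
        super().__setattr__("_modules", modules)
        super().__setattr__("_changes", {})

    def __setattr__(self, name, value):
        self._changes[name] = value  # record, do not apply yet

    def propagate_changes(self):
        for module in self._modules:
            for name, value in self._changes.items():
                setattr(module, name, value)
```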
Add a post-install step which runs (only) on Windows to modify an
install prefix, adding symlinks to all dependency libraries.
Windows does not have the same concept of RPATHs as Linux, but when
resolving symbols it checks the local directory for dependency
libraries; by placing a symlink to each dependency library in the
directory with the library that needs it, the package can then
use all Spack-built dependencies.
Note:
* This collects dependency libraries based on Package.rpath, which
includes only direct link dependencies
* There is no examination of libraries to check what dependencies
they require, so all libraries of dependencies are symlinked
into any directory of the package which contains libraries
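Conceptually, the post-install step does something like this sketch (simplified; the real code works from Package.rpath and Spack's symlink utilities):
```python
import os


def link_dependency_libraries(pkg_lib_dir, dependency_libraries):
    """Sketch: place a symlink to each dependency library next to the
    libraries that need it, emulating RPATH-style lookup on Windows."""
    for lib in dependency_libraries:
        link = os.path.join(pkg_lib_dir, os.path.basename(lib))
        if not os.path.exists(link):
            os.symlink(lib, link)
```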
Ensure that build tools with module-level commands in Spack use
the version built as part of their build graph, if one exists.
This now also applies to mesa, scons, cmake and ctest: out-of-graph
versions of these tools on the path will not be found unless
added as externals.
This bug appeared because a new version of rocprim needs cmake
3.16. While I have cmake 3.14 in my path, I had added an external
cmake 3.20 to the DAG, but 3.14 was still used to configure
rocprim, causing it to fail. As far as I can tell, all the build
tools added in build_environment.py had this problem, despite the
fact that they should have been resolving these tools by name
with a path search and finding the one in the DAG that way. I'm
still investigating why the path searching and Executable logic
didn't do that, but this change makes three of the build systems
much more explicit, and leaves only gmake and ninja as dependencies
from out in the system, while ensuring the version in the DAG is
used if there is one.
The additional sqlite version is included to perturb the hash of
python, working around a relocation bug which will be fixed in a
subsequent PR.
The current use of git refs as versions requires a search algorithm to pick the right matching version based on the tags in the git history of the package.
This is less than ideal for the use case where users already know the specific version they want associated with the git ref. This PR adds a new version syntax `[package]@[ref]=[version]` to allow users to specify exactly which version a given git ref should be treated as.
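For example (hypothetical package and commit hash):
```console
$ spack install example@8d1a92e631e0ac1a8b53bd30b8c5b8b6aa4a5d16=1.2.3
```
Here the commit is treated as version 1.2.3 for all comparisons and constraints.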
Black will automatically fix a lot of the exceptions we previously allowed for
directives, so we don't need them in our custom `flake8_formatter` anymore.
- [x] remove `E501` (long line) exceptions for directives from `flake8_formatter`,
as they won't help us now.
- [x] Refine exceptions for long URLs in the `flake8_formatter`.
- [x] Adjust the mock `flake8-package` to exhibit the exceptions we still allow.
- [x] Update style tests for new `flake8-package`.
- [x] Blacken style test.
Many noqa's in the code are no longer necessary now that the column limit is 99
characters. Others can easily be eliminated, and still more can just be made more
specific if they do not have to do with line length.
The only E501's still in the code are in the tests for `spack.util.path` and the tests
for `spack style`.
The goal of this PR is to make clearer where we need a package object in Spack, as opposed to a package class.
We currently instantiate a lot of package objects when we could make do with a class. We should use the class
when we only need metadata, and we should only instantiate and use an instance of `PackageBase` at build time.
Modifications:
- [x] Remove the `spack.repo.get` convenience function (which was used in many places, and not really needed)
- [x] Use `spack.repo.path.get_pkg_class` wherever possible
- [x] Try to route most of the need for `spack.repo.path.get` through `Spec.package`
- [x] Introduce a non-data descriptor, that can be used as a decorator, to have "class level properties"
- [x] Refactor unit tests that had to be modified to reduce code duplication
- [x] `Spec.package` and `Repo.get` now require a concrete spec as input
- [x] Remove `RepoPath.all_packages` and `Repo.all_packages`
Explicitly import package utilities in all packages, and handle the corresponding fallout.
This includes:
* rename `spack.package` to `spack.package_base`
* rename `spack.pkgkit` to `spack.package`
* update all packages in builtin, builtin_mock and tutorials to include `from spack.package import *`
* update spack style
* ensure packages include the import
* automatically add the new import and remove any/all imports of `spack` and `spack.pkgkit`
from packages when using `--fix`
* add support for type-checking packages with mypy when SPACK_MYPY_CHECK_PACKAGES
is set in the environment
* fix all type checking errors in packages in spack upstream
* update spack create to include the new imports
* update spack repo to inject the new import, injection persists to allow for a deprecation period
Original message below:
As requested by @adamjstewart, update all packages to use pkgkit. I ended up using isort to do this,
so repro is easy:
```console
$ isort -a 'from spack.pkgkit import *' --rm 'spack' ./var/spack/repos/builtin/packages/*/package.py
$ spack style --fix
```
There were several line spacing fixups caused either by space manipulation in isort or by packages
that haven't been touched since we added requirements, but there are no functional changes in here.
* [x] add config to isort to make sure this is maintained going forward
Currently, environments can either be concretized fully together or fully separately. This works well for users who create environments for interoperable software and can use `concretizer:unify:true`. It does not allow environments with conflicting software to be concretized for maximal interoperability.
The primary use-case for this is facilities providing system software. Facilities provide multiple MPI implementations, but all software built against a given MPI ought to be interoperable.
This PR adds a concretization option `concretizer:unify:when_possible`. When this option is used, Spack will concretize specs in the environment separately, but will optimize for minimal differences in overlapping packages.
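For example, the option can be enabled in a hypothetical environment `myenv` via Spack's standard config command:
```console
$ spack -e myenv config add concretizer:unify:when_possible
```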
* Add a level of indirection to root specs
This commit introduces the "literal" atom, which comes with
a few different "arities". The unary "literal" contains an
integer that is the ID of a spec literal. Other "literals"
contain information on the requests made by that literal ID. For
instance zlib@1.2.11 generates the following facts:
literal(0,"root","zlib").
literal(0,"node","zlib").
literal(0,"node_version_satisfies","zlib","1.2.11").
This should help with solving large environments "together
where possible", since later literals can now be solved
together in batches.
* Add a mechanism to relax the number of literals being solved
* Modify spack solve to display the new criterion
Since the new criterion is above all the build criteria,
we need to modify the way we display the output.
Originally done by Greg in #27964 and cherry-picked
to this branch by the co-author of the commit.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Inject reusable specs into the solve
Instead of coupling the PyclingoDriver() object with
spack.config, inject the concrete specs that can be
reused.
A module-level function takes care of reading from
the store and the buildcache.
* spack solve: show output of multi-rounds
* add tests for best-effort co-concretization
* Enforce having at least a literal being solved
Co-authored-by: Greg Becker <becker33@llnl.gov>
With the original DAG hash, we did not store build dependencies in the database, but
with the full DAG hash, we do. Previously, we'd never tell the concretizer about build
dependencies of things used by hash, because we never had them. Now, we have to avoid
telling the concretizer about them, or they'll unnecessarily constrain build
dependencies for new concretizations.
- [x] Make database track all dependencies included in the `dag_hash`
- [x] Modify spec_clauses so that build dependency information is optional
and off by default.
- [x] `spack diff` asks `spec_clauses` for build dependencies for completeness
- [x] Modify `concretize.lp` so that reuse optimization doesn't affect fresh
installations.
- [x] Modify concretizer setup so that it does *not* prioritize installed versions
over package versions. We don't need this with reuse, so they're low priority.
- [x] Fix `test_installed_deps` for full hash and new concretizer (does not work
for old concretizer with full hash -- leave this for later if we need it)
- [x] Move `test_installed_deps` mock packages to `builtin.mock` for easier debugging
with `spack -m`.
- [x] Fix `test_reuse_installed_packages_when_package_def_changes` for full hash
This is an amended version of https://github.com/spack/spack/pull/24894 (reverted in https://github.com/spack/spack/pull/29603). https://github.com/spack/spack/pull/24894
broke all instances of `spack external find` (namely when it is invoked without arguments/options)
because it was mandating the presence of a file which most systems would not have.
This allows `spack external find` to proceed if that file is not present and adds tests for this.
- [x] Add a test which confirms that `spack external find` successfully reads a manifest file
if present in the default manifest path
--- Original commit message ---
Adds `spack external read-cray-manifest`, which reads a json file that describes a
set of package DAGs. The parsed results are stored directly in the database. A user
can see these installed specs with `spack find` (like any installed spec). The easiest
way to use them right now as dependencies is to run
`spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described
in the file to Spack's installation DB and will also install described compilers to the
compilers configuration (the expected format of the file, including examples, is described in this PR)
* Database records now may include an "origin" (the command added in this PR
registers the origin as "external-db"). In the future, it is assumed users may want
to be able to treat installs registered with this command differently (e.g. they may
want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec
is concrete
* I don't think the hashes of installed-and-concrete specs should change and this
was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied
(external specs may mention compilers that are not registered, and without this
change they would fail in `normalize` when calling `validate_or_raise`)
* it might be that this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do
something like "uninstall all packages added with `spack read-external-db`")
* This is now possible with `spack uninstall --all --origin=external-db` (this will
remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
* Allow packages to add a 'submodules' property that determines when ad-hoc Git-commit-based versions should initialize submodules
* add support for ad-hoc git-commit-based versions to instantiate submodules if the associated package has a 'submodules' property and it indicates this should happen for the associated spec
* allow a Package-level submodule request to influence all explicitly defined version() calls in the Package
* skip a test on Windows which fails because of long paths
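For illustration (hypothetical package), a package can opt in like this; the property may also be a callable that decides per-spec:
```python
from spack.package import *


class Example(Package):
    """Sketch: a git-based package opting into submodule initialization."""

    git = "https://example.com/example.git"

    # Ad-hoc git-commit-based versions of this package (example@<sha>)
    # will initialize submodules because of this property.
    submodules = True
```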
Spack added support in #24639 for ad-hoc Git-commit-hash-based
versions: a user can install a package `X@hash`, where `X` is a package
that stores its source code in a Git repository, and the hash refers
to a commit in that repository which is not recorded as an explicit
version in the package.py file for `X`.
A couple of issues were found relating to this:
* If an environment defines an alternative package repo (i.e. with
repos.yaml), and spack.yaml contains user Specs with ad-hoc
Git-commit-hash-based versions for packages in that repo,
then as part of retrieving the data needed for version comparisons
it will attempt to retrieve the package before the environment's
configuration is instantiated.
* The bookkeeping information added to compare ad-hoc git versions was
being stripped from Specs during concretization (such that user
Specs which succeeded before concretizing would then fail after)
This addresses the issues:
* The first issue is resolved by deferring access to the associated
Package until the versions are actually compared to one another.
* The second issue is resolved by ensuring that the Git bookkeeping
information is explicitly applied to Specs after they are concretized.
This also:
* Resolves an ambiguity in the mock_git_version_info fixture used to
create a tree of Git commits and provide a list where each index
maps to a known commit.
* Isolates the cache used for Git repositories in tests using the
mock_git_version_info fixture
* Adds a TODO which points out that if the remote Git repository
overwrites tags, that Spack will then fail when using
ad-hoc Git-commit-hash-based versions
Allow declaring possible values for variants with an associated condition. If the variant takes one of those values, the condition is imposed as a further constraint.
The idea of this PR is to implement part of the mechanisms needed for modeling [packages with multiple build-systems]( https://github.com/spack/seps/pull/3). After this PR the build-system directive can be implemented as:
```python
variant(
    'build-system',
    default='cmake',
    values=(
        'autotools',
        conditional('cmake', when='@X.Y:')
    ),
    description='...',
)
```
Modifications:
- [x] Allow conditional possible values in variants
- [x] Add a unit-test for the feature
- [x] Add documentation
* tests for rewiring pure specs to spliced specs
* relocate text, binaries, and links
* using llnl.util.symlink for Windows compatibility
Note: This does not include CLI hooks for relocation.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Reduces the number of stat calls to a bare minimum:
- Single pass over src prefixes
- Handle projection clashes in memory
Symlinked directories in the src prefixes are now conditionally
transformed into directories with symlinks in the dst dir. Notably
`intel-mkl`, `cuda` and `qt` have top-level symlinked directories that
previously resulted in empty directories in the view. We now avoid
cycles and possible exponential blowup by only expanding symlinks that:
- point to dirs deeper in the folder structure;
- are at a fixed depth of 2.
The number of commit characters in patch files fetched from GitHub can change,
so we should use `full_index=1` to enforce full commit hashes (and a stable
patch `sha256`).
Similarly, URLs for branches like `master` don't give us stable patch files,
because branches are moving targets. Use specific tags or commits for those.
- [x] update all github patch URLs to use `full_index=1`
- [x] don't use `master` or other branches for patches
- [x] add an audit check and a test for `?full_index=1`
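A directive following the new rules looks like this sketch (placeholder repository and hashes):
```python
patch(
    # full_index=1 makes GitHub render full commit hashes in the patch,
    # so its sha256 checksum stays stable over time.
    "https://github.com/example/project/commit/<full-commit-sha>.patch?full_index=1",
    sha256="<sha256-of-the-patch-file>",
    when="@1.2.3",
)
```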
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Adds `spack external read-cray-manifest`, which reads a json file that describes a set of package DAGs. The parsed results are stored directly in the database. A user can see these installed specs with `spack find` (like any installed spec). The easiest way to use them right now as dependencies is to run `spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described in the file to Spack's installation DB and will also install described compilers to the compilers configuration (the expected format of the file, including examples, is described in this PR)
* Database records now may include an "origin" (the command added in this PR registers the origin as "external-db"). In the future, it is assumed users may want to be able to treat installs registered with this command differently (e.g. they may want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec is concrete
* I don't think the hashes of installed-and-concrete specs should change and this was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied (external specs may mention compilers that are not registered, and without this change they would fail in `normalize` when calling `validate_or_raise`)
* it might be that this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do something like "uninstall all packages added with `spack read-external-db`")
* This is now possible with `spack uninstall --all --origin=external-db` (this will remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
CMake - Windows Bootstrap (#25825)
Remove hardcoded cmake compiler (#26410)
Revert breaking cmake changes
Ensure no autotools on Windows
Perl on Windows (#26612)
Python source build windows (#26313)
Reconfigure sysconf for Windows
Python2.6 compatibility
Fixup new sbang tests for Windows
Ruby support (#28287)
Add NASM support (#28319)
Add mock Ninja package for testing
fixes #29446
The new setup_*_environment functions have been falling back
to calling the old functions and warning the user since #11115.
This commit removes the fallback behavior and any use of:
- setup_environment
- setup_dependent_environment
in the codebase
* Add sticky variants
* Add unit tests for sticky variants
* Add documentation for sticky variants
* Revert "Revert 19736 because conflicts are avoided by clingo by default (#26721)"
This reverts commit 33ef7d57c1.
* Add stickiness to "allow-unsupported-compiler"
* Use pip to bootstrap pip
* Bootstrap wheel from source
* Update PythonPackage to install using pip
* Update several packages
* Add wheel as base class dep
* Build phase no longer exists
* Add py-poetry package, fix py-flit-core bootstrapping
* Fix isort build
* Clean up many more packages
* Remove unused import
* Fix unit tests
* Don't directly run setup.py
* Typo fix
* Remove unused imports
* Fix issues caught by CI
* Remove custom setup.py file handling
* Use PythonPackage for installing wheels
* Remove custom phases in PythonPackages
* Remove <phase>_args methods
* Remove unused import
* Fix various packages
* Try to test Python packages directly in CI
* Actually run the pipeline
* Fix more packages
* Fix mappings, fix packages
* Fix dep version
* Work around bug in concretizer
* Various concretization fixes
* Fix gitlab yaml, packages
* Fix typo in gitlab yaml
* Skip more packages that fail to concretize
* Fix? jupyter ecosystem concretization issues
* Solve Jupyter concretization issues
* Prevent duplicate entries in PYTHONPATH
* Skip fenics-dolfinx
* Build fewer Python packages
* Fix missing npm dep
* Specify image
* More package fixes
* Add backends for every from-source package
* Fix version arg
* Remove GitLab CI stuff, add py-installer package
* Remove test deps, re-add install_options
* Function declaration syntax fix
* More build fixes
* Update spack create template
* Update PythonPackage documentation
* Fix documentation build
* Fix unit tests
* Remove pip flag added only in newer pip
* flux: add explicit dependency on jsonschema
* Update packages that have been added since this was branched off of develop
* Move Python 2 deprecation to a separate PR
* py-neurolab: add build dep on py-setuptools
* Use wheels for pip/wheel
* Allow use of pre-installed pip for external Python
* pip -> python -m pip
* Use python -m pip for all packages
* Fix py-wrapt
* Add both platlib and purelib to PYTHONPATH
* py-pyyaml: setuptools is needed for all versions
* py-pyyaml: link flags aren't needed
* Appease spack audit packages
* Some build backend is required for all versions, distutils -> setuptools
* Correctly handle different setup.py filename
* Use wheels for py-tomli to avoid circular dep on py-flit-core
* Fix busco installation procedure
* Clarify things in spack create template
* Test other Python build backends
* Undo changes to busco
* Various fixes
* Don't test other backends