We have been using the `@llnl.util.lang.key_ordering` decorator for specs and most of their components. It leverages the fact that in Python, tuple comparison is lexicographic: you implement a `_cmp_key` method on your class, and `__eq__`, `__lt__`, etc. are implemented automatically using that key. For example, you might use tuple keys to implement comparison like this:

```python
class Widget:
    # author implements this
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e,
        )

    # operators are generated by @key_ordering
    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self, other):
        return self._cmp_key() < other._cmp_key()

    # etc.
```

The issue with this approach is that even for simple comparisons, we have to build the tuples *and* generate all the values in them up front. When implementing comparisons for large data structures, this can be costly.

This PR replaces `@key_ordering` with a new decorator, `@lazy_lexicographic_ordering`. Lazy lexicographic comparison maps the tuple comparison shown above to generator functions. Instead of comparing based on pre-constructed tuple keys, users of this decorator can compare using elements from a generator. So, you'd write:

```python
@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield self.a
        yield self.b

        def cd_fun():
            yield self.c
            yield self.d

        yield cd_fun
        yield self.e

    # operators are added by the decorator (but are a bit more complex)
```

There are no tuples that have to be pre-constructed, and the generator does not have to run to completion. Instead of tuples, we simply write functions that lazily yield what would have been in the tuple. If a yielded value is a `callable`, the comparison functions will call it and recursively compare the values it yields. The comparator just walks the data structure the way you'd expect it to.

The `@lazy_lexicographic_ordering` decorator handles the details of implementing the comparison operators; the `Widget` implementor only has to worry about writing `_cmp_iter` and making sure the elements in it are also comparable.

Using this PR shaves another 1.5 sec off the runtime of `spack buildcache list`, and it also speeds up Spec comparison by about 30%. The runtime improvement comes mostly from *not* calling `hash()` in `_cmp_iter()`.
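To make the mechanics concrete, here is a minimal sketch of how such a decorator can implement the lazy, recursive comparison described above. It is only an illustration, not Spack's actual code: the helper names `_lazy_eq`, `_lazy_lt`, and `_MISSING` are invented for this sketch, and Spack's real implementation may differ in its details and handle more cases.

```python
import itertools

_MISSING = object()  # sentinel for generators of unequal length


def _lazy_eq(lgen, rgen):
    """Lazily test two _cmp_iter generators for equality, recursing into callables."""
    for left, right in itertools.zip_longest(lgen, rgen, fillvalue=_MISSING):
        if left is _MISSING or right is _MISSING:
            return False  # one sequence ran out first, so they differ
        if callable(left) and callable(right):
            if not _lazy_eq(left(), right()):
                return False
        elif left != right:
            return False
    return True


def _lazy_lt(lgen, rgen):
    """Lazily compare two _cmp_iter generators lexicographically."""
    for left, right in itertools.zip_longest(lgen, rgen, fillvalue=_MISSING):
        if left is _MISSING:
            return True   # left is a strict prefix of right, so it is "less"
        if right is _MISSING:
            return False  # right ran out first, so left is "greater"
        # assume corresponding elements are either both callables or both values
        if callable(left) and callable(right):
            if _lazy_eq(left(), right()):
                continue
            return _lazy_lt(left(), right())
        if left == right:
            continue
        return left < right
    return False  # all elements compared equal


def lazy_lexicographic_ordering(cls):
    """Derive rich comparison operators from a _cmp_iter() method."""
    cls.__eq__ = lambda self, other: _lazy_eq(self._cmp_iter(), other._cmp_iter())
    cls.__ne__ = lambda self, other: not _lazy_eq(self._cmp_iter(), other._cmp_iter())
    cls.__lt__ = lambda self, other: _lazy_lt(self._cmp_iter(), other._cmp_iter())
    cls.__ge__ = lambda self, other: not _lazy_lt(self._cmp_iter(), other._cmp_iter())
    cls.__gt__ = lambda self, other: _lazy_lt(other._cmp_iter(), self._cmp_iter())
    cls.__le__ = lambda self, other: not _lazy_lt(other._cmp_iter(), self._cmp_iter())
    return cls
```

Applied to the `Widget` above, a decorator like this provides `__eq__`, `__lt__`, and friends without ever materializing a tuple key, and the comparison stops as soon as two yielded elements differ.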
Spack
Spack is a multi-platform package manager that builds and installs multiple versions and configurations of software. It works on Linux, macOS, and many supercomputers. Spack is non-destructive: installing a new version of a package does not break existing installations, so many configurations of the same package can coexist.
Spack offers a simple "spec" syntax that allows users to specify versions and configuration options. Package files are written in pure Python, and specs allow package authors to write a single script for many different builds of the same package. With Spack, you can build your software all the ways you want to.
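As a rough illustration, the sketch below shows what a package file can look like. It uses a hypothetical `examplelib` package with its checksum omitted, written in the style of Spack's classic `Package` API; it is not a package from Spack's built-in repository.

```python
# A minimal sketch of a Spack package file for a hypothetical "examplelib";
# real packages live under var/spack/repos and declare many more versions,
# variants, and dependencies.
from spack import *


class Examplelib(Package):
    """An example autotools-based library, used here only for illustration."""

    homepage = "https://example.com/examplelib"
    url = "https://example.com/examplelib-1.0.tar.gz"

    version("1.0")  # checksum omitted in this sketch

    depends_on("zlib")

    def install(self, spec, prefix):
        # configure and make are provided by Spack's build environment
        configure("--prefix=" + prefix)
        make()
        make("install")
```

The spec syntax then lets users ask for particular builds of such a package, e.g. `spack install examplelib@1.0 %gcc` to request a specific version and compiler.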
See the Feature Overview for examples and highlights.
To install spack and your first package, make sure you have Python. Then:
```console
$ git clone https://github.com/spack/spack.git
$ cd spack/bin
$ ./spack install zlib
```
Documentation
Full documentation is available, or run `spack help` or `spack help --all`.
Tutorial
We maintain a hands-on tutorial. It covers basic to advanced usage, packaging, developer features, and large HPC deployments. You can do all of the exercises on your own laptop using a Docker container.
Feel free to use these materials to teach users at your organization about Spack.
Community
Spack is an open source project. Questions, discussion, and contributions are welcome. Contributions can be anything from new packages to bugfixes, documentation, or even new core features.
Resources:
- Slack workspace: spackpm.slack.com. To get an invitation, click here.
- Mailing list: groups.google.com/d/forum/spack
- Twitter: @spackpm. Be sure to @mention us!
Contributing
Contributing to Spack is relatively easy. Just send us a pull request. When you send your request, make `develop` the destination branch on the Spack repository.
Your PR must pass Spack's unit tests and documentation tests, and must be PEP 8 compliant. We enforce these guidelines with our CI process. To run these tests locally, and for helpful tips on git, see our Contribution Guide.
Spack's `develop` branch has the latest contributions. Pull requests should target `develop`, and users who want the latest package versions, features, etc. can use `develop`.
Releases
For multi-user site deployments or other use cases that need very stable software installations, we recommend using Spack's stable releases.
Each Spack release series also has a corresponding branch, e.g. `releases/v0.14` has `0.14.x` versions of Spack, and `releases/v0.13` has `0.13.x` versions. We backport important bug fixes to these branches but we do not advance the package versions or make other changes that would change the way Spack concretizes dependencies within a release branch. So, you can base your Spack deployment on a release branch and `git pull` to get fixes, without the package churn that comes with `develop`.
The latest release is always available with the `releases/latest` tag.
See the docs on releases for more details.
Code of Conduct
Please note that Spack has a Code of Conduct. By participating in the Spack community, you agree to abide by its rules.
Authors
Many thanks go to Spack's contributors.
Spack was created by Todd Gamblin, tgamblin@llnl.gov.
Citing Spack
If you are referencing Spack in a publication, please cite the following paper:
- Todd Gamblin, Matthew P. LeGendre, Michael R. Collette, Gregory L. Lee, Adam Moody, Bronis R. de Supinski, and W. Scott Futral. The Spack Package Manager: Bringing Order to HPC Software Chaos. In Supercomputing 2015 (SC’15), Austin, Texas, November 15-20 2015. LLNL-CONF-669890.
License
Spack is distributed under the terms of both the MIT license and the Apache License (Version 2.0). Users may choose either license, at their option.
All new contributions must be made under both the MIT and Apache-2.0 licenses.
See LICENSE-MIT, LICENSE-APACHE, COPYRIGHT, and NOTICE for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
LLNL-CODE-811652