Commit Graph

309 Commits

Author SHA1 Message Date
Harmen Stoppels
0f54a63dfd
remove activate/deactivate support in favor of environments (#29317)
Environments and environment views have taken over the role of `spack activate/deactivate`, and we should deprecate these commands for several reasons:

- Global activation is a really poor idea:
   - Install prefixes should be immutable, since they can have multiple, unrelated dependents; see below
   - Added complexity elsewhere: verifying installations, building tarballs for build caches, creating environment views of packages with unrelated extensions "globally activated"... by removing the feature, it gets easier for people to contribute, and we end up with fewer bugs due to edge cases.
- Environments accomplish the same thing as non-global "activation", i.e. `spack view`, but better.

Also we write in the docs:

```
However, Spack global activations have two potential drawbacks:

#. Activated packages that involve compiled C extensions may still
   need their dependencies to be loaded manually.  For example,
   ``spack load openblas`` might be required to make ``py-numpy``
   work.

#. Global activations "break" a core feature of Spack, which is that
   multiple versions of a package can co-exist side-by-side.  For example,
   suppose you wish to run a Python package in two different
   environments but the same basic Python --- one with
   ``py-numpy@1.7`` and one with ``py-numpy@1.8``.  Spack extensions
   will not support this potential debugging use case.
```

Now that environments are established and views can take over the role of activation
non-destructively, we can remove global activation/deactivation.
2022-11-11 00:50:07 -08:00
Peter Scheibel
1a3415619e
"spack uninstall": don't modify env (#33711)
"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the user's list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).

In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:

* `spack uninstall --dependents --remove r` in e1: removes R from e1
  (but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
   R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
   P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
  "missing" in both environments) and removes R from e1 (note that e1
  would still install R as a dependency of P, but it would no longer
  be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R

Individual unit tests were created for each of these scenarios.
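
A minimal sketch of the opt-in flags described above, inside an active environment (the package name is illustrative):

```console
# install "foo" and also add it to the environment's root specs
$ spack install --add foo

# uninstall "foo" and also remove it from the environment's root specs
$ spack uninstall --remove foo
```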
2022-11-08 03:24:51 +00:00
Massimiliano Culpo
7e645f54c5
Deprecate spack bootstrap trust/untrust (#33600)
* Deprecate spack bootstrap trust/untrust
* Update CI
* Update tests
2022-10-29 12:24:26 -07:00
Tamara Dahlgren
512f8d14d2
feature: Add -x|--explicit option to 'spack test run' (#32910) 2022-10-25 12:32:55 -07:00
Harmen Stoppels
7d99fbcafd
backtraces with --backtrace (#33478)
* backtraces without --debug

Currently `--debug` is too verbose, and running without `--debug` gives too little
context about where exceptions are coming from.

So, instead, it'd be nice to have `spack --backtrace` and
`SPACK_BACKTRACE=1` as methods to get something in between: no verbose
debug messages, but always a full backtrace.

This is useful for CI, where we don't want to drown in debug messages
when installing deps, but we do want details when something goes
wrong.

* completion
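
Both invocation styles named above would look like this (a hedged sketch; the spec is illustrative):

```console
$ spack --backtrace install zlib
$ SPACK_BACKTRACE=1 spack install zlib
```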
2022-10-23 18:12:38 -07:00
Massimiliano Culpo
abf3a696bd
Remove "spack buildcache copy" in v0.19.0 (#33437) 2022-10-21 12:17:53 +02:00
Harmen Stoppels
e1344067fd
depfile: buildcache support (#33315)
When installing some/all specs from a buildcache, build edges are pruned
from those specs. This can result in a much smaller effective DAG. Until
now, `spack env depfile` would always generate a full DAG.

This PR adds the `spack env depfile --use-buildcache` flag that was
introduced for `spack install` before. This way, not only can we drop
build edges, but we can also automatically set the right buildcache-related
flags on the specific specs that are going to be installed.

This way we get parallel installs of binary deps without redundancy,
which is useful for GitLab CI.
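
A hedged usage sketch (the `only` value is an assumption, based on the values `spack install --use-buildcache` accepts):

```console
# generate a Makefile whose install rules pull exclusively from the buildcache
$ spack -e . env depfile --use-buildcache=only > Makefile
$ make -j16
```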
2022-10-19 13:57:06 -07:00
Massimiliano Culpo
7ad7fde09c
Add a command to bootstrap Spack right now (#33407) 2022-10-19 19:25:20 +02:00
kwryankrattiger
a01c36da45
Install: Add use-buildcache option to install (#32537)
Install: Add use-buildcache option to install

* Allow differentiating between top level packages and dependencies when
determining whether to install from the cache or not.

* Add unit test for --use-buildcache

* Use metavar to display use-buildcache options.

* Update spack-completion
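
A hedged usage sketch (the scoped value syntax is an assumption based on the metavar mentioned above; the spec is illustrative):

```console
# build the requested package from source, but take its dependencies only from the buildcache
$ spack install --use-buildcache package:never,dependencies:only hdf5
```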
2022-09-29 13:48:06 -05:00
Tom Scogland
762ba27036
Make GHA tests parallel by using xdist (#32361)
* Add two no-op jobs named "all-prechecks" and "all"

These are a suggestion from @tgamblin: they are stable, named markers we
can use from gitlab and possibly for required checks, to make CI more
resilient to refactors changing the names of specific checks.

* Enable parallel testing using xdist for unit testing in CI

* Normalize tmp paths to deal with macos

* add -u flag compatibility to spack python

As of now, it is accepted and ignored.  The usage with xdist, where it
is invoked specifically as `python -u spack python` and is then passed
`-u` by xdist, is the entire reason for doing this.  It should never be
used without explicitly passing -u to the executing python interpreter.

* use spack python in xdist to support python 2

When running on python2, spack has many import cycles unless started
through main.  To allow that, this uses `spack python` as the
interpreter, leveraging the `-u` support so xdist doesn't error out when
it unconditionally requests unbuffered binary IO.

* Use shutil.move to account for tmpdir being in a separate filesystem sometimes
2022-09-07 20:12:57 +02:00
Scott Wittenburg
6239198d65
Fix cause of checksum failures in public binary mirror (#32407)
Move the copying of the buildcache to a root job that runs after all the child
pipelines have finished, so that the operation can be coordinated across all
child pipelines to remove the possibility of race conditions during potentially
simultaneous copies. This lets us ensure the .spec.json.sig and .spack files
for any spec in the root mirror always come from the same child pipeline
mirror (though which pipeline is arbitrary).  It also allows us to avoid copying
of duplicates, which we now do.
2022-09-01 15:29:44 -06:00
Peter Scheibel
2968ae667f
New command, spack change, to change existing env specs (#31995)
If you have an environment like

```
$ cat spack.yaml
spack:
  specs: [openmpi@4.1.0+cuda]
```

this PR provides a new command `spack change` that you can use to adjust environment specs from the command line:

```
$ spack change openmpi~cuda
$ cat spack.yaml
spack:
  specs: [openmpi@4.1.0~cuda]
```

in other words, this allows you to tweak the details of environment specs from the command line.

Notes:

* This is only allowed for environments that do not define matrices
  * This is possible but not anticipated to be needed immediately
  * If this were done, it should probably only be done for "named"/not-anonymous specs (i.e. we can change `openmpi+cuda` but not a spec like `+cuda` or `@4.0.1~cuda`)
2022-09-01 11:04:01 -07:00
Tamara Dahlgren
3c3b18d858
spack ci: add support for running stand-alone tests (#27877)
This support requires adding the '--tests' option to 'spack ci rebuild'.
Packages whose stand-alone tests are broken (in the CI environment) can
be configured in gitlab-ci to be skipped by adding them to
broken-tests-packages.

Highlights include:
- Restructured 'spack ci' help to provide better subcommand summaries;
- Ensured only one InstallError (i.e., installer's) rather than allowing
  build_environment to have its own; and
- Refactored CI and CDash reporting to keep CDash-related properties and
  behavior in a separate class.

This allows stand-alone tests from `spack ci` to run when the `--tests`
option is used.  With `--tests`, stand-alone tests are run **after** a
**successful** (re)build of the package.  Test results are collected
and report(able) using CDash.

This PR adds the following features:
- Adds `-t` and `--tests` to `spack ci rebuild` to run stand-alone tests;
- Adds `--fail-fast` to stop stand-alone tests after the first failure;
- Ensures a *single* `InstallError` across packages
    (i.e., removes second class from build environment);
- Captures skipping tests for externals and uninstalled packages
    (for CDash reporting);
- Copies test logs and outputs to the CI artifacts directory to facilitate
    debugging;
- Parses stand-alone test results to report outputs from each `run_test` as
    separate test parts (CDash reporting);
- Logs a test completion message to allow capture of timing of the last
    `run_test` part;
- Adds the runner description to the CDash site to better distinguish entries
    in CDash tables;
- Adds `gitlab-ci` `broken-tests-packages` to CI configuration to skip
    stand-alone testing for packages with known issues;
- Changes `spack ci --help` so description of each subcommand is a single line;
- Changes `spack ci <subcommand> --help` to provide the full description of
    each command (versus no description); and
- Ensures `junit` test log file ends in an `.xml` extension (versus default where
    it does not).

Tasks:

- [x] Include the equivalent of the architecture information, or at least the host target, in the CDash output
- [x] Upload stand-alone test results files as  `test` artifacts
- [x] Confirm tests are run in GitLab
- [x] Ensure CDash results are uploaded as artifacts
- [x] Resolve issues with CDash build and test results appearing on the same row of the table
- [x] Add unit tests as needed
- [x] Investigate why some (dependency) packages don't have test results (e.g., those coming from other pipelines)
- [x] Ensure proper parsing and reporting of skipped tests (as `not run`) post-#28701 merge
- [x] Restore the proper CDash URL and/or mirror once out-of-band testing is completed
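
A hedged sketch of the `broken-tests-packages` setting mentioned above (key placement and the package name are assumptions):

```yaml
spack:
  gitlab-ci:
    broken-tests-packages:
      - mpich   # skip stand-alone tests for this package in CI
```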
2022-08-23 00:52:48 -07:00
sparkyniner
21e6679056
spack list: add --tag flag (#32016)
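
For example (a hedged sketch; the tag name is illustrative):

```console
$ spack list --tag radiuss
```
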
* modified list.py and added functionality for --tag

* Removed long and very long, shifted rest of code above return statement

* removed results variable

* added import statement at top

* added the line accidentally deleted

* added line accidentally deleted

* changed p.name to p, added line inside if statement

* line order switched

* [@spackbot] updating style on behalf of sparkyniner

* ran update completion command

* add tests

* Update lib/spack/spack/test/cmd/list.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* [@spackbot] updating style on behalf of sparkyniner

* changed argument to mock_packages and moved code under filter by tag

* removed bad rebase code and added additional test

* [@spackbot] updating style on behalf of sparkyniner

* added line removed earlier

* added line removed earlier

* replaced function

* added more recommended changes

Co-authored-by: sairaj <sairaj@sairajs-MacBook-Pro.local>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-08-20 16:09:44 -06:00
Massimiliano Culpo
9a48035e49 asp: refactor low level API to permit the injection of configuration
This allows writing extension commands that can benchmark
different configurations in clingo, or try different
configurations for a single test.
2022-08-03 18:01:08 -07:00
Todd Gamblin
143f3f830c style: simplify arguments with --tool TOOL and --skip TOOL
`spack style` tests were annoyingly brittle because we could not easily be
specific about which tools to run (we had to use `--no-black`, `--no-isort`,
`--no-flake8`, and `--no-mypy`). We should be able to specify what to run OR
what to skip.

Now you can run, e.g.:

    spack style --tool black,flake8

or:

    spack style --skip black,isort

- [x] Remove  `--no-black`, `--no-isort`, `--no-flake8`, and `--no-mypy` args.
- [x] Add `--tool TOOL` argument.
- [x] Add `--skip TOOL` argument.
- [x] Allow either `--tool black --tool flake8` or `--tool black,flake8` syntax.
2022-07-31 13:29:20 -07:00
Todd Gamblin
67d27841ae black: configuration
This adds necessary configuration for flake8 and black to work together.

This also sets the line length to 99, per the data here:

* https://github.com/spack/spack/pull/24718#issuecomment-876933636

Given the data and the spirit of black's 88-character limit, we set the limit to 99
characters for all of Spack, because:

* 99 is one less than 100, a nice round number, and all lines will fit in a
  100-character wide terminal (even when the text editor puts a \ at EOL).
* 99 is just past the knee of the file size curve for packages, and it means that packages
  remain readable and not significantly longer than they are now.
* It doesn't seem to hurt core -- files in core might change length by a few percent but
  seem like they'll be mostly the same as before -- just a bit more roomy.

- [x] set line length to 99
- [x] remove most exceptions from `.flake8` and add the ones black cares about
- [x] add `[tool.black]` to `pyproject.toml`
- [x] make `black` run if available in `spack style --fix`
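
A hedged sketch of the resulting `pyproject.toml` section (only the line length is taken from the text above):

```
[tool.black]
line-length = 99
```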

Co-Authored-By: Tom Scogland <tscogland@llnl.gov>
2022-07-31 13:29:20 -07:00
Harmen Stoppels
e7be8dbbcf
spack stage: add missing --fresh and --reuse (#31626) 2022-07-20 13:50:56 +02:00
Vanessasaurus
6b1e86aecc
removing feature bloat: monitor and analyzers (#31130)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2022-07-07 00:49:40 -06:00
Scott Wittenburg
85e13260cf
ci: Support secure binary signing on protected pipelines (#30753)
This PR supports the creation of securely signed binaries built from spack
develop as well as release branches and tags. Specifically:

- remove internal pr mirror url generation logic in favor of buildcache destination
on command line
    - with a single mirror url specified in the spack.yaml, this makes it clearer where 
    binaries from various pipelines are pushed
- designate some tags as reserved: ['public', 'protected', 'notary']
    - these tags are stripped from all jobs by default and provisioned internally
    based on pipeline type
- update gitlab ci yaml to include pipelines on more protected branches than just
develop (so include releases and tags)
    - binaries from all protected pipelines are pushed into mirrors including the
    branch name so releases, tags, and develop binaries are kept separate
- update rebuild jobs running on protected pipelines to run on special runners
provisioned with an intermediate signing key
    - protected rebuild jobs no longer use "SPACK_SIGNING_KEY" env var to
    obtain signing key (in fact, final signing key is nowhere available to rebuild jobs)
    - these intermediate signatures are verified at the end of each pipeline by a new
    signing job to ensure binaries were produced by a protected pipeline
- optionally schedule a signing/notary job at the end of the pipeline to sign all
packages in the mirror
    - add signing-job-attributes to gitlab-ci section of spack environment to allow
    configuration
    - signing job runs on special runner (separate from protected rebuild runners)
    provisioned with public intermediate key and secret signing key
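
A hedged sketch of the `signing-job-attributes` section described above (all values are illustrative placeholders):

```yaml
spack:
  gitlab-ci:
    signing-job-attributes:
      tags: [notary]                     # one of the reserved tags listed above
      image: illustrative/signing-image  # placeholder for the signing-runner image
```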
2022-05-26 08:31:22 -06:00
Massimiliano Culpo
ba907defca
Add a command to generate a local mirror for bootstrapping (#28556)
This PR builds on #28392 by adding a convenience command to create a local mirror that can be used to bootstrap Spack. This is to overcome the inconvenience in setting up this mirror manually, which has been reported when trying to setup Spack on air-gapped systems.

Using this PR the user can create a bootstrapping mirror, on a machine with internet access, by:

```console
% spack bootstrap mirror --binary-packages /opt/bootstrap
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror

To register the mirror on the platform where it's supposed to be used run the following command(s):
  % spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
  % spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
```

The mirror has to be moved over to the air-gapped system and registered using the commands shown at the prompt. The command has options to:

1. Add pre-built binaries downloaded from Github (default is not to add them)
2. Add development dependencies for Spack (currently the Python packages needed to use spack style)

* bootstrap: refactor bootstrap.yaml to move sources metadata out

* bootstrap: allow adding/removing custom bootstrapping sources

This operation can be performed from the command line since
new subcommands have been added to `spack bootstrap`

* Add --trust argument to spack bootstrap add

* Add a command to generate a local mirror for bootstrapping

* Add a unit test for mirror creation
2022-05-24 21:33:52 +00:00
Greg Becker
ee04a1ab0b
errors: model error messages as an optimization problem (#30669)
Error messages for the clingo concretizer have proven challenging. The current messages are incredibly vague and often don't help users at all. Unsat cores in clingo are not guaranteed to be minimal, and lead to cores that are either not useful or need to be post-processed for hours to reach a minimal core.

Following up on an idea from a Slack conversation with kwryankrattiger, this PR takes a new approach. We eliminate most integrity constraints and minima/maxima on choice rules in clingo, and instead force invalid states to imply an error predicate. The error predicate can include context on the cause of the error (package, version, etc.). These error predicates are then heavily optimized against, to ensure that we do not include error facts in the solution when a solution with no error facts could be generated.

When post-processing the clingo solution to construct specs, any error facts cause the program to raise an exception. This leads to much more legible error messages. Each error predicate includes a priority and an error message; the message is formatted with the remaining arguments of the predicate. The priority is used to ensure that when clingo has a choice of which rules to violate, it chooses the one that will be most informative to the user.

Performance:

"fresh" concretizations appear to suffer a ~20% performance penalty under this branch, while "reuse" concretizations see a speedup of around 33%. 

Possible optimizations if users still see unhelpful messages:

There are currently 3 levels of priority of the error messages. Additional priorities are possible, and can allow us finer granularity to ensure more informative error messages are provided in lieu of less informative ones.

Future work:

Improve tests to ensure that every possible rule implying an error message is exercised
2022-05-20 08:27:07 -07:00
Todd Gamblin
7c1d566959 Remove all uses of runtime_hash; document lockfile formats and fix tests
This removes all but one usage of runtime hash. The runtime hash was being used to write
historical lockfiles for tests, but we don't need it for that; we can just save those
lockfiles.

- [x] add legacy lockfiles for v1, v2, v3
- [x] fix bugs with v1 lockfile tests (the dummy lockfile we were writing was not actually
      a v1 lockfile because it used the new spec file format).
- [x] remove all but one runtime_hash usage -- that one needs a small rework of the
      concretizer to really fix, as it's about separate concretization of build
      dependencies.
- [x] Document the history of the lockfile format in `environment/__init__.py`
2022-05-13 10:45:12 -07:00
Scott Wittenburg
f6e7c0b740 hashes: remove full_hash and build_hash from spack 2022-05-13 10:45:12 -07:00
Harmen Stoppels
2836648904
Makefile generator for parallel spack install of environments (#30254)
`make` solves a lot of headaches that would otherwise have to be implemented in Spack:

1. Parallelism over packages through multiple `spack install` processes
2. Orderly output of parallel package installs thanks to `make --output-sync=recurse` or `make -Orecurse` (works well in GNU Make 4.3; macOS is unfortunately on a 16-year-old 3.x version, but it's one `spack install gmake` away...)
3. Shared jobserver across packages, which means a single `-j` to rule them all, instead of manually finding a balance between `#spack install processes` & `#jobs per package` (See #30302).

This PR adds the `spack env depfile` command that generates a Makefile with dag hashes as
targets, dag hashes of dependencies as prerequisites, and a command
along the lines of `spack install --only=package /hash` to just install
a single package.

It exposes two convenient phony targets: `all`, `fetch-all`. The former installs the environment, the latter just fetches all sources. So one can either use `make all -j16` directly or run `make fetch-all -j16` on a login node and `make all -j16` on a compute node. 

Example:

```yaml
spack:
  specs: [perl]
  view: false
```

running

```
$ spack -e . env depfile --make-target-prefix env | tee Makefile
```
generates

```Makefile
SPACK ?= spack

.PHONY: env/all env/fetch-all env/clean

env/all: env/env

env/fetch-all: env/fetch

env/env: env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww
	@touch $@

env/fetch: env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc
	@touch $@

env/dirs:
	@mkdir -p env/.fetch env/.install

env/.fetch/%: | env/dirs
	$(info Fetching $(SPEC))
	$(SPACK) -e '/tmp/tmp.7PHPSIRACv' fetch $(SPACK_FETCH_FLAGS) /$(notdir $@) && touch $@

env/.install/%: env/.fetch/%
	$(info Installing $(SPEC))
	+$(SPACK) -e '/tmp/tmp.7PHPSIRACv' install $(SPACK_INSTALL_FLAGS) --only-concrete --only=package --no-add /$(notdir $@) && touch $@

# Set the human-readable spec for each target
env/%/cdqldivylyxocqymwnfzmzc5sx2zwvww: SPEC = perl@5.34.1%gcc@10.3.0+cpanm+shared+threads arch=linux-ubuntu20.04-zen2
env/%/gv5kin2xnn33uxyfte6k4a3bynhmtxze: SPEC = berkeley-db@18.1.40%gcc@10.3.0+cxx~docs+stl patches=b231fcc arch=linux-ubuntu20.04-zen2
env/%/cuymc7e5gupwyu7vza5d4vrbuslk277p: SPEC = bzip2@1.0.8%gcc@10.3.0~debug~pic+shared arch=linux-ubuntu20.04-zen2
env/%/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: SPEC = diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws: SPEC = libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
env/%/yfz2agazed7ohevqvnrmm7jfkmsgwjao: SPEC = gdbm@1.19%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/73t7ndb5w72hrat5hsax4caox2sgumzu: SPEC = readline@8.1%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/trvdyncxzfozxofpm3cwgq4vecpxixzs: SPEC = ncurses@6.2%gcc@10.3.0~symlinks+termlib abi=none arch=linux-ubuntu20.04-zen2
env/%/sbzszb7v557ohyd6c2ekirx2t3ctxfxp: SPEC = pkgconf@1.8.0%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/c4go4gxlcznh5p5nklpjm644epuh3pzc: SPEC = zlib@1.2.12%gcc@10.3.0+optimize+pic+shared patches=0d38234 arch=linux-ubuntu20.04-zen2

# Install dependencies
env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww: env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p: env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk
env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao: env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu
env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu: env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs
env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs: env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp

env/clean:
	rm -f -- env/env env/fetch env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
```

Then with `make -O` you get very nice orderly output when packages are built in parallel:
```console
$ make -Orecurse -j16
spack -e . install --only-concrete --only=package /c4go4gxlcznh5p5nklpjm644epuh3pzc && touch c4go4gxlcznh5p5nklpjm644epuh3pzc
==> Installing zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
...
  Fetch: 0.00s.  Build: 0.88s.  Total: 0.88s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
spack -e . install --only-concrete --only=package /sbzszb7v557ohyd6c2ekirx2t3ctxfxp && touch sbzszb7v557ohyd6c2ekirx2t3ctxfxp
==> Installing pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
...
  Fetch: 0.00s.  Build: 3.96s.  Total: 3.96s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
```

For Perl, at least for me, using `make -j16` versus `spack -e . install -j16` speeds up the builds from 3m32.623s to 2m22.775s, as some configure scripts run in parallel.

Another nice feature is that you can do Makefile "metaprogramming" and depend on packages built by Spack. This example fetches all sources (in parallel) first, prints a message, and only then builds packages (in parallel).

```Makefile
SPACK ?= spack

.PHONY: env

all: env

spack.lock: spack.yaml
	$(SPACK) -e . concretize -f

env.mk: spack.lock
	$(SPACK) -e . env depfile -o $@ --make-target-prefix spack

fetch: spack/fetch
	@echo Fetched all packages && touch $@

env: fetch spack/env
	@echo This executes after the environment has been installed

clean:
	rm -rf spack/ env.mk spack.lock

ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
```
2022-05-05 10:45:21 -07:00
Greg Becker
e6346eb033
spack external find: add search path customization (#30479) 2022-05-05 08:59:44 +02:00
Massimiliano Culpo
5c7d6c6e10
Remove deprecated "--run-tests" option of "spack install" (#30461) 2022-05-04 07:43:29 +02:00
lpoirel
9e6298569e
Delocalize type output for bash completion (#30360) 2022-04-28 23:24:10 +00:00
Peter Scheibel
bb43308c44
Add command for reading JSON-based DB description (now with more tests) (#29652)
This is an amended version of https://github.com/spack/spack/pull/24894 (reverted in https://github.com/spack/spack/pull/29603). https://github.com/spack/spack/pull/24894
broke all instances of `spack external find` (namely when it is invoked without arguments/options)
because it was mandating the presence of a file which most systems would not have.
This allows `spack external find` to proceed if that file is not present and adds tests for this.

- [x] Add a test which confirms that `spack external find` successfully reads a manifest file
      if present in the default manifest path

--- Original commit message ---

Adds `spack external read-cray-manifest`, which reads a json file that describes a
set of package DAGs. The parsed results are stored directly in the database. A user
can see these installed specs with `spack find` (like any installed spec). The easiest
way to use them right now as dependencies is to run
`spack spec ... ^/hash-of-external-package`.

Changes include:

* `spack external read-cray-manifest --file <path/to/file>` will add all specs described
  in the file to Spack's installation DB and will also install described compilers to the
  compilers configuration (the expected format of the file is described in this PR as well including examples of the file)
* Database records now may include an "origin" (the command added in this PR
  registers the origin as "external-db"). In the future, it is assumed users may want
  to be able to treat installs registered with this command differently (e.g. they may
  want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec
  is concrete
  * I don't think the hashes of installed-and-concrete specs should change and this
    was the easiest way to handle that
  * also specs that are concrete preserve their `.normal` property when copied
    (external specs may mention compilers that are not registered, and without this
    change they would fail in `normalize` when calling `validate_or_raise`)
  * it might be this should only be the case if the spec was installed

- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do
      something like "uninstall all packages added with `spack read-external-db`)
  * This is now possible with `spack uninstall --all --origin=external-db` (this will
    remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
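
A hedged end-to-end sketch of the commands described above (the manifest path is illustrative):

```console
$ spack external read-cray-manifest --file /path/to/manifest.json
$ spack find                                   # the described specs now show up as installed
$ spack uninstall --all --origin=external-db   # remove everything added from manifest files
```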
2022-04-28 10:56:26 -07:00
Greg Becker
3e863848f8
build_env/test_env: add concretizer args (#30289) 2022-04-28 11:37:15 +02:00
lorddavidiii
3a0aba0835
spack spec: add '--format' argument (#27908) 2022-04-26 09:08:56 -07:00
iarspider
834f8e04ca
Environments: add flag to skip printing concretized specs (#30272)
With an active environment, you can now run "spack concretize --quiet"
and it will suppress printing the concretized specs.
2022-04-25 15:54:54 -07:00
Zack Galbreath
dec3e31e60
spack ci: remove relate-CDash-builds functionality (#29950)
gitlab ci: Remove code for relating CDash builds

Relating CDash builds to their dependencies was a seldom used feature.  Removing
it will make it easier for us to reorganize our CDash projects & build groups in the 
future by eliminating the need to keep track of CDash build ids in our binary mirrors.
2022-04-14 10:42:30 -06:00
Tamara Dahlgren
fd055d4678
spack info: make sections optional; add build/stand-alone test information (#22097)
Add output of build- and install-time tests to info command

Enable dependencies, variants, and versions by default (i.e., provide --no*
options); add gcc to test_info_fields to increase coverage for c_names->v_names.
2022-03-28 22:15:38 +00:00
Nils Vu
bfb6873ce3
Revert "Add command for reading a json-based DB description (#24894)" (#29603)
This reverts commit 531b1c5c3d.
2022-03-19 16:30:46 -06:00
Peter Scheibel
531b1c5c3d
Add command for reading a json-based DB description (#24894)
Adds `spack external read-cray-manifest`, which reads a json file that describes a set of package DAGs. The parsed results are stored directly in the database. A user can see these installed specs with `spack find` (like any installed spec). The easiest way to use them right now as dependencies is to run `spack spec ... ^/hash-of-external-package`.

Changes include:

* `spack external read-cray-manifest --file <path/to/file>` will add all specs described in the file to Spack's installation DB and will also install described compilers to the compilers configuration (the expected format of the file is described in this PR as well including examples of the file)
* Database records now may include an "origin" (the command added in this PR registers the origin as "external-db"). In the future, it is assumed users may want to be able to treat installs registered with this command differently (e.g. they may want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec is concrete
  * I don't think the hashes of installed-and-concrete specs should change and this was the easiest way to handle that
  * also specs that are concrete preserve their `.normal` property when copied (external specs may mention compilers that are not registered, and without this change they would fail in `normalize` when calling `validate_or_raise`)
  * it might be this should only be the case if the spec was installed

- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do something like "uninstall all packages added with `spack read-external-db`)
  * This is now possible with `spack uninstall --all --origin=external-db` (this will remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2022-03-18 17:07:22 -07:00
John Parent
cf1349ba35 "spack commands --update-completion" 2022-03-17 09:01:01 -07:00
John Parent
90c773488c Add Github Actions for Windows (#24504)
Setup Installer CI (#25184), (#25191)

Co-authored-by: Zack Galbreath <zack.galbreath@kitware.com>
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
2022-03-17 09:01:01 -07:00
lou.lawrence@kitware.com
012758c179 Windows: Create installer and environment
* Add 'make-installer' command for Windows

* Add '--bat' arg to env activate, env deactivate and unload commands

* An equivalent script to setup-env on Linux: spack_cmd.bat. This script
has a wrapper to evaluate cd, load/unload, env activate/deactivate. (#21734)

* Add spacktivate and config editor (#22049)

* spack_cmd: will find python and spack on its own. It preferentially
tries to use python on your PATH (#22414)

* Ignore Windows python installer if found (#23134)

* Bundle git in windows installer (#23597)

* Add Windows section to Getting Started document
(#23131), (#23295), (#24240)

Co-authored-by: Stephen Crowell <stephen.crowell@kitware.com>
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Co-authored-by: Ben Cowan <benc@txcorp.com>

Update Installer CI

Co-authored-by: John Parent <john.parent@kitware.com>
2022-03-17 09:01:01 -07:00
百地 希留耶
3370d3f57e
Fix tab completion erroring with spack unit-test (#29405) 2022-03-09 16:09:57 +01:00
Todd Gamblin
36b0730fac
Add spack --bootstrap option for accessing bootstrap store (#25601)
We can see what is in the bootstrap store with `spack find -b`, and you can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.

We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.

This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python.

- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
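
For example (the spec is illustrative):

```console
$ spack -b find        # show what is in the bootstrap store
$ spack -b spec zlib   # see what a spec would look like under the bootstrap configuration
```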
2022-02-22 12:35:34 -07:00
Massimiliano Culpo
7fd94fc4bc
spack external find: change default behavior (#29031)
See https://github.com/spack/spack/issues/25353#issuecomment-1041868116

This commit changes the default behavior of
```
$ spack external find
```
from searching all the possible packages Spack knows about to
searching only for the ones tagged as being a "build-tool".

It also introduces a `--all` option to restore the old behavior.
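
For example:

```console
$ spack external find          # now searches only packages tagged as build tools
$ spack external find --all    # restores the old behavior and searches everything
```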
2022-02-18 11:51:01 -07:00
Tamara Dahlgren
fefe65a35b
Testing: optionally run tests on externally installed packages (#28701)
Since Spack does not install external packages, this commit skips them by
default when running stand-alone tests. The assumption is that such packages
have likely undergone an acceptance test process. 

However, the tests can be run against installed externals using 
```
% spack test run --externals ...
```
2022-02-17 19:47:42 +01:00
Todd Gamblin
a2b8e0c3e9 commands: refactor --reuse handling to use config
`--reuse` was previously handled individually by each command that
needed it. We are growing more concretization options, and they'll
need their own section for commands that support them.

Now there are two concretization options:

* `--reuse`: Attempt to reuse packages from installs and buildcaches.
* `--fresh`: Opposite of reuse -- traditional spack install.

To handle these, this PR adds a `ConfigSetAction` for `argparse`, so
that you can write argparse code like this:

```
subgroup.add_argument(
    '--reuse', action=ConfigSetAction, dest="concretizer:reuse",
    const=True, default=None,
    help='reuse installed dependencies/buildcaches when possible'
)
```

With this, you don't need to add logic to pull the argument out and
handle it; the `ConfigSetAction` just does it for you. This can probably
be used to clean up some other commands later, as well.

Code that was previously passing `reuse=True` around everywhere has
been refactored to use config, and config is set from the CLI using
a new `add_concretizer_args()` function in `spack.cmd.common.arguments`.

- [x] Add `ConfigSetAction` to simplify concretizer config on the CLI
- [x] Refactor code so that it does not pass `reuse=True` to every function.
- [x] Refactor commands to use `add_concretizer_args()` and to pass
      concretizer config using the config system.
2022-02-16 10:17:18 -08:00
Todd Gamblin
93377942d1 Update copyright year to 2022 2022-01-14 22:50:21 -08:00
Todd Gamblin
a18a0e7a47 commands: add spack pkg source and spack pkg hash
To make it easier to see how package hashes change and how they are computed, add two
commands:

* `spack pkg source <spec>`: dumps source code for a package to the terminal

* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
   package to the terminal. It strips comments, directives, and known-unused
   multimethods from the package. It is used to generate package hashes.

* `spack pkg hash <spec>`: This gives the package hash for a particular spec.
  It is generated from the canonical source code for the spec.

- [x] `add spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
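
A quick usage sketch of the commands described above (the spec is illustrative):

```console
$ spack pkg source zlib              # dump the package's source code to the terminal
$ spack pkg source --canonical zlib  # dump the canonicalized source used for hashing
$ spack pkg hash zlib                # print the package hash for the spec
```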

Co-authored-by: Greg Becker <becker33@llnl.gov>
2022-01-12 06:14:18 -08:00
Harmen Stoppels
f0b70b7c8e
Fix spack install --v[tab] spec (#28278) 2022-01-06 14:47:03 +01:00
Massimiliano Culpo
4381cb5957
New subcommand: spack bootstrap status (#28004)
This command pokes the environment, Python interpreter
and bootstrap store to check if dependencies needed by
Spack are available.

If any are missing, it shows a comprehensible message.
2021-12-23 10:34:04 -08:00
Vanessasaurus
d9c4b91af3
Remove ability to run spack monitor without auth (#27888)
spack monitor now requires authentication, as each build must be associated
with a user, so it no longer makes sense to allow the --monitor-no-auth flag;
this commit removes it.
2021-12-17 18:00:43 +01:00
victorusu
18615b1485
Add setdefault option to tcl module (#14686)
This commit introduces the command

spack module tcl setdefault <package>

similar to the one already available for lmod

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2021-12-17 10:05:32 +01:00
Greg Becker
5319b6e3b1
Add option to minimize full debug cores; include warning message about performance (#27970)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2021-12-15 08:52:53 +01:00
Massimiliano Culpo
d17511a806
Refactor "spack buildcache" command (#27776)
This PR is meant to move code with "business logic" from `spack.cmd.buildcache` to appropriate core modules[^1]. 

Modifications:
- [x] Add `spack.binary_distribution.push` to create a binary package from a spec and push it to a mirror
- [x] Add `spack.binary_distribution.install_root_node` to install only the root node of a concrete spec from a buildcache (may check the sha256 sum if it is passed in as input)
- [x] Add `spack.binary_distribution.install_single_spec` to install a single concrete spec from a buildcache
- [x] Add `spack.binary_distribution.download_single_spec` to download a single concrete spec from a buildcache to a local destination
- [x] Add `Spec.from_specfile` that construct a spec given the path of a JSON or YAML spec file
- [x] Removed logic from `spack.cmd.buildcache`
- [x] Removed calls to `spack.cmd.buildcache` in `spack.bootstrap`
- [x] Deprecate `spack buildcache copy` with a message that says it will be removed in v0.19.0

[^1]: The rationale is that commands should be lightweight wrappers of the core API, since that helps with both testing and scripting (easier mocking and no need to invoke `SpackCommand`s in a script).
2021-12-10 10:23:14 +01:00
Massimiliano Culpo
270ba10962
spack flake8: remove deprecated command (#27290)
The "spack flake8" command wwas deprecated in favor
of "spack style". The deprecation wwarning is in the
0.17.X series, so removing it for v0.18.x
2021-11-24 14:20:11 -08:00
Joseph Snyder
d759612523
Add connection specification to mirror creation (#24244)
* Add connection specification to mirror creation

This allows each mirror to contain information about the credentials
used to access it.

Update command and tests based on comments

Switch to only "long form" flags for the s3 connection information.
Use the "any" function instead of checking for an empty list when looking
for s3 connection information.

Split test to use the access token separately from the access id and key.
Use long flag form in test.

Add endpoint_url to available S3 options.

Extend the special parameters for an S3 mirror to accept the
endpoint_url parameter.

Add a test.

* Add connection information per URL not per mirror

Expand the mirror-based connection information to be per-URL.
This will allow a user to specify different S3 connection information
for both the fetch and the push URLs.

Add a parameter for "profile", another way of storing the id/secret pair.

* Switch from "access_profile" to "profile"
2021-11-19 15:28:34 -05:00
Michael Davis
3375db12a5
Adding --reuse to dev-build command. (#27487) 2021-11-19 09:25:45 +01:00
Harmen Stoppels
9637ed05f5
Fix overloaded argparse keys (#27379)
Commands should not reuse option names defined in main.
2021-11-11 23:34:18 -08:00
Todd Gamblin
e13e697067
commands: spack load --list alias for spack find --loaded (#27184)
See #25249 and https://github.com/spack/spack/pull/27159#issuecomment-958163679.
This adds `spack load --list` as an alias for `spack find --loaded`.  The new command is
not as powerful as `spack find --loaded`, as you can't combine it with all the queries or
formats that `spack find` provides.  However, it is more intuitively located in the command
structure in that it appears in the output of `spack load --help`.

The idea here is that people can use `spack load --list`  for simple stuff but fall back to
`spack find --loaded` if they need more.

- add help to `spack load --list` that references `spack find`
- factor some parts of `spack find` out to be called from `spack load`
- add shell tests
- update docs
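
For example:

```console
$ spack load --list   # simple view of loaded packages; alias for `spack find --loaded`
```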

Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Richarda Butler <39577672+RikkiButler20@users.noreply.github.com>
2021-11-05 00:58:29 -07:00
Massimiliano Culpo
31dfad9c16 spack install: add --reuse argument 2021-11-05 00:15:47 -07:00
Massimiliano Culpo
e2744fafa1 spack concretize: add --reuse argument 2021-11-05 00:15:47 -07:00
Massimiliano Culpo
290f57c779 spack spec: add --reuse argument 2021-11-05 00:15:47 -07:00
Massimiliano Culpo
da57b8775f Update command completion 2021-11-05 00:15:47 -07:00
Massimiliano Culpo
78c08fccd5
Bootstrap GnuPG (#24003)
* GnuPG: allow bootstrapping from buildcache and sources

* Add a test to bootstrap GnuPG from binaries

* Disable bootstrapping in tests

* Add e2e test to bootstrap GnuPG from sources on Ubuntu

* Add e2e test to bootstrap GnuPG on macOS
2021-11-02 23:15:24 -07:00
Michael Kuhn
1e26e25bc8
spack arch: add --generic argument (#27061)
The `--generic` argument allows printing the best generic target for the
current machine. This can be quite handy when wanting to find the
generic architecture to use when building a shared software stack for
multiple machines.
2021-11-02 10:19:23 +01:00
Tamara Dahlgren
9d3d7c68fb
Add tag filters to spack test list (#26842) 2021-11-02 10:00:21 +01:00
Tamara Dahlgren
d4cecd9ab2
feature: add "spack tags" command (#26136)
This PR adds a "spack tags" command to output package tags or 
(available) packages with those tags. It also ensures each package
is listed in the tag cache ONLY ONCE per tag.
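
A hedged usage sketch (which mode maps to which invocation is an assumption; the tag name is illustrative):

```console
$ spack tags               # list the available package tags
$ spack tags build-tools   # list packages carrying the given tag
```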
2021-11-01 20:40:29 +00:00
Tom Scogland
87e456d59c
spack setup-env.sh: make zsh loading async compatible, and ~10x faster (in some cases) (#26120)
Currently spack is a bit of a bad actor as a zsh plugin, and it was my
fault.  The autoload and compinit should really be handled by the user,
as was made abundantly clear when I found spack was doing completion
initialization for *all* of my plugins due to a deferred setup that was
getting messed up by it.

Making this conditional took spack load time from 1.5 seconds (with
module loading disabled) to 0.029 seconds. I can actually afford to load
spack by default with this change in.

Hopefully someday we'll do proper zsh completion support, but for now
this helps a lot.

* use zsh hist expansion in place of dirname
* only run (bash)compinit if compdef/complete missing
* add zsh compiled files to .gitignore
* move changes to .in file, because spack
2021-10-28 11:32:59 -07:00
Massimiliano Culpo
6063600a7b
containerize: pin the Spack version used in a container (#21910)
This PR makes it possible to specify the `url` and `ref` of the Spack instance used in a container recipe, simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
  images:
    os: amazonlinux:2
    spack:
      ref: develop
      resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name to a commit sha. When this new ability is leveraged an additional "bootstrap" stage is added, which builds an image with Spack setup and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.

Modifications:
- [x] Permit to pin the version of Spack, either by branch or tag or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit to print the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
2021-10-25 13:09:27 -07:00
Tamara Dahlgren
cc8b6ca69f
Add --preferred and --latest to spack checksum (#25830) 2021-10-20 13:38:55 +00:00
Harmen Stoppels
c0c9ab113e
Add spack env activate --temp (#25388)
Creates an environment in a temporary directory and activates it, which
is useful for a quick ephemeral environment:

```
$ spack env activate -p --temp
[spack-1a203lyg] $ spack add zlib
==> Adding zlib to environment /tmp/spack-1a203lyg
==> Updating view at /tmp/spack-1a203lyg/.spack-env/view
```
2021-10-11 06:56:03 -04:00
Harmen Stoppels
f28b08bf02
Remove unused --dependencies flag (#25731) 2021-10-11 10:16:11 +02:00
Nathan Hanford
d83f7110d5
specs: move to new spec.json format with build provenance (#22845)
This is a major rework of Spack's core `spec.yaml` metadata format.  It moves from `spec.yaml` to `spec.json` for speed, and it changes the format in several ways. Specifically:

1. The spec format now has a `_meta` section with a version (now set to version `2`).  This will simplify major changes like this one in the future.
2. The node list in spec dictionaries is no longer keyed by name. Instead, it is a list of records with no required key. The name, hash, etc. are fields in the dictionary records like any other.
3. Dependencies can be keyed by any hash (`hash`, `full_hash`, `build_hash`).
4. `build_spec` provenance from #20262 is included in the spec format. This means that, for spliced specs, we preserve the *full* provenance of how to build, and we can reproduce a spliced spec from the original builds that produced it.

**NOTE**: Because we have switched the spec format, this PR changes Spack's hashing algorithm.  This means that after this commit, Spack will think a lot of things need rebuilds.

There are two major benefits this PR provides:
* The switch to JSON format speeds up Spack significantly, as Python's builtin JSON implementation is orders of magnitude faster than YAML. 
* The new Spec format will soon allow us to represent DAGs with potentially multiple versions of the same dependency -- e.g., for build dependencies or for compilers-as-dependencies.  This PR lays the necessary groundwork for those features.

The old `spec.yaml` format continues to be supported, but is now considered a legacy format, and Spack will opportunistically convert these to the new `spec.json` format.
2021-09-09 01:48:30 -07:00
Todd Gamblin
c309adb4b3
url stats: add --show-issues option (#25792)
* tests: make `spack url [stats|summary]` work on mock packages

Mock packages have historically had mock hashes, but this means they're also invalid
as far as Spack's hash detection is concerned.

- [x] convert all hashes in mock package to md5 or sha256
- [x] ensure that all mock packages have a URL
- [x] ignore some special cases with multiple VCS fetchers

* url stats: add `--show-issues` option

`spack url stats` tells us how many URLs are using what protocol, type of checksum,
etc., but it previously did not tell us which packages and URLs had the issues. This
adds a `--show-issues` option to show URLs with insecure (`http`) URLs or `md5` hashes
(which are now deprecated by NIST).
2021-09-08 07:59:06 -07:00
Vanessasaurus
8e61f54260
start of work to add spack audit packages-https checker (#25670)
This PR will add a new audit, specifically for spack package homepage urls (and eventually
other kinds I suspect) to see if there is an http address that can be changed to https.

Usage is as follows:

```bash
$ spack audit packages-https <package>
```
And in list view:

```bash
$ spack audit list
generic:
  Generic checks relying on global variables

configs:
  Sanity checks on compilers.yaml
  Sanity checks on packages.yaml

packages:
  Sanity checks on specs used in directives

packages-https:
  Sanity checks on https checks of package urls, etc.
```

I think it would be unwise to include this with the packages audit, because it makes web requests and therefore takes a long time when run for all packages. I also like the idea of more narrowly scoped checks - likely there will be other http/https addresses within a package that we eventually check. For now, there are two error cases - one is when an https url is tried but there is some SSL error (or other error that means we cannot update to https):

```bash
$ spack audit packages-https zoltan
PKG-HTTPS-DIRECTIVES: 1 issue found
1. Error with attempting https for "zoltan": 
    <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'www.cs.sandia.gov'. (_ssl.c:1125)>
```
This is either not fixable, or could be fixed with a change to the url or (better) contacting the site owners to ask about some certificate or similar.

The second case is when there is an http that needs to be https, which is a huge issue now, but hopefully not after this spack PR.

```bash
$ spack audit packages-https xman
Package "xman" uses http but has a valid https endpoint.
```

And then when a package is fixed:

```bash
$ spack audit packages-https zlib
PKG-HTTPS-DIRECTIVES: 0 issues found.
```
And that's mostly it. :)

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-09-02 08:46:27 +02:00
Scott Wittenburg
350372e3bf
buildcache: Add environment-aware buildcache sync command (#25470) 2021-08-19 12:15:40 -06:00
Massimiliano Culpo
4318ceb2b3
Bootstrap clingo from binaries (#22720)
* Bootstrap clingo from binaries

* Move information on clingo binaries to a JSON file

* Add support to bootstrap on Cray

Bootstrapping on Cray requires, at the moment, to
swap the platform when looking for binaries - due
to #22800.

* Add SHA256 verification for bootstrapped software

Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification

* patchelf: use Spec._old_concretize() to bootstrap

As noted in #24450 we may happen to need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.

* Add a schema for bootstrapping methods

Two fields have been added to bootstrap.yaml:
  "sources" which lists the methods available for
       bootstrapping software
  "trusted" which records if a source is trusted or not

A subcommand has been added to "spack bootstrap" to list
the sources currently available.

* Methods used for bootstrapping are configurable from bootstrap:sources

The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`

* Permit to trust/untrust bootstrapping methods

* Add binary tests for MacOS, Ubuntu

* Add documentation

* Add a note on bash
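
A hedged sketch of listing the configured sources described above (the subcommand name is an assumption):

```console
$ spack bootstrap list   # show available bootstrapping sources and whether they are trusted
```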
2021-08-18 11:14:02 -07:00
Vanessasaurus
54e8e19a60
adding spack diff command (#22283)
A `spack diff` will take two specs, and then use the spack.solver.asp.SpackSolverSetup to generate
lists of facts about each (e.g., nodes, variants, etc.) and then take a set difference between the
two to show the user the differences.

Example output:

    $ spack diff python@2.7.8 python@3.8.11
     ==> Warning: This interface is subject to change.

     --- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
     +++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
     @@ variant_value @@
     -  python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
     +  python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
     @@ version @@
     -  openssl Version(1.0.2u)
     +  openssl Version(1.1.1k)
     -  python Version(2.7.8)
     +  python Version(3.8.11)

Currently this uses diff-like output but we will attempt to improve on this in the future.

One use case for `spack diff` is whenever a user needs to disambiguate two installs and cannot
remember how they differ. The command can also output `--json` for
a more analysis-oriented use case where we want to save complete data with all
diffs and the intersection. However, the command is really intended for command-line
use, and we will likely have an analyzer more suited to saving data.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2021-07-30 00:08:38 -07:00
Todd Gamblin
c8efec0295
spack style: add --root option (#25085)
This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.

We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) is used. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.

- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance
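
For example (the path is illustrative):

```bash
# Check the style of another Spack clone, using the configuration of the running Spack
$ spack style --root /path/to/other/spack
```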
2021-07-27 15:09:17 -06:00
Massimiliano Culpo
3228c35df6
Enable/disable bootstrapping and customize store location (#23677)
* Allow enabling/disabling bootstrapping and customizing the store location

This PR adds configuration handles to allow enabling
and disabling bootstrapping, and to customize the store
location.

* Move bootstrap related configuration into its own YAML file

* Add a bootstrap command to manage configuration
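
A minimal sketch of the new handles (subcommand names and the store path are assumptions, shown for illustration):

```bash
# Turn bootstrapping off and back on
$ spack bootstrap disable
$ spack bootstrap enable

# Customize where the bootstrap store lives
$ spack bootstrap root ~/.spack/bootstrap-store
```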
2021-07-12 19:00:37 -04:00
Todd Gamblin
24a4d81097 style: clean up and restructure spack style command
This consolidates code across tools in `spack style` so that each
`run_<tool>` function can be called indirectly through a dictionary
of handlers, and so that checks like finding the executable for the
tool can be shared across commands.

- [x] rework `spack style` to use decorators to register tools
- [x] define tool order in one place in `spack style`
- [x] fix python 2/3 issues to get `isort` checks working
- [x] make isort error regex more robust across versions
- [x] remove unused output option
- [x] change vestigial `TRAVIS_BRANCH` to `GITHUB_BASE_REF`
- [x] update completion
2021-07-07 17:27:31 -07:00
Massimiliano Culpo
32f1aa607c
Add an audit system to Spack (#23053)
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build. 

In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).

Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
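
For example:

```bash
# Run every available audit on package recipes
$ spack audit packages

# Audit the current configuration (e.g. duplicate compiler definitions)
$ spack audit configs
```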

The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
2021-06-18 07:52:08 -06:00
Vanessasaurus
e7ac422982
Adding support for spack monitor with containerize (#23777)
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:

### Docker
Docker works by way of BuildKit and the ability to specify --secret. What this means is that you can prefix a line with a mount of type secret as follows:

```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Where the id for one or more secrets corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined, I would export them on my host and do:

```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container . 
```
Adding `env` to the secret definition tells the build to look for the secret with id "st" in the environment variable `SPACKMON_TOKEN`, for example.

If the user is building locally with a local spack monitor, we also need to set the `--network` to be the host, otherwise you can't connect to it (a la isolation of course.)

### Singularity

Singularity doesn't have as nice an ability to clearly specify secrets, so (hoping this eventually gets implemented) for now I provide the user with instructions to write the credentials to a file, add it to the container to source, and remove it when done.

### Tags

Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):

```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update this PR to not check whether the attribute is there (it will be) or open another one in case this PR is already merged.

Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing, so I had to add it here).

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-06-17 17:15:22 -07:00
Vanessasaurus
53dae0040a
adding spack upload command (#24321)
this will first support uploads for spack monitor, and eventually could be
used for other kinds of spack uploads

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-06-15 14:36:02 -07:00
Vanessasaurus
6f534acbef
adding support for export of private gpg key (#22557)
This PR allows users to pass `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated to include a warning that this usually does not
need to be done.

This addresses an issue brought up in slack, and also represented in #14721.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-28 23:32:57 -07:00
Greg Becker
7490d63c38
Separable module configuration -- without the bugs this time (#23703)
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).

This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the config section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
2021-05-28 14:12:05 -07:00
Scott Wittenburg
91f66ea0a4
Pipelines: reproducible builds (#22887)
### Overview

The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment.  The two key changes here which aim to improve reproducibility are: 

1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts.  This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.  

To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:

- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build

#### Some highlights

One consequence of this change will be much smaller pipeline yaml files.  By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.

Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace.  With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows.  If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.

There are some changes to be aware of in how pipelines should be set up after this PR:

#### Pipeline generation

Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile.  This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts.  If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:

```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
     - spack env activate --without-view .
     - spack ci generate --check-index-only
+      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
   artifacts:
     paths:
-      - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+      - "${CI_PROJECT_DIR}/jobs_scratch_dir"
   tags: ["spack", "public", "medium", "x86_64"]
   interruptible: true
```

Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`.  This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.

#### Rebuild jobs

Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts.  When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs.  The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`.  The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.

When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment.   If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate.  No other changes to existing custom rebuild scripts should be required as a result of this PR. 

As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI.  The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`.  If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:

```
To reproduce this build locally, run:
  spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
  export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```

When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you  can copy-paste to run a local reproducer of the CI job.

This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.

This PR relies on
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support install a single spec already present in the active, concrete environment
2021-05-28 09:38:07 -07:00
Vanessasaurus
3cef5663d8
adding json export for spack blame (#23417)
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a spack blame --json argument,
along with developer docs for it.
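
For example (zlib stands in for any package; spack blame also accepts file paths):

```bash
# Export blame metadata as JSON so it can be saved and reloaded programmatically
$ spack blame --json zlib > zlib-blame.json
```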

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-25 12:40:08 -06:00
Vanessasaurus
b44bb952eb
first set of work to allow for saving local results with spack monitor (#23804)
This work will come in two phases. The first here is to allow saving of a local result
with spack monitor, and the second will add a spack monitor command so the user can
do spack monitor upload.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-25 11:29:34 -07:00
vsoch
f2b362b5b3 adding support to tag a build
This will be useful for running multiple build experiments and organizing them by name

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
2021-05-19 07:01:18 -07:00
Harmen Stoppels
8446bebdd9
Revert "Separable module configurations (#22588)" (#23674)
This reverts commit cefbe48c89.
2021-05-17 06:42:48 -07:00
Greg Becker
cefbe48c89
Separable module configurations (#22588)
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).

This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.

TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
2021-05-14 15:03:28 -07:00
Scott Wittenburg
91de23ce65 install cmd: --no-add in an env installs w/out concretize and add
In an active, concretized environment, support installing one or more
CLI specs only if they are already present in the environment.  The
`--no-add` option is the default for root specs, but optional for
dependency specs.  I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed.  In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
2021-05-07 10:07:53 -07:00
Vanessasaurus
7f91c1a510
Merge pull request #21930 from vsoch/add/spack-monitor
This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations.  Spack can now contact a monitor server and upload analysis -- even after a build is already done.

Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
  - [x] config args
  - [x] environment variables
  - [x] installed files
  - [x] libabigail

There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.

Additional tests will be added in a future PR.
2021-04-15 00:38:36 -07:00
Harmen Stoppels
8b16728fd9
Add "spack [cd|location] --source-dir" (#22321) 2021-03-29 17:31:24 +02:00
Harmen Stoppels
195341113e
Expand relative dev paths in environment files (#22045)
* Rewrite relative dev_spec paths internally to absolute paths in case of relocation of the environment file

* Test relative paths for dev_path in environments

* Add a --keep-relative flag to spack env create

This ensures that relative paths of develop paths are not expanded to
absolute paths when initializing the environment in a different location
from the spack.yaml init file.
2021-03-15 15:38:35 -05:00
Harmen Stoppels
4f1d9d6095
Propagate --test= for environments (#22040)
* Propagate --test= for environments

* Improve help comment for spack concretize --test flag

* Add tests for --test with environments
2021-03-15 15:34:18 -05:00
Vanessasaurus
746081e933
adding spack -c to set one off config arguments (#22251)
This pull request will add the ability for a user to add a configuration argument on the fly, on the command line, e.g.,:

```bash
$ spack -c config:install_tree:root:/path/to/config.yaml -c packages:all:compiler:[gcc] list --help
```
The above command doesn't do anything (I'm just getting help for list) but you can imagine having another root of packages, and updating it on the fly for a command (something I'd like to do in the near future!)

I've moved the logic for config_add that used to be in spack/cmd/config.py into spack/config.py proper, and now both main.py (where spack commands live) and spack/cmd/config.py use these functions. I only needed spack config add, so I didn't move the others. We can move the others if they are also needed in multiple places.
2021-03-13 05:31:26 +00:00
Danny McClanahan
f0275e84ad
fix setup-env.sh on older linux zsh (#21721) 2021-03-10 09:44:50 -08:00
Todd Gamblin
8d3272f82d
spack python: add --path option (#22006)
This adds a `--path` option to `spack python` that shows the `python`
interpreter that Spack is using.

e.g.:

```console
$ spack python --path
/Users/gamblin2/src/spack/var/spack/environments/default/.spack-env/view/bin/python
```

This is useful for debugging, and we can ask users to run it to
understand what python Spack is picking up via preferences in `bin/spack`
and via the `SPACK_PYTHON` environment variable introduced in #21222.
2021-03-07 13:37:26 -08:00
Todd Gamblin
7aa5cc241d
add spack test list --all (#22032)
`spack test list` will show you which *installed* packages can be tested
but it won't show you which packages have tests.

- [x] add `spack test list --all` to show which packages have test methods
- [x] update `has_test_method()` to handle package instances *and*
      package classes.
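
For example:

```bash
# Installed packages that can be tested
$ spack test list

# Every package that defines a test method, installed or not
$ spack test list --all
```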
2021-03-07 11:44:17 -08:00
Massimiliano Culpo
10e9e142b7
Bootstrap clingo from sources (#21446)
* Allow the bootstrapping of clingo from sources

Allow python builds with system python as external
for MacOS

* Ensure consistent configuration when bootstrapping clingo

This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.

* Github actions: test clingo with bootstrapping from sources

* Add command to inspect and clean the bootstrap store

 Prevent users from setting the install tree root to the bootstrap store

* clingo: documented how to bootstrap from sources

Co-authored-by: Gregory Becker <becker33@llnl.gov>
2021-03-03 09:37:46 -08:00
Paul Ferrell
e85a8cde37
Config prefer upstream (#21487)
This allows for quickly configuring a spack install/env to use upstream packages by default. This is particularly important when upstreaming from a set of officially supported spack installs on a production cluster. By configuring package preferences to match the upstream, you ensure maximal reuse of existing package installations.
2021-02-24 10:57:50 -08:00
Scott Wittenburg
5b0507cc65
Pipelines: Temporary buildcache storage (#21474)
Before this change, in pipeline environments where runners do not have access
to persistent shared file-system storage, the only way to pass buildcaches to
dependents in later stages was by using the "enable-artifacts-buildcache" flag
in the gitlab-ci section of the spack.yaml.  This change supports a second
mechanism, named "temporary-storage-url-prefix", which can be provided instead
of the "enable-artifacts-buildcache" feature, but the two cannot be used at the
same time.  If this prefix is provided (only "file://" and "s3://" urls are
supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url
for a mirror where pipeline jobs will write buildcache entries for use by jobs
in subsequent stages.  If this prefix is provided, a cleanup job will be
generated to run after all the rebuild jobs have finished that will delete the
contents of the temporary mirror.  To support this behavior a new mirror
sub-command has been added: "spack mirror destroy" which can take either a
mirror name or url.
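
For example (the flag names are shown as a sketch):

```bash
# Remove the contents of a temporary mirror, by name or by url
$ spack mirror destroy --mirror-name my-temporary-mirror
$ spack mirror destroy --mirror-url s3://my-bucket/temporary-prefix
```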

This change also fixes a bug in the generation of the "needs" list for each job.
Each job's "needs" list is supposed to contain only direct dependencies for
scheduling purposes, unless "enable-artifacts-buildcache" is specified.  Only in
that case are the needs lists supposed to contain all transitive dependencies.
This change fixes a bug that caused the needs lists to always contain all
transitive dependencies, regardless of whether or not
"enable-artifacts-buildcache" was specified.
2021-02-16 18:21:18 -07:00
Scott Wittenburg
428f831899
Pipelines: DAG Pruning (#20435)
Pipelines: DAG pruning

During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors.  By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline.  To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument.  To speed up the pipeline generation process, pass the --check-index-only argument.  This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors.  The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.
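
For example:

```bash
# Generate jobs even for specs already up to date on a mirror, consulting only
# the remote buildcache indices instead of fetching individual spec.yaml files
$ spack ci generate --no-prune-dag --check-index-only \
    --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
```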

This change removes the final-stage-rebuild-index block from gitlab-ci section of spack.yaml.  Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set.  Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
2021-02-16 09:12:37 -07:00
Adam J. Stewart
7ccb9992a6
Procedure to deprecate old versions of software (#20767)
* Procedure to deprecate old versions of software

* Add documentation

* Fix bug in logic

* Update tab completion

* Deprecate legacy packages

* Deprecate old mxnet as well

* More explicit docs
2021-02-09 13:51:18 -05:00
Massimiliano Culpo
694d633a2c
spack external find: allow to search by tags (#21407)
This commit adds an option to the `external find`
command that allows it to search by tags. In this
way groups of executables with common purposes can
be grouped under a single name, and a simple command
can be used to detect all of them.

As an example, this introduces the 'build-tools' tag to
search for common development tools on a system.
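
For example (assuming the option is spelled --tag):

```bash
# Detect all external executables tagged as build tools
$ spack external find --tag build-tools
```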
2021-02-04 13:17:32 +01:00
Adam J. Stewart
aa8e026242
spack setup: remove the command for v0.17.0 (#20277)
spack setup was deprecated in 0.16 and will be removed in 0.17

Follow-up to #18240
2021-01-27 09:24:09 +01:00
Vanessasaurus
67ce1939a3
spack python: allow use of IPython (#20329)
This adds a -i option to "spack python" which allows use of the
IPython interpreter; it can be used with "spack python -i ipython".
This assumes it is available in the Python instance used to run
Spack (i.e. that you can "import IPython").
2021-01-05 16:54:47 -08:00
Todd Gamblin
a8ccb8e116 copyrights: update all files with license headers for 2021
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
      `spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed
      for oneapi.py
2021-01-02 12:12:00 -08:00
Todd Gamblin
78f39bdfee commands: add spack license update-copyright-year
This adds a new subcommand to `spack license` that automatically updates
the copyright year in files that should have a license header.

- [x] add `spack license update-copyright-year` command
- [x] add test
2021-01-02 12:12:00 -08:00
Tom Scogland
857749a9ba
add mypy to style checks; rename spack flake8 to spack style (#20384)
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a 
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.

In addition to these changes, this includes:

* rename `spack flake8` to `spack style`

Aliases flake8 to style, and just runs flake8 as before, but with
a warning.  The style command runs both `flake8` and `mypy`,
in sequence. Added --no-<tool> options to turn off one or the
other; both are on by default.  Fixed two issues caught by the tools.
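
For example:

```bash
# Run only flake8, or only mypy
$ spack style --no-mypy
$ spack style --no-flake8
```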

* stub typing module for python2.x

We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x.  Doing it this way means we can only reliably
use all type hints in python3.7+, and mypi.ini has been updated to
reflect that.

* add non-default black check to spack style

This is a first step to requiring black.  It doesn't enforce it by
default, but it will check it if requested.  Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow.  All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.

* use style check in github action

Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
2020-12-22 21:39:10 -08:00
Tom Scogland
c1e4f3e131
Refactor flake8 handling and tool compatibility (#20376)
This PR does three related things to try to improve developer tooling quality of life:

1. Adds new options to `.flake8` so it applies the rules of both `.flake8` and `.flake_package` based on paths in the repository.
2. Adds a re-factoring of the `spack flake8` logic into a flake8 plugin so using flake8 directly, or through editor or language server integration, only reports errors that `spack flake8` would.
3. Allows star import of `spack.pkgkit` in packages, since this is now the thing that needs to be imported for completion to work correctly in package files, it's nice to be able to do that.

I'm sorely tempted to sed over the whole repository and put `from spack.pkgkit import *` in every package, but at least being allowed to do it on a per-package basis helps.

As an example of what the result of this is:

```
~/Workspace/Projects/spack/spack develop* ⇣
❯ flake8 --format=pylint ./var/spack/repos/builtin/packages/kripke/package.py
./var/spack/repos/builtin/packages/kripke/package.py:6: [F403] 'from spack.pkgkit import *' used; unable to detect undefined names
./var/spack/repos/builtin/packages/kripke/package.py:25: [E501] line too long (88 > 79 characters)

~/Workspace/Projects/spack/spack refactor-flake8*
1 ❯ flake8 --format=spack ./var/spack/repos/builtin/packages/kripke/package.py

~/Workspace/Projects/spack/spack refactor-flake8*
❯ flake8 ./var/spack/repos/builtin/packages/kripke/package.py
```

* qa/flake8: update .flake8, spack formatter plugin

Adds:
* Modern flake8 settings for per-path/glob error ignores, allows
  packages to use the same `.flake8` as the rest of spack
* A spack formatter plugin to flake8 that implements the behavior of
  `spack flake8` for direct invocations.  Makes integration with
  developer tooling nicer, linting with flake8 reports only errors that
  `spack flake8` would report.  Using pyls and pyls-flake8, or any other
  non-format-dependent flake8 integration, now works with spack's rules.

* qa/flake8: allow star import of spack.pkgkit

To get working completion of directives and spack components it's
necessary to import the contents of spack.pkgkit.  At the moment doing
this makes flake8 displeased.  For now, allow star imports of both spack.pkgkit and
spack; the next step is to ban spack * and require spack.pkgkit *.

* first cut at refactoring spack flake8

This version still copies all of the files to be checked as before, and
some other things that probably aren't necessary, but it relies on the
spack formatter plugin to implement the ignore logic.

* keep flake8 from rejecting itself

* remove separate packages flake8 config

* fix failures from too many files

I ran into this in the PR converting pkgkit to std.  The solution in
that branch does not work in all cases as it turns out, and all the
workarounds I tried that used generated configs to get a single invocation
of flake8 with a filename option to work failed.  It's an astonishingly
frustrating config option.

Regardless, this removes all temporary file creation from the command
and relies on the plugin instead.  To work around the huge number of
files in spack and still allow the command to control what gets checked,
it scans files in batches of 100.  This is a completely arbitrary number
but was chosen to be safely under common command-line length limits.  One
side-effect of this is that every 100 files the command will produce
output, rather than only at the end, which doesn't seem like a terrible
thing.
2020-12-22 09:28:46 -08:00
Tom Scogland
71c77fa8fa
minimal zsh completion (#20253)
Since zsh can load bash completion files natively, it seems reasonable to just turn this on.
The only changes are to switch from `type -t`, which zsh doesn't support, to using `type`
with a regex, and to add a new arm to the sourcing of the completions so that it works
for zsh as well as bash.
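
Usage is unchanged; zsh users source the same setup script as bash users, e.g. in ~/.zshrc:

```bash
# In ~/.zshrc, exactly as in ~/.bashrc (path relative to the Spack clone)
source /path/to/spack/share/spack/setup-env.sh
```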

Could use more bash/dash/etc testing probably, but everything I've thought to try has
worked so far.

Notes:
* unit-test zsh support, fix issues
Specifically, fixed word splitting in completion-test, used a different
method to apply sh emulation to zsh-loaded bash completion, and fixed
an incompatibility in regex operator quoting requirements.

* compinit now ignores insecure directories
Completion isn't meant to be enabled in non-interactive environments, so
by default compinit will ask the user if they want to ignore insecure
directories or load them anyway.  To pass the spack unit tests in GH
actions, this prompt must be disabled, so ignore explicitly until a
better solution can be found.

* debug functions test also requires bash emulation
COMP_WORDS is a bash-ism that zsh doesn't natively support, so turn on
emulation for just that section of tests to allow the comparison to
work.  Does not change the behavior of the functions themselves since
they are already pinned to sh emulation elsewhere.

* propagate change to .in file

* fix comment and update script based on .in
2020-12-18 17:26:15 -08:00
vvolkl
ed258ca9e9
Add "spack versions --new" flag to only show new versions (#20030)
* [cmd versions] add spack versions --new flag to only fetch new versions
* [cmd versions] rename --latest to --newest and add --remote-only
* [cmd versions] add tests for --remote-only and --new
* [cmd versions] update shell tab completion
* [cmd versions] remove test for --remote-only --new which gives empty output
* [cmd versions] final rename
* formatting fixes
* add brillig mock package
* add test for spack versions --new
* [brillig] format
* [versions] increase test coverage
* Update lib/spack/spack/cmd/versions.py (apply review suggestions)

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
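
For example (zlib stands in for any package):

```bash
# Show only versions that are not already checksummed in the package recipe
$ spack versions --new zlib
```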
2020-12-07 09:29:10 -06:00
eugeneswalker
badf3368ad
allow install of build-deps from cache via --include-build-deps switch (#19955)
* allow install of build-deps from cache via --include-build-deps switch

* make clear that --include-build-deps is useful for CI pipeline troubleshooting
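
For example (zlib stands in for the spec being debugged):

```bash
# Also install build-only dependencies from the buildcache
$ spack install --include-build-deps zlib
```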
2020-12-03 15:27:01 -08:00
Michael Kuhn
20367e472d
cmd: add spack mark command (#16662)
This adds a new `mark` command that can be used to mark packages as either
explicitly or implicitly installed. Apart from fixing the package
database after installing a dependency manually, it can be used to
implement upgrade workflows as outlined in #13385.

The following commands demonstrate how the `mark` and `gc` commands can be
used to only keep the current version of a package installed:
```console
$ spack install pkgA
$ spack install pkgB
$ git pull # Imagine new versions for pkgA and/or pkgB are introduced
$ spack mark -i -a
$ spack install pkgA
$ spack install pkgB
$ spack gc
```

If there is no new version for a package, `install` will simply mark it as
explicitly installed and `gc` will not remove it.

Co-authored-by: Greg Becker <becker33@llnl.gov>
2020-11-18 03:20:56 -08:00
Greg Becker
77b2e578ec
spack test (#15702)
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. The PR includes generic smoke tests for MPI
implementations and for C, C++, and Fortran compilers, as well as specific
smoke tests for 18 packages.
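
A typical flow in an environment looks like:

```bash
# Install the environment, smoke-test everything in it, then review the results
$ spack install
$ spack test run
$ spack test results
```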

Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.

This handles the following trickier aspects of testing with direct
support in Spack's package API:

- [x] Caching source or intermediate build files at build time for
      use at test time.
- [x] Test dependencies,
- [x] packages that require a compiler for testing (such as library only
      packages).

See the packaging guide for more details on using Spack testing support.
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-11-18 02:39:02 -08:00
Todd Gamblin
0ed019d4ef concretizer: first working version with pyclingo interface
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
      ultimately hurt performance)
2020-11-17 10:04:13 -08:00
Todd Gamblin
4092c90b57
commands: add spack tutorial command (#19808)
Added a command to set up Spack for our tutorial at
https://spack-tutorial.readthedocs.io.

The command does some common operations we need first-time users to do.
Specifically:

- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be
  left over from prior parts of the tutorial
- adds a mirror and trusts its public key
2020-11-09 12:47:08 +01:00
Scott Wittenburg
31f57e56bb
Binary caching: use full hashes (#19209)
* "spack install" now has a "--require-full-hash-match" option, which
  forces Spack to skip an available binary package when the full hash
  doesn't match. Normally only a DAG-hash match is required, which
  ensures equivalent Specs, but does not account for changing logic
  inside the associated package.
* Add a local binary cache index which tracks specs that have a binary
  install available in a remote binary cache. It is updated with
  "spack buildcache list" or for a given spec when a binary package
  is retrieved for that Spec.
2020-10-30 12:53:33 -07:00
elsagermann
4750d479a0
Add testing option to dev-build command (#17293)
* ADD: testing to dev-build command

* RM: mutually exclusive group for testing in parser

* FIX: test option to subparser and not testing

* ADD: spack-completion.bash

* RM: local devbuildcosmo cmd

* FIX: bad merge --drop-in -b --before options forgotten

* FIX: --test place in spack-completion.bash

* FIX: typo

* FIX: blank line removing

* FIX: trailing white space

Co-authored-by: Elsa Germann <egermann@tsa-ln002.cm.cluster>
2020-10-18 23:17:07 -05:00
Greg Becker
7a6268593c
Environments: specify packages for developer builds (#15256)
* allow environments to specify dev-build packages

* spack develop and spack undevelop commands

* never pull dev-build packages from bincache

* reinstall dev_specs when code has changed; reinstall dependents too

* preserve dev info paths and versions in concretization as special variant

* move install overwrite transaction into installer

* move dev-build argument handling to package.do_install

now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install

* allow 'any' as wildcard for variants

* spec: allow anonymous dependencies

raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build

* fix variant class hierarchy
2020-10-15 17:23:16 -07:00
Scott Wittenburg
438f80d19e
Revert binary distribution cache manager (#19158)
This reverts #18359 and follow-on PRs intended to address issues with
#18359 because that PR changes the hash of all specs. A future PR will
reintroduce the changes.

* Revert "Fix location in spec.yaml where we look for full_hash (#19132)"
* Revert "Fix fetch of spec.yaml files from buildcache (#19101)"
* Revert "Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager"
2020-10-05 16:02:37 -07:00
Scott Wittenburg
a44135dccf
Update buildcache key index when we update the package index (#19117)
This change makes sure that when we run the pipeline job that updates
the buildcache package index on the remote mirror, we also update the
key index.  The public keys corresponding to the signing keys used to
sign the packages were pushed to the mirror as a part of creating the
buildcache index, so this is just ensuring those keys are reflected
in the key index.

Also, this change makes sure the "spack buildcache update-index"
job runs even when there may have been pipeline failures, since we
would like the index always to reflect the true state of the mirror.
2020-10-02 11:00:42 -06:00
Scott Wittenburg
075c3e0d92
Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager
Add binary distribution cache manager
2020-09-30 16:37:35 -06:00
Omar Padron
2d93154119
Streamline key management for build caches (#17792)
* Rework spack.util.web.list_url()

list_url() now accepts an optional recursive argument (default: False)
for controlling whether to only return files within the prefix url or to
return all files whose path starts with the prefix url.  Allows for the
most efficient implementation for the given prefix url scheme.  For
example, only recursive queries are supported for S3 prefixes, so the
returned list is trimmed down if recursive == False, but the native
search is returned as-is when recursive == True.  Suitable
implementations for each case are also used for file system URLs.

* Switch to using an explicit index for public keys

Switches to maintaining a build cache's keys under build_cache/_pgp.
Within this directory is an index.json file listing all the available
keys and a <fingerprint>.pub file for each such key.

 - Adds spack.binary_distribution.generate_key_index()
   - (re)generates a build cache's key index

 - Modifies spack.binary_distribution.build_tarball()
   - if tarball is signed, automatically pushes the key used for signing
     along with the tarball
   - if regenerate_index == True, automatically (re)generates the build
     cache's key index along with the build cache's package index; as in
     spack.binary_distribution.generate_key_index()

 - Modifies spack.binary_distribution.get_keys()
   - a build cache's key index is now used instead of programmatic
     listing

 - Adds spack.binary_distribution.push_keys()
   - publishes keys from Spack's keyring to a given list of mirrors

 - Adds new spack subcommand: spack gpg publish
   - publishes keys from Spack's keyring to a given list of mirrors

 - Modifies spack.util.gpg.Gpg.signing_keys()
   - Accepts optional positional arguments for filtering the set of keys
     returned

 - Adds spack.util.gpg.Gpg.public_keys()
   - As spack.util.gpg.Gpg.signing_keys(), except public keys are
     returned

 - Modifies spack.util.gpg.Gpg.export_keys()
   - Fixes an issue where GnuPG would prompt for user input if trying to
     overwrite an existing file

 - Modifies spack.util.gpg.Gpg.untrust()
   - Fixes an issue where GnuPG would fail for inputs that were not key
     fingerprints

 - Modifies spack.util.web.url_exists()
   - Fixes an issue where url_exists() would throw instead of returning
     False

* rework gpg module/fix error with very long GNUPGHOME dir

* add a shim for functools.cached_property

* handle permission denied error in gpg util

* fix tests/make gpgconf optional if no socket dir is available
2020-09-25 12:54:24 -04:00
Scott Wittenburg
ace52bd476 Provide your own script, before_script, and after_script 2020-09-14 10:37:42 -06:00
Richarda Butler
d721bd8070
commands: update help for spack install --yes-to-all (#18367)
`spack install --yes-to-all` doesn't actually make the build non-interactive,
but that is why people typically use it. This documents that you must also
specify `--no-checksum` for a fully non-interactive build.
2020-09-08 13:18:25 -07:00
Robert Blake
ea57171712
Make spack environment configurations writable from spack external and spack compiler find (#18165)
* spack config: default modification scope can be an environment

The previous model was that environments are the highest priority config
scope for config reading operations, but were not considered for config
writing operations. Now, the active environment is the highest priority
config scope for both reading and writing operations.

Now spack config add, spack external find and spack compiler find set
configuration in the environment by default if an environment is active. This is a
change in default behavior for these routines, but better matches the mental
model of an environment taking precedence over the user's default config file.

* add scope argument to 'spack external find' to choose non-default scope

* Increase testing for config modifications on environments

Co-authored-by: Gregory Becker <becker33@llnl.gov>
2020-09-05 01:12:26 -07:00
Massimiliano Culpo
c0d490ffbe Simplify the detection protocol for packages
Packages can implement “determine_version” to support detection
of external instances of a package. This is generally easier
than implementing “determine_spec_details”. The API for
determine_version is similar: for example you can return
“None” to indicate that an executable is not an instance
of a package.

Users may implement a “determine_variants” method for a package.
When doing external detection, executables are grouped by version
and each group results in a single invocation of “determine_variants”
for the associated spec. The method returns a string specifying
the variants for the package. The method may additionally return
a dictionary representing extra attributes for the package.

These will be stored in the spec yaml and can be retrieved
from self.spec.extra_attributes

The Spack GCC package has been updated with an implementation
of “determine_variants” which adds the following extra
attributes to the package: c, cxx, fortran
2020-08-10 11:59:05 -07:00
Massimiliano Culpo
193e8333fa Update packages.yaml format and support configuration updates
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).

With this update, Spack cannot modify existing configs/environments
without updating them (e.g. “spack config add” will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
2020-08-10 11:59:05 -07:00
Adam J. Stewart
207e496162
spack create: ask how many to download (#17373) 2020-07-08 09:38:42 +02:00
Johannes Blaschke
1d55adfd2b
Add fish shell support (#9279)
* share/spack/setup-env.fish file to setup environment in fish shell

* setup-env.fish testing script

* Update share/spack/setup-env.fish

Co-Authored-By: Elsa Gonsiorowski, PhD <gonsie@me.com>

* Update share/spack/qa/setup-env-test.fish

Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>

* updates completions using `spack commands --update-completion`

* added stderr-nocaret warning

* added fish shell tests to CI system


Co-authored-by: becker33 <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elsa Gonsiorowski, PhD <gonsie@me.com>
2020-06-30 14:26:27 -05:00
Greg Becker
d71fdc9719
remove three commands that have been deprecated since v0.13.0 (#17291)
* remove three commands that have been deprecated since v0.13.0
2020-06-29 11:15:56 -05:00
Scott Wittenburg
dfac09eade
Use json for buildcache index (#15002)
* Start moving toward a json buildcache index

* Add spec and database index schemas

* Add a schema for buildcache spec.yaml files

* Provide a mode for database class to generate buildcache index

* Update db and ci tests to validate object w/ new schema

* Remove unused temporary upload-s3 command

* Use database class to generate buildcache index

* Do not generate index with each buildcache creation

* Make buildcache index mode into a couple of constructor args to Database class

* Use keyword args for  _createtarball 

* Parse new json index when we get specs from buildcache

Now that only one index file per mirror needs to be fetched in
order to have all the concrete specs for binaries available on the
mirror, we can just fetch and refresh the cached specs every time
instead of needing to use the '-f' flag to force re-reading.
2020-06-26 17:05:56 -05:00
Omar Padron
7c54aa2eb0
add workaround for gitlab ci needs limit (#17219)
* add workaround for gitlab ci needs limit

* fix style/address review comments

* convert filter obj to list

* update command completion

* remove dict comprehension

* add workaround tests

* fix sorting issue between disparate types

* add indices to format
2020-06-25 14:27:20 -04:00
Greg Becker
b26e93af3d
spack config: new subcommands add/remove (#13920)
spack config add <value>: add the nested value to the specified configuration scope
spack config remove/rm: remove the specified configuration from the relevant scope
2020-06-25 09:38:01 +02:00
Tamara Dahlgren
48d3e8d350
features: Add install failure tracking removal through spack clean (#15314)
* Add ability to force removal of install failure tracking data through spack clean

* Add clean failures option to packaging guide
2020-06-24 20:28:53 -05:00
Tamara Dahlgren
96932d65a8 Added support for --fail-fast install option to terminate on first failure 2020-06-23 10:22:41 -07:00
Omar Padron
224dc95159
Pre ci optimization (#16372)
* add initial optimization script

* integrate optimization in spack ci

* make optimization opt-in

* fix import error

* flake8 fixes

* update command completion

* work around vermin errors

* fix sphinx errors
2020-06-22 13:19:47 -04:00
Massimiliano Culpo
5b272e3ff3
commands: use a single ThreadPool for spack versions (#16749)
This fixes a fork bomb in `spack versions`. Recursive generation of pools
to scrape URLs in `_spider` was creating large numbers of processes.
Instead of recursively creating process pools, we now use a single
`ThreadPool` with a concurrency limit.

More on the issue: having ~10 users running `spack versions` at the same
time on front-end nodes caused a kernel lockup due to the high number
of sockets opened (sys-admins report ~210k distributed over 3 nodes).
Users were internal, so they had ulimit -n set to ~70k.

The forking behavior could be observed by just running:

    $ spack versions boost

and checking the number of processes spawned. Number of processes
per se was not the issue, but each one of them opens a socket
which can stress `iptables`.

In the original issue the kernel watchdog was reporting:

    Message from syslogd@login03 at May 19 12:01:30 ...
    kernel:Watchdog CPU:110 Hard LOCKUP
    Message from syslogd@login03 at May 19 12:01:31 ...
    kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]
    Message from syslogd@login03 at May 19 12:01:31 ...
    kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
2020-06-05 00:08:32 -07:00
Peter Scheibel
24775697f5
Mirrors: add option to exclude packages from "mirror create" (#14154)
* add an --exclude-file option to 'spack mirror create' which allows a user to specify a file of specs to exclude when creating a mirror. This is anticipated to be especially useful when using the '--all' option

* allow specifying number of versions when mirroring all packages

* when mirroring all specs within an environment, include dependencies of root specs

* add '--exclude-specs' option to allow user to specify that specs should be excluded on the command line

* add test for excluding specs
2020-06-03 17:43:51 -07:00
Greg Becker
3347ef2de4
Feature: add option to create view by copying/relocating files (#16480)
* add subcommand `spack view copy/relocate`

* update bash completions

* add copy/relocate commands to view tests

* allow copied views to be removed
2020-06-03 09:45:13 -07:00
Scott Wittenburg
e0572a7d96 Pipelines: Support DAG scheduling and dynamic child pipelines
This change also adds a code path through the spack ci pipelines
infrastructure which supports PR testing on the Spack repository.
Gitlab pipelines that run as a result of a PR (either its creation or a push
to the PR branch) will only verify that the packages in the environment
build without error.  When the PR branch is merged to develop,
another pipeline will run which results in the generated binaries
getting pushed to the binary mirror.
2020-05-14 21:11:07 -07:00
Ben Bergen
37e307e8cd
Added alias and bash completion for spacktivate (#16472) 2020-05-13 12:02:38 -06:00
Massimiliano Culpo
43c9ad3421
Remove 'spack bootstrap' and associated docs (#15179)
fixes #15145

This commit removes the outdated `spack bootstrap`
command and any reference to it in the documentation
and unit tests.
2020-05-11 10:55:18 -07:00
iarspider
08f449ae9a
"spack checksum" QoL (#14311)
* Non-interactive mode for spack checksum; allow passing 'package@version' to spack checksum

* Flake8 fixes

* Update checksum.py

Fix typo

* Update spack-completion script

* Automatically set non-interactive mode if more than one version passed

* Update lib/spack/spack/cmd/checksum.py

Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>

* Add documentation and update spack-completion

* Flake8

* Rename option

* Update spack-completion

* Update lib/spack/spack/cmd/checksum.py

Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>

* Update checksum.py

* Update stage.py

* Update create.py

Use batch mode when adding a new package

Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-05-07 18:34:36 -05:00
Peter Scheibel
b030a81a5f
Automatically find externals (#15158)
Add a `spack external find` command that tries to populate
`packages.yaml` with external packages from the user's `$PATH`. This
focuses on finding build dependencies. Currently, support has only been
added for `cmake`.

For a package to be discoverable with `spack external find`, it must define:
  * an `executables` class attribute containing a list of
    regular expressions that match executable names.
  * a `determine_spec_details(prefix, specs_in_prefix)` method

Spack will call `determine_spec_details()` once for each prefix where
executables are found, passing in the path to the prefix and the path to
all found executables. The package is responsible for invoking the
executables and figuring out what type of installation(s) are in the
prefix, and returning one or more specs (each with version, variants or
whatever else the user decides to include in the spec).

The found specs and prefixes will be added to the user's `packages.yaml`
file. Providing the `--not-buildable` option will mark all generated
entries in `packages.yaml` as `buildable: False`
2020-05-05 17:37:34 -07:00
Axel Huebl
d0dfa1ea4d
dev-build: --drop-in <shell> (#14887)
* dev-build: --drop-in <shell>

Add a `--drop-in <shell>` option to `spack dev-build`.
This option will automatically run a
`spack build-env <spec> -- <shell>` at the end of a `dev-build`, e.g.
to quickly drop-and-devel into a build phase of a package.

Example usage:
```
spack dev-build --before cmake --drop-in bash openpmd-api@develop
```

* build_env: drop in unit test

Co-authored-by: Greg Becker <becker33@llnl.gov>
2020-05-01 09:37:21 -07:00
Axel Huebl
00d83cd79d
dev-build: stop before phase (#14699)
Add `-b,--before` option to dev-build command to stop before the phase in question.
2020-04-28 09:55:57 -07:00
G-Ragghianti
bae4f91bfe
Add option "--first" for "spack load" (#15622)
* Implemented --first option for "spack load"

* added test for "spack load --first"

Co-authored-by: gragghia <gragghia@localhost.localdomain>
2020-04-03 13:33:20 -07:00