Commit Graph

653 Commits

Author SHA1 Message Date
Todd Gamblin
24c01d57cf
imports: sort imports everywhere in Spack (#24695)
* fix remaining flake8 errors

* imports: sort imports everywhere in Spack

We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.

This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
2021-07-08 22:12:30 +00:00
Massimiliano Culpo
3d11716e54
Add when context manager to group common constraints in packages (#24650)
This PR adds a context manager that permits grouping the common part of a `when=` argument and adding it to the context:
```python
class Gcc(AutotoolsPackage):
    with when('+nvptx'):
        depends_on('cuda')
        conflicts('@:6', msg='NVPTX only supported in gcc 7 and above')
        conflicts('languages=ada')
        conflicts('languages=brig')
        conflicts('languages=go')
```
The above snippet is equivalent to:
```python
class Gcc(AutotoolsPackage):
    depends_on('cuda', when='+nvptx')
    conflicts('@:6', when='+nvptx', msg='NVPTX only supported in gcc 7 and above')
    conflicts('languages=ada', when='+nvptx')
    conflicts('languages=brig', when='+nvptx')
    conflicts('languages=go', when='+nvptx')
```
which requires repeating the `when='+nvptx'` argument. The context manager can improve readability and permits grouping together directives related to the same semantic aspect (e.g. all the directives needed to model the behavior of `gcc` when `+nvptx` is active).

Modifications:

- [x] Added a `when` context manager to be used with package directives
- [x] Add unit tests and documentation for the new feature
- [x] Modified `cp2k` and `gcc` to show the use of the context manager
2021-07-02 08:43:15 -07:00
Dylan Simon
963b931309
docs: link projections docs to spec format (#24478) 2021-06-27 08:38:28 +00:00
Adrien Bernede
6b852bc170
Doc: Note on required changes after merge of reproducible builds (#24347)
* Suggestion of a note for conversion of existing pipelines.

* Wording

* Fix format in .rst note

* Wording
2021-06-25 11:02:26 -06:00
Erik Schnetter
e3b220f699
Implement CVS fetcher (#23212)
Spack packages can now fetch versions from CVS repositories. Note that
this fetch mechanism is unsafe unless the `:extssh:` access method is used;
most public CVS repositories use an insecure protocol implemented as part of CVS.
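For illustration, a hedged sketch of what a package version using the new fetcher might look like; the exact argument names (`cvs=` with a `%module=` suffix, plus optional `branch=`/`date=`) are my recollection of the fetcher interface and should be checked against the fetch-strategy docs added by this PR:

```python
class Example(AutotoolsPackage):
    """Hypothetical package fetched from CVS (sketch only)."""

    # The `cvs` argument carries the CVSROOT; the module name is appended
    # after `%module=`. A `date=` (or `branch=`) selects what to check out.
    version('develop',
            cvs=':extssh:user@cvs.example.com:/cvsroot%module=example')
    version('2021-06-01', date='2021-06-01',
            cvs=':pserver:anonymous@cvs.example.com:/cvsroot%module=example')
```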
2021-06-22 09:51:31 -07:00
Massimiliano Culpo
32f1aa607c
Add an audit system to Spack (#23053)
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build. 

In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).

Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.

The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
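A few illustrative invocations (only `spack audit packages` appears in the text above; the other subcommand names are my best recollection of the new command's interface):

```bash
$ spack audit configs            # check the current configuration for common issues
$ spack audit packages openmpi   # audit one (or more) specific package recipes
$ spack audit packages           # audit every package in the built-in repository
$ spack audit list               # list the available audits and their tags
```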
2021-06-18 07:52:08 -06:00
Vanessasaurus
e7ac422982
Adding support for spack monitor with containerize (#23777)
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:

### Docker
Docker works by way of BuildKit and the ability to specify `--secret`. This means you can prefix a line with a mount of type secret as follows:

```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Here the id for each secret corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with `su` (spackmon user) and `st` (spackmon token) defined, I would export them on my host and run:

```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container . 
```
Adding `env` to a secret definition tells the build to read the secret with id `st`, for example, from the environment variable `SPACKMON_TOKEN`.

If the user is building locally against a local spack monitor, `--network` must also be set to `host`; otherwise the build cannot connect to it (because of container network isolation).

### Singularity

Singularity does not have as clean a way to specify secrets, so (hoping this eventually gets implemented) for now the documentation instructs the user to write the credentials to a file, add it to the container to be sourced, and remove it when done.

### Tags

Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):

```python
"tags": getattr(args, "monitor_tags", None)
```
Once that PR is merged to update the argument group, this will work here; I can then either update this PR to drop the attribute check (the attribute will exist) or open another PR if this one has already been merged.

Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing, so I had to add it here).

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-06-17 17:15:22 -07:00
Vanessasaurus
53dae0040a
adding spack upload command (#24321)
This will first support uploads for spack monitor, and could eventually be
used for other kinds of spack uploads.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-06-15 14:36:02 -07:00
Vanessasaurus
5521aae4f7
extending example for buildcaches (#22504)
* extending example for buildcaches

I was attempting to create a local build cache from a directory. I found the
docs for both buildcaches and mirrors, but they did not make clear that the
mirror URL can simply point to the local filesystem. I am extending the buildcache
docs with an example of creating and interacting with one on the filesystem,
because I suspect other users will run into this need and possibly not find what
they are looking for (a rough sketch of that workflow follows this message).

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

* adding as follows to spack mirror list

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
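A rough sketch of the filesystem workflow described above; the flag names (`-d`/`--directory`, `-a`/`--allow-root`, `-u`/`--unsigned`) reflect my recollection of the buildcache CLI of this era and may differ in later Spack versions:

```bash
$ spack mirror add local-cache file:///tmp/local-cache     # a plain directory can back a mirror
$ spack buildcache create -a -u -d /tmp/local-cache zlib   # populate the cache from an installed spec
$ spack buildcache list                                    # query the cache through the added mirror
$ spack buildcache install -u zlib                         # ...or install straight from it
```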
2021-06-14 21:46:27 -07:00
Vanessasaurus
39cdd085c9
adding more description to binary caches (#23934)
It is currently kind of confusing for the reader to distinguish `spack buildcache install`
from `spack install`, and it is not clear how to use a build cache once a mirror is added.
Hopefully this little bit of description can help (and I hope I got it right!)

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-06-14 13:17:35 -07:00
Adam J. Stewart
11f370e7be
setup-env: allow users to skip module function setup (#24236)
* setup-env: allow users to skip module function setup

* Add documentation on SPACK_SKIP_MODULES
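A minimal sketch of the assumed usage (set the variable before sourcing the setup script):

```bash
$ export SPACK_SKIP_MODULES=1       # assumption: any non-empty value skips module-function setup
$ source share/spack/setup-env.sh   # shell integration loads without the module functions
```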
2021-06-11 19:19:24 +00:00
Adam J. Stewart
3d0bad465b
Docs: fix missing backtick in Environments docs (#24109) 2021-06-07 19:05:09 +02:00
Vanessasaurus
6f534acbef
adding support for export of private gpg key (#22557)
This PR allows users to pass `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated to include a warning that this usually does not
need to be done.

This addresses an issue brought up in slack, and also represented in #14721.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
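A hypothetical invocation, assuming the new flags hang off `spack gpg create` and each take a destination file; check `spack gpg create --help` for the real signature:

```bash
$ spack gpg create --export public.asc --export-secret secret.asc \
      "My Signing Key" my.key@example.com
```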
2021-05-28 23:32:57 -07:00
Greg Becker
7490d63c38
Separable module configuration -- without the bugs this time (#23703)
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).

This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the config section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
2021-05-28 14:12:05 -07:00
Scott Wittenburg
91f66ea0a4
Pipelines: reproducible builds (#22887)
### Overview

The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment.  The two key changes here which aim to improve reproducibility are: 

1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts.  This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.  

To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:

- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build

#### Some highlights

One consequence of this change will be much smaller pipeline yaml files.  By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.

Additionally, `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace.  With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows.  If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.

There are some changes to be aware of in how pipelines should be set up after this PR:

#### Pipeline generation

Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile.  This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts.  If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:

```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
     - spack env activate --without-view .
     - spack ci generate --check-index-only
+      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
   artifacts:
     paths:
-      - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+      - "${CI_PROJECT_DIR}/jobs_scratch_dir"
   tags: ["spack", "public", "medium", "x86_64"]
   interruptible: true
```

Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`.  This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.

#### Rebuild jobs

Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts.  When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs.  The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`.  The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.

When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment.   If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate.  No other changes to existing custom rebuild scripts should be required as a result of this PR. 

As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job and exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI.  The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`.  If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:

```
To reproduce this build locally, run:
  spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
  export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```

When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you  can copy-paste to run a local reproducer of the CI job.

This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.

This PR relies on:
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support install a single spec already present in the active, concrete environment
2021-05-28 09:38:07 -07:00
Vanessasaurus
3cef5663d8
adding json export for spack blame (#23417)
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a `spack blame --json` argument,
along with developer docs for it.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
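For example (a sketch; `zlib` stands in for any package name or file path `spack blame` accepts):

```bash
$ spack blame --json zlib          # emit blame metadata as JSON instead of a table
$ spack blame --json zlib | jq .   # e.g. pretty-print or post-process it programmatically
```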
2021-05-25 12:40:08 -06:00
Vanessasaurus
b44bb952eb
first set of work to allow for saving local results with spack monitor (#23804)
This work will come in two phases. The first, here, is to allow saving a local result
with spack monitor; the second will add a spack monitor command so the user can
run `spack monitor upload`.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-25 11:29:34 -07:00
Robert Cohn
c5389c430b
Fix cross references in inteloneapipackage doc (#23744)
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2021-05-25 09:57:49 -07:00
Tamara Dahlgren
929d1de3e5
Stand-alone/Smoke tests: copy cached test sources to test stage (#23713) 2021-05-25 07:24:32 -07:00
Tamara Dahlgren
bed1644d52
Fix packaging guide table's build system links (#23879) 2021-05-25 07:13:00 +02:00
HDF-EOS Tools Information Center
00963149e1
Fix hyperlink formatting in docs (#23846) 2021-05-25 07:09:07 +02:00
Seth R. Johnson
d8cbd37aaa
Fix makefile filter suggestions (#23856)
Bash has a builtin `fc` that will override the compiler if you use "fc",
so it's better to use the full spack-supplied compiler path.

Additionally, the filter regex in the docs was wrong: it replaced the
entire assignment operation with the RHS.
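A sketch along the lines of the corrected suggestion, using the `FileFilter` helper and the `spack_cc`/`spack_fc` wrapper paths available inside a package's `edit()` method (names per my recollection of the packaging API):

```python
def edit(self, spec, prefix):
    makefile = FileFilter('Makefile')
    # Replace only the right-hand side of each assignment, keeping the variable
    # name, and use the full wrapper paths rather than bare names like "fc".
    makefile.filter(r'^\s*CC\s*=.*', 'CC = ' + spack_cc)
    makefile.filter(r'^\s*FC\s*=.*', 'FC = ' + spack_fc)
```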
2021-05-22 18:47:43 +00:00
Glenn Johnson
71b9e67b3c
Modification to R environment (#23623)
* Modification to R environment

This PR modifies how the R environment is presented, and fixes
installing the standalone Rmath library.

- The Rmath build and install methods are combined into one
- Set parallel=False when installing Rmath
- Remove the run environment that set up variables for libraries and
  headers; these are not really needed and pollute the environment.

* Add setup_run_environment back

- Add back the setup_run_environment with LD_LIBRARY_PATH and
  PKG_CONFIG_PATH.
- Adjust documentation to reflect the current code.
2021-05-21 09:34:15 +02:00
vsoch
f2b362b5b3 adding support to tag a build
This will be useful for running multiple build experiments and organizing them by name

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
2021-05-19 07:01:18 -07:00
Tamara Dahlgren
5bd42d1b31
docs/packaging guide: Reference test stage directory (#23707) 2021-05-18 17:56:38 -07:00
Tamara Dahlgren
0368f8ae51
Updates and format tweaks to the release documentation (#22053) 2021-05-11 10:39:25 -07:00
Tamara Dahlgren
066d33b4b3
Documentation: Refinement of "Checking an installation" (#22210)
There have been a lot of questions and some confusion recently surrounding Spack installation test capabilities so this PR is intended to clean up and refine the documentation for "Checking an installation".

It aims to better distinguish between checks that are performed during an installation (i.e., build-time tests) and those that can be done days and weeks after the software has been installed (i.e., install (or smoke) tests).
2021-05-11 10:37:48 -07:00
Tamara Dahlgren
2450ee0fb0
bugfix: correct force_autoreconf method syntax (#23546) 2021-05-10 19:31:11 +00:00
Scott Wittenburg
91de23ce65 install cmd: --no-add in an env installs w/out concretize and add
In an active, concretized environment, support installing one or more
cli specs only if they are already present in the environment.  The
`--no-add` option is the default for root specs, but optional for
dependency specs.  I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed.  In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
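For example, inside an active, concretized environment:

```bash
$ spack env activate myenv
$ spack install --no-add openmpi   # errors unless openmpi unambiguously matches a spec already in the env
$ spack install libelf             # a dependency-only spec is added as a new root, then installed
```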
2021-05-07 10:07:53 -07:00
Harmen Stoppels
3f4c9aeca7
Read colorization from environment variable, if command line is not set (#23130) 2021-04-28 13:03:25 +00:00
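Assuming the variable is `SPACK_COLOR` with the usual `always`/`never`/`auto` values, usage would look like:

```bash
$ SPACK_COLOR=always spack find | less -R   # keep color even when output is piped
$ SPACK_COLOR=never spack spec hdf5         # disable color entirely
```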
George Hartzell
6d789a5835
docs: be more precise on what spack add ... does (#23204)
This is as much a question as it is a minor fine-tuning of the docs.  I've been known to add things to an environment by editing the `spack.yaml` file directly.  When I read the previous version of this sentence, I was afraid that `spack add` was actually doing *two* things, modifying the `spack.yaml` and updating something else that defined the roots of the Environment.  A bit of experimentation suggests that editing the `spack.yaml` file is sufficient to change the roots.

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-04-23 13:29:19 +00:00
Vanessasaurus
e6de04d149
docs: spack does not have a variant debug for libelf (#23021)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-04-16 09:29:42 +02:00
Vanessasaurus
7f91c1a510
Merge pull request #21930 from vsoch/add/spack-monitor
This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations.  Spack can now contact a monitor server and upload analysis -- even after a build is already done.

Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
  - [x] config args
  - [x] environment variables
  - [x] installed files
  - [x] libabigail

There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.

Additional tests will be added in a future PR.
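A rough sketch of the workflow (only `spack install --monitor` is attested elsewhere in this log; the `spack analyze` subcommand names are my assumption, so consult the docs added here):

```bash
$ spack install --monitor zlib   # report build metadata to a configured spack monitor server
$ spack analyze list-analyzers   # assumed: show the available analyzers
$ spack analyze run zlib         # assumed: run analyzers on an already-installed spec
```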
2021-04-15 00:38:36 -07:00
Tiziano Müller
dee030618f
Documentation: update intel-parallel-studio instructions (#22248)
* Clarify stub compiler definition in compilers.yaml
* Update explanation of why stub compiler definition is needed
* Add note about required module definition when using Spack-installed
  intel-parallel-studio as intel-compiler
* Add suggestion about updating package config preferences based on
  choice of variants when installing intel-parallel-studio to avoid
  reinstallation
2021-04-13 13:31:14 -07:00
Robert Cohn
c8b4414230
[oneapi] fix mkl deps, externally installed, and docs (#22607) 2021-04-07 10:31:08 -06:00
Harmen Stoppels
bbc666a1d2
meson: added variants, changed defaults for the build system (#22715)
- Use debugoptimized as default build type, just like RelWithDebInfo for cmake
- Do not strip by default, and add a default_library variant which conveniently supports both shared and static
2021-04-06 17:57:31 +02:00
Zack Galbreath
7cc2db1b4e
Check against a list of known-broken specs in ci generate (#22690)
* Strip leading slash from S3 key in url_exists()

* Check against a list of known-broken specs in `ci generate`
2021-04-02 17:40:47 -06:00
Harmen Stoppels
69d123a1a0
Document unzip (#22723) 2021-04-02 20:56:24 +00:00
Harmen Stoppels
1db6cd5d16
Make -j flag less exceptional (#22360)
* Make -j flag less exceptional

The -j flag in spack behaves differently from make, ctest, ninja, etc.,
because it caps the number of jobs to an arbitrary number, 16.
Spack will behave like other tools if `spack install` uses a reasonable
default, and `spack install -j <num>` *overrides* that default.

This will be particularly useful for Spack usage outside of a traditional
HPC context and for HPC centers that encourage users to compile on
login nodes with many cores instead of on compute nodes, which has
become increasingly common as individual nodes have more cores.

This maintains the existing default value of min(num_cpus, 16). However, 
as it is right now, Spack does a poor job at determining the number of 
cpus on linux, since it doesn't take cgroups into account. This is
particularly problematic when using distributed builds with slurm. This PR
also introduces `spack.util.cpus.cpus_available()` to consolidate
knowledge on determining the number of available cores, and improves
core detection for linux. This should also improve core detection for Docker/
Kubernetes, which also use cgroups.
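For example (a sketch; `config:build_jobs` is the existing setting that supplies the default):

```bash
$ spack install -j 64 trilinos   # explicitly override the job count for this invocation
$ spack install hdf5             # otherwise the default from config:build_jobs applies
```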
2021-03-30 12:03:50 -07:00
Frédéric Simonis
38841ad746
Add doc for mirror of env (#22525) 2021-03-24 20:55:15 +00:00
Robert Cohn
f57626a7c4
Oneapi packages: update URLs, environment management, and dependencies (#22202)
* Replace URL computation in base IntelOneApiPackage class with
  defining URLs in component packages (this is expected to be
  simpler for now)
* Add component_dir property that all oneAPI component packages must
  define. This property names a directory that should exist after
  installation completes (useful for making sure the install was
  successful) and also defines the search location for the
  component's environment update script.
* Add needed dependencies for components (e.g. intel-oneapi-dnn
  requires intel-oneapi-tbb). The compilers provided by
  intel-oneapi-compilers need some components under certain
  circumstances (e.g. when enabling SYCL support) but these were
  omitted since the libraries should only be linked when a
  dependent package requests that feature
* Remove individual setup_run_environment implementations and use
  IntelOneApiPackage superclass method which sources vars.sh 
  (located in a subdirectory of component_dir)
* Add documentation for IntelOneApiPackge build system

Co-authored-by: Vasily Danilin <vasily.danilin@yandex.ru>
2021-03-22 17:35:45 -07:00
Greg Becker
f4b56620e5
Document cli syntax for environment scopes (#20344)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-03-21 10:14:13 +00:00
Wouter Deconinck
c9ba95cc5c
containerize: fix typo in documentation (#22331)
Before this fix, `spack containerize` complains that `centos/7` is invalid
(should have been `centos:7`)
2021-03-16 21:02:26 +00:00
Glenn Johnson
b5916451fd
Improve R package creation (#21861)
* Improve R package creation

This PR adds the `list_url` attribute to CRAN R packages when using
`spack create`. It also adds the `git` attribute to R Bioconductor
packages upon creation.

* Switch over to using cran/bioc attributes

The cran/bioc entries are set to have the '=' line up with the homepage
entry, but homepage does not need to exist in the package file. If it
does not, that could affect the alignment.

* Do not have to split bioc

* Edit R package documentation

Explain Bioconductor packages and add `cran` and `bioc` attributes.

* Update lib/spack/docs/build_systems/rpackage.rst

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update lib/spack/docs/build_systems/rpackage.rst

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Simplify the cran attribute

The version can be faked so that the cran attribute is simply equal to
the CRAN package name.

* Edit the docs to reflect new `cran` attribute format

* Use the first element of self.versions() for url

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
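A hedged sketch of what a recipe using the simplified attribute might look like (the class name, version, and checksum placeholder are illustrative only):

```python
class RGgplot2(RPackage):
    """Illustrative CRAN package using the new attribute."""

    # `cran` is simply the CRAN package name; homepage, url and list_url
    # are derived from it, so none of them need to be spelled out.
    cran = 'ggplot2'

    version('3.3.5', sha256='<checksum>')   # placeholder checksum
```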
2021-03-05 21:19:15 +00:00
Massimiliano Culpo
10e9e142b7
Bootstrap clingo from sources (#21446)
* Allow the bootstrapping of clingo from sources

Allow python builds with system python as external
for MacOS

* Ensure consistent configuration when bootstrapping clingo

This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.

* Github actions: test clingo with bootstrapping from sources

* Add command to inspect and clean the bootstrap store

 Prevent users from setting the install tree root to the bootstrap store

* clingo: documented how to bootstrap from sources

Co-authored-by: Gregory Becker <becker33@llnl.gov>
2021-03-03 09:37:46 -08:00
Tom Payerle
f5e65e94e6
documentation: correct precedence of included configs in environment spack.yaml (#18663)
fixes #17993
2021-02-19 13:31:47 +00:00
Tom Payerle
b448b639e6
Documentation fix: build_system configure_args for #21760 (#21761)
Corrects the signature for configure_args (and therefore configure_vars)
in documentation on RPackage build system to match the code
See issue #21760
2021-02-18 18:08:48 +00:00
Scott Wittenburg
428f831899
Pipelines: DAG Pruning (#20435)
Pipelines: DAG pruning

During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors.  By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline.  To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument.  To speed up the pipeline generation process, pass the --check-index-only argument.  This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors.  The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.

This change removes the final-stage-rebuild-index block from gitlab-ci section of spack.yaml.  Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set.  Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
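For illustration, the flags described above combine roughly as follows:

```bash
$ spack ci generate --check-index-only --output-file pipeline.yml   # default pruning, index-only checks
$ spack ci generate --no-prune-dag --output-file pipeline.yml       # keep jobs even for up-to-date specs
```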
2021-02-16 09:12:37 -07:00
Tamara Dahlgren
851490bd54
Correct the reference to the staged examples files (#21557) 2021-02-13 08:14:38 -08:00
Chuck Atkins
5a771bc8ad
Introduce a SPACK_PYTHON environment variable (#21222)
The SPACK_PYTHON environment variable can be set to a python interpreter to be
used by the spack command.  This allows the spack command itself to use a
consistent and separate interpreter from whatever python might be used for package
building.
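For example:

```bash
$ export SPACK_PYTHON=/usr/bin/python3   # interpreter used to run the spack command itself
$ spack spec zlib                        # package builds may still use any python spack provides
```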
2021-02-12 10:52:44 -08:00