Pipelines: reproducible builds (#22887)

### Overview

The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes aimed at improving reproducibility are:

1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts.  This concretized environment is both used by the generated child jobs and uploaded as a job artifact for use when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.  
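The generate-and-run pattern from item 2 can be sketched as follows. This is only a sketch: the real `install.sh` contains a fully-argumented `spack install` command computed by `spack ci rebuild`, so a plain `echo` stands in for it here.

```shell
# Sketch of the install.sh pattern: write the single install command to a
# script, run it, and keep the script as an artifact for local reproduction.
cat > install.sh <<'EOF'
#!/bin/sh
# a real job puts a fully-argumented `spack install ...` command here
echo "spack install <computed arguments>"
EOF
chmod +x install.sh
./install.sh
```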

To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:

- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build

#### Some highlights

One consequence of this change will be much smaller pipeline yaml files.  By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.
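To illustrate what goes away: the old pipeline generation inlined each concrete root spec into job variables as a compressed, base64-encoded blob (visible in the `ci.py` diff below, which removes the `zlib`/`base64` encoding). The sketch below mimics that encoding with `gzip` standing in for zlib and a tiny stand-in string for a full spec yaml document:

```shell
# Round-trip a stand-in "spec yaml" through the kind of inline encoding the
# old pipeline generation used for every root spec (gzip stands in for zlib).
spec_yaml="name: zlib"
encoded=$(printf '%s' "${spec_yaml}" | gzip -c | base64 | tr -d '\n')
# decoding recovers the full document, showing each job variable carried
# an entire spec rather than a reference into a shared spack.lock
decoded=$(printf '%s' "${encoded}" | base64 -d | gzip -dc)
echo "${decoded}"
```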

Additionally, `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace.  With debug logging turned on, this can result in the trace getting truncated once it exceeds the maximum amount of log output gitlab allows.  If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as each generated job now exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.
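A custom job script could capture verbose output like this. This is a sketch: the `jobs_scratch_dir/user_data` path assumes the default artifacts root described below, and `echo` stands in for a real command such as `spack -d ci rebuild`:

```shell
# Tee command output into the user_data artifact directory so it survives
# as a job artifact even if the gitlab trace is truncated.
user_data="${CI_PROJECT_DIR:-$(pwd)}/jobs_scratch_dir/user_data"
mkdir -p "${user_data}"
# replace `echo ...` with the real command, e.g. `spack -d ci rebuild`
echo "build output" 2>&1 | tee "${user_data}/build_output.log"
```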

There are some changes to be aware of in how pipelines should be set up after this PR:

#### Pipeline generation

Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option, `--artifacts-root`, inside which it creates a `concrete_environment` directory to hold the lockfile.  This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts.  If you do not provide `--artifacts-root`, the default is to create a `jobs_scratch_dir` within `CI_PROJECT_DIR` (a gitlab predefined environment variable), or within your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:

```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
     - spack env activate --without-view .
     - spack ci generate --check-index-only
+      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
   artifacts:
     paths:
-      - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+      - "${CI_PROJECT_DIR}/jobs_scratch_dir"
   tags: ["spack", "public", "medium", "x86_64"]
   interruptible: true
```

Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`.  This way everything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.
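The default artifacts-root resolution described above amounts to the following shell logic (a sketch of the documented behavior, not spack's internal code):

```shell
# Default artifacts root when --artifacts-root is not given:
# $CI_PROJECT_DIR/jobs_scratch_dir if CI_PROJECT_DIR is set,
# otherwise <current working directory>/jobs_scratch_dir.
artifacts_root="${CI_PROJECT_DIR:-$(pwd)}/jobs_scratch_dir"
echo "${artifacts_root}"
```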

#### Rebuild jobs

Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts.  When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs.  The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`.  The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.

When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment.   If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate.  No other changes to existing custom rebuild scripts should be required as a result of this PR. 
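A custom `script` following that contract might look like the sketch below. Here a temporary directory with stub files stands in for the artifact-provided environment, and the final `echo` marks where `spack env activate` and `spack ci rebuild` would run in a real job:

```shell
# Stand-in for the concrete environment directory passed via artifacts;
# in a real job SPACK_CONCRETE_ENV_DIR is set in the generated pipeline yaml.
SPACK_CONCRETE_ENV_DIR="$(mktemp -d)"
touch "${SPACK_CONCRETE_ENV_DIR}/spack.yaml" "${SPACK_CONCRETE_ENV_DIR}/spack.lock"

# A custom rebuild script should change into that directory and activate it.
cd "${SPACK_CONCRETE_ENV_DIR}"
# In a real job the next line would be:
#   spack env activate --without-view . && spack ci rebuild
echo "activating environment in ${PWD}"
```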

As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI.  The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`.  If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:

```
To reproduce this build locally, run:
  spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
  export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```

When run locally, the `spack ci reproduce-build` command shown above downloads and processes the job artifacts from gitlab, then prints instructions you can copy and paste to run a local reproducer of the CI job.

This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.

This PR relies on:

~~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
Scott Wittenburg 2021-05-28 10:38:07 -06:00, committed by GitHub
parent 4262de6a32, commit 91f66ea0a4
15 changed files with 1674 additions and 554 deletions

lib/spack/docs/pipelines.rst

@@ -30,52 +30,18 @@ at least one `runner <https://docs.gitlab.com/runner/>`_. Then the basic steps
 for setting up a build pipeline are as follows:

 #. Create a repository on your gitlab instance

-#. Add a ``spack.yaml`` at the root containing your pipeline environment (see
-   below for details)
+#. Add a ``spack.yaml`` at the root containing your pipeline environment

 #. Add a ``.gitlab-ci.yml`` at the root containing two jobs (one to generate
-   the pipeline dynamically, and one to run the generated jobs), similar to
-   this one:
-
-   .. code-block:: yaml
-
-      stages: [generate, build]
-
-      generate-pipeline:
-        stage: generate
-        tags:
-          - <custom-tag>
-        script:
-          - spack env activate --without-view .
-          - spack ci generate
-            --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
-        artifacts:
-          paths:
-            - "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
-
-      build-jobs:
-        stage: build
-        trigger:
-          include:
-            - artifact: "jobs_scratch_dir/pipeline.yml"
-              job: generate-pipeline
-          strategy: depend
-
-#. Add any secrets required by the CI process to environment variables using the
-   CI web ui
+   the pipeline dynamically, and one to run the generated jobs).

 #. Push a commit containing the ``spack.yaml`` and ``.gitlab-ci.yml`` mentioned above
    to the gitlab repository

-The ``<custom-tag>``, above, is used to pick one of your configured runners to
-run the pipeline generation phase (this is implemented in the ``spack ci generate``
-command, which assumes the runner has an appropriate version of spack installed
-and configured for use). Of course, there are many ways to customize the process.
-You can configure CDash reporting on the progress of your builds, set up S3 buckets
-to mirror binaries built by the pipeline, clone a custom spack repository/ref for
-use by the pipeline, and more.
+See the :ref:`functional_example` section for a minimal working example. See also
+the :ref:`custom_workflow` section for a link to an example of a custom workflow
+based on spack pipelines.

-While it is possible to set up pipelines on gitlab.com, the builds there are
-limited to 60 minutes and generic hardware. It is also possible to
+While it is possible to set up pipelines on gitlab.com, as illustrated above, the
+builds there are limited to 60 minutes and generic hardware. It is also possible to
 `hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
 Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
 or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
@@ -88,21 +54,127 @@ dynamically generated
 Note that the use of dynamic child pipelines requires running Gitlab version
 ``>= 12.9``.

+.. _functional_example:
+
+------------------
+Functional Example
+------------------
+
+The simplest fully functional standalone example of a working pipeline can be
+examined live at this example `project <https://gitlab.com/scott.wittenburg/spack-pipeline-demo>`_
+on gitlab.com.
+
+Here's the ``.gitlab-ci.yml`` file from that example that builds and runs the
+pipeline:
+
+.. code-block:: yaml
+
+   stages: [generate, build]
+
+   variables:
+     SPACK_REPO: https://github.com/scottwittenburg/spack.git
+     SPACK_REF: pipelines-reproducible-builds
+
+   generate-pipeline:
+     stage: generate
+     tags:
+       - docker
+     image:
+       name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
+       entrypoint: [""]
+     before_script:
+       - git clone ${SPACK_REPO}
+       - pushd spack && git checkout ${SPACK_REF} && popd
+       - . "./spack/share/spack/setup-env.sh"
+     script:
+       - spack env activate --without-view .
+       - spack -d ci generate
+         --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
+         --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
+     artifacts:
+       paths:
+         - "${CI_PROJECT_DIR}/jobs_scratch_dir"
+
+   build-jobs:
+     stage: build
+     trigger:
+       include:
+         - artifact: "jobs_scratch_dir/pipeline.yml"
+           job: generate-pipeline
+       strategy: depend
+
+The key thing to note above is that there are two jobs: The first job to run,
+``generate-pipeline``, runs the ``spack ci generate`` command to generate a
+dynamic child pipeline and write it to a yaml file, which is then picked up
+by the second job, ``build-jobs``, and used to trigger the downstream pipeline.
+
+And here's the spack environment built by the pipeline represented as a
+``spack.yaml`` file:
+
+.. code-block:: yaml
+
+   spack:
+     view: false
+     concretization: separately
+
+     definitions:
+     - pkgs:
+       - zlib
+       - bzip2
+     - arch:
+       - '%gcc@7.5.0 arch=linux-ubuntu18.04-x86_64'
+
+     specs:
+     - matrix:
+       - - $pkgs
+       - - $arch
+
+     mirrors: { "mirror": "s3://spack-public/mirror" }
+
+     gitlab-ci:
+       before_script:
+         - git clone ${SPACK_REPO}
+         - pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
+         - . "./spack/share/spack/setup-env.sh"
+       script:
+         - pushd ${SPACK_CONCRETE_ENV_DIR} && spack env activate --without-view . && popd
+         - spack -d ci rebuild
+       mappings:
+         - match: ["os=ubuntu18.04"]
+           runner-attributes:
+             image:
+               name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
+               entrypoint: [""]
+             tags:
+               - docker
+       enable-artifacts-buildcache: True
+       rebuild-index: False
+
+The elements of this file important to spack ci pipelines are described in more
+detail below, but there are a couple of things to note about the above working
+example:
+
+Normally ``enable-artifacts-buildcache`` is not recommended in production as it
+results in large binary artifacts getting transferred back and forth between
+gitlab and the runners. But in this example on gitlab.com, where there is no
+shared, persistent file system, and where no secrets are stored for giving
+permission to write to an S3 bucket, ``enable-artifacts-buildcache`` is the only
+way to propagate binaries from jobs to their dependents.
+
+Also, it is usually a good idea to let the pipeline generate a final "rebuild the
+buildcache index" job, so that subsequent pipeline generation can quickly determine
+which specs are up to date and which need to be rebuilt (it's a good idea for other
+reasons as well, but those are out of scope for this discussion). In this case we
+have disabled it (using ``rebuild-index: False``) because the index would only be
+generated in the artifacts mirror anyway, and consequently would not be available
+during subsequent pipeline runs.
+
 -----------------------------------
 Spack commands supporting pipelines
 -----------------------------------

-Spack provides a command ``ci`` with two sub-commands: ``spack ci generate`` generates
-a pipeline (a .gitlab-ci.yml file) from a spack environment, and ``spack ci rebuild``
-checks a spec against a remote mirror and possibly rebuilds it from source and updates
-the binary mirror with the latest built package. Both ``spack ci ...`` commands must
-be run from within the same environment, as each one makes use of the environment for
-different purposes. Additionally, some options to the commands (or conditions present
-in the spack environment file) may require particular environment variables to be
-set in order to function properly. Examples of these are typically secrets
-needed for pipeline operation that should not be visible in a spack environment
-file. These environment variables are described in more detail
-:ref:`ci_environment_variables`.
+Spack provides a ``ci`` command with a few sub-commands supporting spack
+ci pipelines. These commands are covered in more detail in this section.

 .. _cmd-spack-ci:
@@ -121,6 +193,17 @@ pipeline jobs.
 Concretizes the specs in the active environment, stages them (as described in
 :ref:`staging_algorithm`), and writes the resulting ``.gitlab-ci.yml`` to disk.

+During concretization of the environment, ``spack ci generate`` also writes a
+``spack.lock`` file which is then provided to generated child jobs and made
+available in all generated job artifacts to aid in reproducing failed builds
+in a local environment. This means there are two artifacts that need to be
+exported in your pipeline generation job (defined in your ``.gitlab-ci.yml``).
+The first is the output yaml file of ``spack ci generate``, and the other is
+the directory containing the concrete environment files. In the
+:ref:`functional_example` section, we only mentioned one path in the
+``artifacts`` ``paths`` list because we used ``--artifacts-root`` as the
+top level directory containing both the generated pipeline yaml and the
+concrete environment.
+
 Using ``--prune-dag`` or ``--no-prune-dag`` configures whether or not jobs are
 generated for specs that are already up to date on the mirror. If enabling

@@ -128,6 +211,16 @@ DAG pruning using ``--prune-dag``, more information may be required in your
 ``spack.yaml`` file, see the :ref:`noop_jobs` section below regarding
 ``service-job-attributes``.

+The optional ``--check-index-only`` argument can be used to speed up pipeline
+generation by telling spack to consider only remote buildcache indices when
+checking the remote mirror to determine if each spec in the DAG is up to date
+or not. The default behavior is for spack to fetch the index and check it,
+but if the spec is not found in the index, to also perform a direct check for
+the spec on the mirror. If the remote buildcache index is out of date, which
+can easily happen if it is not updated frequently, this behavior ensures that
+spack has a way to know for certain about the status of any concrete spec on
+the remote mirror, but can slow down pipeline generation significantly.
+
 The ``--optimize`` argument is experimental and runs the generated pipeline
 document through a series of optimization passes designed to reduce the size
 of the generated file.
@@ -143,19 +236,64 @@ The optional ``--output-file`` argument should be an absolute path (including
 file name) to the generated pipeline, and if not given, the default is
 ``./.gitlab-ci.yml``.

+While optional, the ``--artifacts-root`` argument is used to determine where
+the concretized environment directory should be located. This directory will
+be created by ``spack ci generate`` and will contain the ``spack.yaml`` and
+generated ``spack.lock`` which are then passed to all child jobs as an
+artifact. This directory will also be the root directory for all artifacts
+generated by jobs in the pipeline.
+
 .. _cmd-spack-ci-rebuild:

-^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^
 ``spack ci rebuild``
-^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^

-This sub-command is responsible for ensuring a single spec from the release
-environment is up to date on the remote mirror configured in the environment,
-and as such, corresponds to a single job in the ``.gitlab-ci.yml`` file.
+The purpose of ``spack ci rebuild`` is straightforward: take its assigned
+spec job, check whether the target mirror already has a binary for that spec,
+and if not, build the spec from source and push the binary to the mirror. To
+accomplish this in a reproducible way, the sub-command prepares a ``spack install``
+command line to build a single spec in the DAG, saves that command in a
+shell script, ``install.sh``, in the current working directory, and then runs
+it to install the spec. The shell script is also exported as an artifact to
+aid in reproducing the build outside of the CI environment.

-Rather than taking command-line arguments, this sub-command expects information
-to be communicated via environment variables, which will typically come via the
-``.gitlab-ci.yml`` job as ``variables``.
+If it was necessary to install the spec from source, ``spack ci rebuild`` will
+also subsequently create a binary package for the spec and try to push it to the
+mirror.
+
+The ``spack ci rebuild`` sub-command mainly expects its "input" to come either
+from environment variables or from the ``gitlab-ci`` section of the ``spack.yaml``
+environment file. There are two main sources of the environment variables: some
+are written into ``.gitlab-ci.yml`` by ``spack ci generate``, and some are
+provided by the GitLab CI runtime.
+
+.. _cmd-spack-ci-rebuild-index:
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+``spack ci rebuild-index``
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This is a convenience command to rebuild the buildcache index associated with
+the mirror in the active, gitlab-enabled environment (specifying the mirror
+url or name is not required).
+
+.. _cmd-spack-ci-reproduce-build:
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+``spack ci reproduce-build``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Given the url to a gitlab pipeline rebuild job, downloads and unzips the
+artifacts into a local directory (which can be specified with the optional
+``--working-dir`` argument), then finds the target job in the generated
+pipeline to extract details about how it was run. Assuming the job used a
+docker image, the command prints a ``docker run`` command line and some basic
+instructions on how to reproduce the build locally.
+
+Note that jobs failing in the pipeline will print messages giving the
+arguments you can pass to ``spack ci reproduce-build`` in order to reproduce
+a particular build locally.

 ------------------------------------
 A pipeline-enabled spack environment
@@ -364,8 +502,9 @@ scheduled on that runner. This allows users to do any custom preparation or
 cleanup tasks that fit their particular workflow, as well as completely
 customize the rebuilding of a spec if they so choose. Spack will not generate
 a ``before_script`` or ``after_script`` for jobs, but if you do not provide
-a custom ``script``, spack will generate one for you that assumes your
-``spack.yaml`` is at the root of the repository, activates that environment for
+a custom ``script``, spack will generate one for you that assumes the concrete
+environment directory is located within your ``--artifacts-root`` (or if not
+provided, within your ``$CI_PROJECT_DIR``), activates that environment for
 you, and invokes ``spack ci rebuild``.

 .. _staging_algorithm:
@@ -490,14 +629,15 @@ Using a custom spack in your pipeline
 If your runners will not have a version of spack ready to invoke, or if for some
 other reason you want to use a custom version of spack to run your pipelines,
 this section provides an example of how you could take advantage of
-user-provided pipeline scripts to accomplish this fairly simply. First, you
-could use the GitLab user interface to create CI environment variables
-containing the url and branch or tag you want to use (calling them, for
-example, ``SPACK_REPO`` and ``SPACK_REF``), then refer to those in a custom shell
-script invoked both from your pipeline generation job, as well as in your rebuild
+user-provided pipeline scripts to accomplish this fairly simply. First, consider
+specifying the source and version of spack you want to use with variables, either
+written directly into your ``.gitlab-ci.yml``, or provided by CI variables defined
+in the gitlab UI or from some upstream pipeline. Let's say you choose the variable
+names ``SPACK_REPO`` and ``SPACK_REF`` to refer to the particular fork of spack
+and branch you want for running your pipeline. You can then refer to those in a
+custom shell script invoked both from your pipeline generation job and your rebuild
 jobs. Here's the ``generate-pipeline`` job from the top of this document,
-updated to invoke a custom shell script that will clone and source a custom
-spack:
+updated to clone and source a custom spack:

 .. code-block:: yaml
@@ -505,34 +645,24 @@ spack:
     tags:
       - <some-other-tag>
     before_script:
-      - ./cloneSpack.sh
+      - git clone ${SPACK_REPO}
+      - pushd spack && git checkout ${SPACK_REF} && popd
+      - . "./spack/share/spack/setup-env.sh"
     script:
       - spack env activate --without-view .
-      - spack ci generate
+      - spack ci generate --check-index-only
+        --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
         --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
     after_script:
       - rm -rf ./spack
     artifacts:
       paths:
-        - "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
+        - "${CI_PROJECT_DIR}/jobs_scratch_dir"

-And the ``cloneSpack.sh`` script could contain:
-
-.. code-block:: bash
-
-   #!/bin/bash
-
-   git clone ${SPACK_REPO}
-   pushd ./spack
-   git checkout ${SPACK_REF}
-   popd
-   . "./spack/share/spack/setup-env.sh"
-   spack --version
-
-Finally, you would also want your generated rebuild jobs to clone that version
-of spack, so you would update your ``spack.yaml`` from above as follows:
+That takes care of getting the desired version of spack when your pipeline is
+generated by ``spack ci generate``. You also want your generated rebuild jobs
+(all of them) to clone that version of spack, so next you would update your
+``spack.yaml`` from above as follows:

 .. code-block:: yaml
@@ -547,17 +677,17 @@ of spack, so you would update your ``spack.yaml`` from above as follows:
         - spack-kube
       image: spack/ubuntu-bionic
       before_script:
-        - ./cloneSpack.sh
+        - git clone ${SPACK_REPO}
+        - pushd spack && git checkout ${SPACK_REF} && popd
+        - . "./spack/share/spack/setup-env.sh"
       script:
-        - spack env activate --without-view .
+        - spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
         - spack -d ci rebuild
       after_script:
         - rm -rf ./spack

 Now all of the generated rebuild jobs will use the same shell script to clone
-spack before running their actual workload. Note in the above example the
-provision of a custom ``script`` section. The reason for this is to run
-``spack ci rebuild`` in debug mode to get more information when builds fail.
+spack before running their actual workload.
 Now imagine you have long pipelines with many specs to be built, and you
 are pointing to a spack repository and branch that has a tendency to change

@@ -571,13 +701,32 @@ simply contains the human-readable value produced by ``spack -V`` at pipeline
 generation time, the ``SPACK_CHECKOUT_VERSION`` variable can be used in a
 ``git checkout`` command to make sure all child jobs checkout the same version
 of spack used to generate the pipeline. To take advantage of this, you could
-simply replace ``git checkout ${SPACK_REF}`` in the example ``cloneSpack.sh``
-script above with ``git checkout ${SPACK_CHECKOUT_VERSION}``.
+simply replace ``git checkout ${SPACK_REF}`` in the example ``spack.yaml``
+above with ``git checkout ${SPACK_CHECKOUT_VERSION}``.

 On the other hand, if you're pointing to a spack repository and branch under your
 control, there may be no benefit in using the captured ``SPACK_CHECKOUT_VERSION``,
-and you can instead just clone using the project CI variables you set (in the
-earlier example these were ``SPACK_REPO`` and ``SPACK_REF``).
+and you can instead just clone using the variables you define (``SPACK_REPO``
+and ``SPACK_REF`` in the example above).
+
+.. _custom_workflow:
+
+---------------
+Custom Workflow
+---------------
+
+There are many ways to take advantage of spack CI pipelines to achieve custom
+workflows for building packages or other resources. One example of a custom
+pipelines workflow is the spack tutorial container
+`repo <https://github.com/spack/spack-tutorial-container>`_. This project uses
+GitHub (for source control), GitLab (for automated spack ci pipelines), and
+DockerHub automated builds to build Docker images (complete with a fully populated
+binary mirror) used by instructors and participants of a spack tutorial.
+Take a look at the repo to see how it is accomplished using spack CI pipelines,
+and see the following markdown files at the root of the repository for
+descriptions and documentation describing the workflow: ``DESCRIPTION.md``,
+``DOCKERHUB_SETUP.md``, ``GITLAB_SETUP.md``, and ``UPDATING.md``.
 .. _ci_environment_variables:

@@ -594,28 +743,33 @@ environment variables used by the pipeline infrastructure are described here.
 ^^^^^^^^^^^^^^^^^
 AWS_ACCESS_KEY_ID
 ^^^^^^^^^^^^^^^^^

-Needed when binary mirror is an S3 bucket.
+Optional. Only needed when binary mirror is an S3 bucket.

 ^^^^^^^^^^^^^^^^^^^^^
 AWS_SECRET_ACCESS_KEY
 ^^^^^^^^^^^^^^^^^^^^^

-Needed when binary mirror is an S3 bucket.
+Optional. Only needed when binary mirror is an S3 bucket.

 ^^^^^^^^^^^^^^^
 S3_ENDPOINT_URL
 ^^^^^^^^^^^^^^^

-Needed when binary mirror is an S3 bucket that is *not* on AWS.
+Optional. Only needed when binary mirror is an S3 bucket that is *not* on AWS.

 ^^^^^^^^^^^^^^^^^
 CDASH_AUTH_TOKEN
 ^^^^^^^^^^^^^^^^^

-Needed in order to report build groups to CDash.
+Optional. Only needed in order to report build groups to CDash.

 ^^^^^^^^^^^^^^^^^
 SPACK_SIGNING_KEY
 ^^^^^^^^^^^^^^^^^

-Needed to sign/verify binary packages from the remote binary mirror.
+Optional. Only needed if you want ``spack ci rebuild`` to trust the key you
+store in this variable, in which case, it will subsequently be used to sign and
+verify binary packages (when installing or creating buildcaches). You could
+also have already trusted a key spack knows about, or if no key is present anywhere,
+spack will install specs using ``--no-check-signature`` and create buildcaches
+using ``-u`` (for unsigned binaries).

lib/spack/spack/ci.py

@ -10,8 +10,9 @@
import os import os
import re import re
import shutil import shutil
import stat
import tempfile import tempfile
import zlib import zipfile
from six import iteritems from six import iteritems
from six.moves.urllib.error import HTTPError, URLError from six.moves.urllib.error import HTTPError, URLError
@ -19,6 +20,7 @@
from six.moves.urllib.request import build_opener, HTTPHandler, Request from six.moves.urllib.request import build_opener, HTTPHandler, Request
import llnl.util.tty as tty import llnl.util.tty as tty
import llnl.util.filesystem as fs
import spack import spack
import spack.binary_distribution as bindist import spack.binary_distribution as bindist
@@ -27,10 +29,12 @@
 import spack.config as cfg
 import spack.environment as ev
 from spack.error import SpackError
-import spack.hash_types as ht
 import spack.main
+import spack.mirror
+import spack.paths
 import spack.repo
 from spack.spec import Spec
+import spack.util.executable as exe
 import spack.util.spack_yaml as syaml
 import spack.util.web as web_util
 import spack.util.gpg as gpg_util
@@ -197,10 +201,7 @@ def format_root_spec(spec, main_phase, strip_compiler):
         return '{0}@{1} arch={2}'.format(
             spec.name, spec.version, spec.architecture)
     else:
-        spec_yaml = spec.to_yaml(hash=ht.build_hash).encode('utf-8')
-        return str(base64.b64encode(zlib.compress(spec_yaml)).decode('utf-8'))
-        # return '{0}@{1}%{2} arch={3}'.format(
-        #     spec.name, spec.version, spec.compiler, spec.architecture)
+        return spec.build_hash()


 def spec_deps_key(s):
@@ -513,28 +514,14 @@ def format_job_needs(phase_name, strip_compilers, dep_jobs,
     return needs_list


-def add_pr_mirror(url):
-    cfg_scope = cfg.default_modify_scope()
-    mirrors = cfg.get('mirrors', scope=cfg_scope)
-    items = [(n, u) for n, u in mirrors.items()]
-    items.insert(0, ('ci_pr_mirror', url))
-    cfg.set('mirrors', syaml.syaml_dict(items), scope=cfg_scope)
-
-
-def remove_pr_mirror():
-    cfg_scope = cfg.default_modify_scope()
-    mirrors = cfg.get('mirrors', scope=cfg_scope)
-    mirrors.pop('ci_pr_mirror')
-    cfg.set('mirrors', mirrors, scope=cfg_scope)
-
-
-def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
-                            check_index_only=False, run_optimizer=False,
-                            use_dependencies=False):
-    # FIXME: What's the difference between one that opens with 'spack'
-    # and one that opens with 'env'? This will only handle the former.
+def generate_gitlab_ci_yaml(env, print_summary, output_file,
+                            prune_dag=False, check_index_only=False,
+                            run_optimizer=False, use_dependencies=False,
+                            artifacts_root=None):
     with spack.concretize.disable_compiler_existence_check():
-        env.concretize()
+        with env.write_transaction():
+            env.concretize()
+            env.write()

     yaml_root = ev.config_dict(env.yaml)
@@ -559,6 +546,9 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
         tty.verbose("Using CDash auth token from environment")
         cdash_auth_token = os.environ.get('SPACK_CDASH_AUTH_TOKEN')

+    generate_job_name = os.environ.get('CI_JOB_NAME', None)
+    parent_pipeline_id = os.environ.get('CI_PIPELINE_ID', None)
+
     is_pr_pipeline = (
         os.environ.get('SPACK_IS_PR_PIPELINE', '').lower() == 'true'
     )
@@ -574,6 +564,7 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
     ci_mirrors = yaml_root['mirrors']
     mirror_urls = [url for url in ci_mirrors.values()]
+    remote_mirror_url = mirror_urls[0]

     # Check for a list of "known broken" specs that we should not bother
     # trying to build.
@@ -624,7 +615,32 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
     # Add this mirror if it's enabled, as some specs might be up to date
     # here and thus not need to be rebuilt.
     if pr_mirror_url:
-        add_pr_mirror(pr_mirror_url)
+        spack.mirror.add(
+            'ci_pr_mirror', pr_mirror_url, cfg.default_modify_scope())
+
+    pipeline_artifacts_dir = artifacts_root
+    if not pipeline_artifacts_dir:
+        proj_dir = os.environ.get('CI_PROJECT_DIR', os.getcwd())
+        pipeline_artifacts_dir = os.path.join(proj_dir, 'jobs_scratch_dir')
+
+    pipeline_artifacts_dir = os.path.abspath(pipeline_artifacts_dir)
+    concrete_env_dir = os.path.join(
+        pipeline_artifacts_dir, 'concrete_environment')
+
+    # Now that we've added the mirrors we know about, they should be properly
+    # reflected in the environment manifest file, so copy that into the
+    # concrete environment directory, along with the spack.lock file.
+    if not os.path.exists(concrete_env_dir):
+        os.makedirs(concrete_env_dir)
+    shutil.copyfile(env.manifest_path,
+                    os.path.join(concrete_env_dir, 'spack.yaml'))
+    shutil.copyfile(env.lock_path,
+                    os.path.join(concrete_env_dir, 'spack.lock'))
+
+    job_log_dir = os.path.join(pipeline_artifacts_dir, 'logs')
+    job_repro_dir = os.path.join(pipeline_artifacts_dir, 'reproduction')
+    local_mirror_dir = os.path.join(pipeline_artifacts_dir, 'mirror')
+    user_artifacts_dir = os.path.join(pipeline_artifacts_dir, 'user_data')
+
     # Speed up staging by first fetching binary indices from all mirrors
     # (including the per-PR mirror we may have just added above).
@@ -641,7 +657,7 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
     finally:
         # Clean up PR mirror if enabled
         if pr_mirror_url:
-            remove_pr_mirror()
+            spack.mirror.remove('ci_pr_mirror', cfg.default_modify_scope())

     all_job_names = []
     output_object = {}
@@ -705,10 +721,16 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
                 except AttributeError:
                     image_name = build_image

-            job_script = [
-                'spack env activate --without-view .',
-                'spack ci rebuild',
-            ]
+            job_script = ['spack env activate --without-view .']
+
+            if artifacts_root:
+                job_script.insert(0, 'cd {0}'.format(concrete_env_dir))
+
+            job_script.extend([
+                'spack ci rebuild --prepare',
+                './install.sh'
+            ])
+
             if 'script' in runner_attribs:
                 job_script = [s for s in runner_attribs['script']]
@@ -735,9 +757,9 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
             job_vars = {
                 'SPACK_ROOT_SPEC': format_root_spec(
                     root_spec, main_phase, strip_compilers),
+                'SPACK_JOB_SPEC_DAG_HASH': release_spec.dag_hash(),
                 'SPACK_JOB_SPEC_PKG_NAME': release_spec.name,
-                'SPACK_COMPILER_ACTION': compiler_action,
-                'SPACK_IS_PR_PIPELINE': str(is_pr_pipeline),
+                'SPACK_COMPILER_ACTION': compiler_action
             }

             job_dependencies = []
@@ -836,6 +858,12 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
                 if prune_dag and not rebuild_spec:
                     continue

+                if artifacts_root:
+                    job_dependencies.append({
+                        'job': generate_job_name,
+                        'pipeline': '{0}'.format(parent_pipeline_id)
+                    })
+
             job_vars['SPACK_SPEC_NEEDS_REBUILD'] = str(rebuild_spec)

             if enable_cdash_reporting:
@@ -856,12 +884,14 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
             variables.update(job_vars)

             artifact_paths = [
-                'jobs_scratch_dir',
-                'cdash_report',
+                job_log_dir,
+                job_repro_dir,
+                user_artifacts_dir
             ]

             if enable_artifacts_buildcache:
-                bc_root = 'local_mirror/build_cache'
+                bc_root = os.path.join(
+                    local_mirror_dir, 'build_cache')
                 artifact_paths.extend([os.path.join(bc_root, p) for p in [
                     bindist.tarball_name(release_spec, '.spec.yaml'),
                     bindist.tarball_name(release_spec, '.cdashid'),
@@ -987,6 +1017,11 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
             ]
             final_job['when'] = 'always'

+            if artifacts_root:
+                final_job['variables'] = {
+                    'SPACK_CONCRETE_ENV_DIR': concrete_env_dir
+                }
+
             output_object['rebuild-index'] = final_job

     output_object['stages'] = stage_names
@@ -1007,8 +1042,15 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False,
             version_to_clone = spack_version

         output_object['variables'] = {
+            'SPACK_ARTIFACTS_ROOT': pipeline_artifacts_dir,
+            'SPACK_CONCRETE_ENV_DIR': concrete_env_dir,
             'SPACK_VERSION': spack_version,
             'SPACK_CHECKOUT_VERSION': version_to_clone,
+            'SPACK_REMOTE_MIRROR_URL': remote_mirror_url,
+            'SPACK_JOB_LOG_DIR': job_log_dir,
+            'SPACK_JOB_REPRO_DIR': job_repro_dir,
+            'SPACK_LOCAL_MIRROR_DIR': local_mirror_dir,
+            'SPACK_IS_PR_PIPELINE': str(is_pr_pipeline)
         }

         if pr_mirror_url:
@@ -1131,7 +1173,8 @@ def configure_compilers(compiler_action, scope=None):
     return None


-def get_concrete_specs(root_spec, job_name, related_builds, compiler_action):
+def get_concrete_specs(env, root_spec, job_name, related_builds,
+                       compiler_action):
     spec_map = {
         'root': None,
         'deps': {},
@@ -1153,8 +1196,7 @@ def get_concrete_specs(root_spec, job_name, related_builds, compiler_action):
        # again.  The reason we take this path in the first case (bootstrapped
        # compiler), is that we can't concretize a spec at this point if we're
        # going to ask spack to "install_missing_compilers".
-        concrete_root = Spec.from_yaml(
-            str(zlib.decompress(base64.b64decode(root_spec)).decode('utf-8')))
+        concrete_root = env.specs_by_hash[root_spec]

         spec_map['root'] = concrete_root
         spec_map[job_name] = concrete_root[job_name]
@ -1205,7 +1247,7 @@ def register_cdash_build(build_name, base_url, project, site, track):
def relate_cdash_builds(spec_map, cdash_base_url, job_build_id, cdash_project, def relate_cdash_builds(spec_map, cdash_base_url, job_build_id, cdash_project,
cdashids_mirror_url): cdashids_mirror_urls):
if not job_build_id: if not job_build_id:
return return
@ -1221,7 +1263,19 @@ def relate_cdash_builds(spec_map, cdash_base_url, job_build_id, cdash_project,
for dep_pkg_name in dep_map: for dep_pkg_name in dep_map:
tty.debug('Fetching cdashid file for {0}'.format(dep_pkg_name)) tty.debug('Fetching cdashid file for {0}'.format(dep_pkg_name))
dep_spec = dep_map[dep_pkg_name] dep_spec = dep_map[dep_pkg_name]
dep_build_id = read_cdashid_from_mirror(dep_spec, cdashids_mirror_url) dep_build_id = None
for url in cdashids_mirror_urls:
try:
if url:
dep_build_id = read_cdashid_from_mirror(dep_spec, url)
break
except web_util.SpackWebError:
tty.debug('Did not find cdashid for {0} on {1}'.format(
dep_pkg_name, url))
else:
raise SpackError('Did not find cdashid for {0} anywhere'.format(
dep_pkg_name))
payload = { payload = {
"project": cdash_project, "project": cdash_project,
@ -1335,3 +1389,310 @@ def copy_stage_logs_to_artifacts(job_spec, job_log_dir):
msg = ('Unable to copy build logs from stage to artifacts ' msg = ('Unable to copy build logs from stage to artifacts '
'due to exception: {0}').format(inst) 'due to exception: {0}').format(inst)
tty.error(msg) tty.error(msg)
def download_and_extract_artifacts(url, work_dir):
tty.msg('Fetching artifacts from: {0}\n'.format(url))
headers = {
'Content-Type': 'application/zip',
}
token = os.environ.get('GITLAB_PRIVATE_TOKEN', None)
if token:
headers['PRIVATE-TOKEN'] = token
opener = build_opener(HTTPHandler)
request = Request(url, headers=headers)
request.get_method = lambda: 'GET'
response = opener.open(request)
response_code = response.getcode()
if response_code != 200:
msg = 'Error response code ({0}) in reproduce_ci_job'.format(
response_code)
raise SpackError(msg)
artifacts_zip_path = os.path.join(work_dir, 'artifacts.zip')
if not os.path.exists(work_dir):
os.makedirs(work_dir)
with open(artifacts_zip_path, 'wb') as out_file:
shutil.copyfileobj(response, out_file)
zip_file = zipfile.ZipFile(artifacts_zip_path)
zip_file.extractall(work_dir)
zip_file.close()
os.remove(artifacts_zip_path)
def get_spack_info():
git_path = os.path.join(spack.paths.prefix, ".git")
if os.path.exists(git_path):
git = exe.which("git")
if git:
with fs.working_dir(spack.paths.prefix):
git_log = git("log", "-1",
output=str, error=os.devnull,
fail_on_error=False)
return git_log
return 'no git repo, use spack {0}'.format(spack.spack_version)
def setup_spack_repro_version(repro_dir, checkout_commit, merge_commit=None):
# figure out the path to the spack git version being used for the
# reproduction
print('checkout_commit: {0}'.format(checkout_commit))
print('merge_commit: {0}'.format(merge_commit))
dot_git_path = os.path.join(spack.paths.prefix, ".git")
if not os.path.exists(dot_git_path):
tty.error('Unable to find the path to your local spack clone')
return False
spack_git_path = spack.paths.prefix
git = exe.which("git")
if not git:
tty.error("reproduction of pipeline job requires git")
return False
# Check if we can find the tested commits in your local spack repo
with fs.working_dir(spack_git_path):
git("log", "-1", checkout_commit, output=str, error=os.devnull,
fail_on_error=False)
if git.returncode != 0:
tty.error('Missing commit: {0}'.format(checkout_commit))
return False
if merge_commit:
git("log", "-1", merge_commit, output=str, error=os.devnull,
fail_on_error=False)
if git.returncode != 0:
tty.error('Missing commit: {0}'.format(merge_commit))
return False
# Next attempt to clone your local spack repo into the repro dir
with fs.working_dir(repro_dir):
clone_out = git("clone", spack_git_path,
output=str, error=os.devnull,
fail_on_error=False)
if git.returncode != 0:
tty.error('Unable to clone your local spac repo:')
tty.msg(clone_out)
return False
# Finally, attempt to put the cloned repo into the same state used during
# the pipeline build job
repro_spack_path = os.path.join(repro_dir, 'spack')
with fs.working_dir(repro_spack_path):
co_out = git("checkout", checkout_commit,
output=str, error=os.devnull,
fail_on_error=False)
if git.returncode != 0:
tty.error('Unable to checkout {0}'.format(checkout_commit))
tty.msg(co_out)
return False
if merge_commit:
merge_out = git("-c", "user.name=cirepro", "-c",
"user.email=user@email.org", "merge",
"--no-edit", merge_commit,
output=str, error=os.devnull,
fail_on_error=False)
if git.returncode != 0:
tty.error('Unable to merge {0}'.format(merge_commit))
tty.msg(merge_out)
return False
return True
def reproduce_ci_job(url, work_dir):
download_and_extract_artifacts(url, work_dir)
lock_file = fs.find(work_dir, 'spack.lock')[0]
concrete_env_dir = os.path.dirname(lock_file)
tty.debug('Concrete environment directory: {0}'.format(
concrete_env_dir))
yaml_files = fs.find(work_dir, ['*.yaml', '*.yml'])
tty.debug('yaml files:')
for yaml_file in yaml_files:
tty.debug(' {0}'.format(yaml_file))
pipeline_yaml = None
pipeline_variables = None
# Try to find the dynamically generated pipeline yaml file in the
# reproducer. If the user did not put it in the artifacts root,
# but rather somewhere else and exported it as an artifact from
# that location, we won't be able to find it.
for yf in yaml_files:
with open(yf) as y_fd:
yaml_obj = syaml.load(y_fd)
if 'variables' in yaml_obj and 'stages' in yaml_obj:
pipeline_yaml = yaml_obj
pipeline_variables = pipeline_yaml['variables']
if pipeline_yaml:
tty.debug('\n{0} is likely your pipeline file'.format(yf))
# Find the install script in the unzipped artifacts and make it executable
install_script = fs.find(work_dir, 'install.sh')[0]
st = os.stat(install_script)
os.chmod(install_script, st.st_mode | stat.S_IEXEC)
# Find the repro details file. This just includes some values we wrote
# during `spack ci rebuild` to make reproduction easier. E.g. the job
# name is written here so we can easily find the configuration of the
# job from the generated pipeline file.
repro_file = fs.find(work_dir, 'repro.json')[0]
repro_details = None
with open(repro_file) as fd:
repro_details = json.load(fd)
repro_dir = os.path.dirname(repro_file)
rel_repro_dir = repro_dir.replace(work_dir, '').lstrip(os.path.sep)
# Find the spack info text file that should contain the git log
# of the HEAD commit used during the CI build
spack_info_file = fs.find(work_dir, 'spack_info.txt')[0]
with open(spack_info_file) as fd:
spack_info = fd.read()
# Access the specific job configuration
job_name = repro_details['job_name']
job_yaml = None
if job_name in pipeline_yaml:
job_yaml = pipeline_yaml[job_name]
if job_yaml:
tty.debug('Found job:')
tty.debug(job_yaml)
job_image = None
setup_result = False
if 'image' in job_yaml:
job_image_elt = job_yaml['image']
if 'name' in job_image_elt:
job_image = job_image_elt['name']
else:
job_image = job_image_elt
tty.msg('Job ran with the following image: {0}'.format(job_image))
# Because we found this job was run with a docker image, so we will try
# to print a "docker run" command that bind-mounts the directory where
# we extracted the artifacts.
# Destination of bind-mounted reproduction directory. It makes for a
# more faithful reproducer if everything appears to run in the same
# absolute path used during the CI build.
mount_as_dir = '/work'
if pipeline_variables:
artifacts_root = pipeline_variables['SPACK_ARTIFACTS_ROOT']
mount_as_dir = os.path.dirname(artifacts_root)
mounted_repro_dir = os.path.join(mount_as_dir, rel_repro_dir)
# We will also try to clone spack from your local checkout and
# reproduce the state present during the CI build, and put that into
# the bind-mounted reproducer directory.
# Regular expressions for parsing that HEAD commit. If the pipeline
# was on the gitlab spack mirror, it will have been a merge commit made by
# gitub and pushed by the sync script. If the pipeline was run on some
# environment repo, then the tested spack commit will likely have been
# a regular commit.
commit_1 = None
commit_2 = None
commit_regex = re.compile(r"commit\s+([^\s]+)")
merge_commit_regex = re.compile(r"Merge\s+([^\s]+)\s+into\s+([^\s]+)")
# Try the more specific merge commit regex first
m = merge_commit_regex.search(spack_info)
if m:
# This was a merge commit and we captured the parents
commit_1 = m.group(1)
commit_2 = m.group(2)
else:
# Not a merge commit, just get the commit sha
m = commit_regex.search(spack_info)
if m:
commit_1 = m.group(1)
setup_result = False
if commit_1:
if commit_2:
setup_result = setup_spack_repro_version(
work_dir, commit_2, merge_commit=commit_1)
else:
setup_result = setup_spack_repro_version(work_dir, commit_1)
if not setup_result:
setup_msg = """
This can happen if the spack you are using to run this command is not a git
repo, or if it is a git repo, but it does not have the commits needed to
recreate the tested merge commit. If you are trying to reproduce a spack
PR pipeline job failure, try fetching the latest develop commits from
mainline spack and make sure you have the most recent commit of the PR
branch in your local spack repo. Then run this command again.
Alternatively, you can also manually clone spack if you know the version
you want to test.
"""
tty.error('Failed to automatically setup the tested version of spack '
'in your local reproduction directory.')
print(setup_msg)
# In cases where CI build was run on a shell runner, it might be useful
# to see what tags were applied to the job so the user knows what shell
# runner was used. But in that case in general, we cannot do nearly as
# much to set up the reproducer.
job_tags = None
if 'tags' in job_yaml:
job_tags = job_yaml['tags']
tty.msg('Job ran with the following tags: {0}'.format(job_tags))
inst_list = []
# Finally, print out some instructions to reproduce the build
if job_image:
inst_list.append('\nRun the following command:\n\n')
inst_list.append(' $ docker run --rm -v {0}:{1} -ti {2}\n'.format(
work_dir, mount_as_dir, job_image))
inst_list.append('\nOnce inside the container:\n\n')
else:
inst_list.append('\nOnce on the tagged runner:\n\n')
if not setup_result:
inst_list.append(' - Clone spack and acquire tested commit\n')
inst_list.append('{0}'.format(spack_info))
spack_root = '<spack-clone-path>'
else:
spack_root = '{0}/spack'.format(mount_as_dir)
inst_list.append(' - Activate the environment\n\n')
inst_list.append(' $ source {0}/share/spack/setup-env.sh\n'.format(
spack_root))
inst_list.append(
' $ spack env activate --without-view {0}\n\n'.format(
mounted_repro_dir if job_image else repro_dir))
inst_list.append(' - Run the install script\n\n')
inst_list.append(' $ {0}\n'.format(
os.path.join(mounted_repro_dir, 'install.sh')
if job_image else install_script))
print(''.join(inst_list))
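The commit-parsing step of `reproduce_ci_job` above can be exercised on its own. Below is a self-contained sketch of the same two-regex logic; the function name `parse_tested_commits` is invented here for illustration:

```python
import re

def parse_tested_commits(spack_info):
    """Given the saved `git log -1` text, return (commit_1, commit_2).

    A merge commit (pushed by the sync script) yields both parents; a
    regular commit yields only its sha; anything else yields (None, None).
    """
    merge_commit_regex = re.compile(r"Merge\s+([^\s]+)\s+into\s+([^\s]+)")
    commit_regex = re.compile(r"commit\s+([^\s]+)")

    # Try the more specific merge commit regex first
    m = merge_commit_regex.search(spack_info)
    if m:
        return m.group(1), m.group(2)

    # Not a merge commit, just get the commit sha
    m = commit_regex.search(spack_info)
    if m:
        return m.group(1), None

    return None, None
```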

View File

@@ -3,9 +3,13 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

+import json
 import os
 import shutil
+import stat
+import subprocess
 import sys
+import tempfile

 from six.moves.urllib.parse import urlencode
@@ -13,17 +17,21 @@
 import spack.binary_distribution as bindist
 import spack.ci as spack_ci
+import spack.config as cfg
 import spack.cmd.buildcache as buildcache
 import spack.environment as ev
 import spack.hash_types as ht
-import spack.util.executable as exe
+import spack.mirror
 import spack.util.url as url_util
+import spack.util.web as web_util

 description = "manage continuous integration pipelines"
 section = "build"
 level = "long"

+CI_REBUILD_INSTALL_BASE_ARGS = ['spack', '-d', '-v']
+

 def get_env_var(variable_name):
     if variable_name in os.environ:
@@ -75,18 +83,32 @@ def setup_parser(subparser):
 to determine whether a given spec is up to date on mirrors.  In the latter
 case, specs might be needlessly rebuilt if remote buildcache indices are out
 of date.""")
+    generate.add_argument(
+        '--artifacts-root', default=None,
+        help="""Path to root of artifacts directory.  If provided, concrete
+environment files (spack.yaml, spack.lock) will be generated under this
+path and their location sent to generated child jobs via the custom job
+variable SPACK_CONCRETE_ENVIRONMENT_PATH.""")
     generate.set_defaults(func=ci_generate)

-    # Check a spec against mirror. Rebuild, create buildcache and push to
-    # mirror (if necessary).
-    rebuild = subparsers.add_parser('rebuild', help=ci_rebuild.__doc__)
-    rebuild.set_defaults(func=ci_rebuild)
-
     # Rebuild the buildcache index associated with the mirror in the
     # active, gitlab-enabled environment.
     index = subparsers.add_parser('rebuild-index', help=ci_reindex.__doc__)
     index.set_defaults(func=ci_reindex)

+    # Handle steps of a ci build/rebuild
+    rebuild = subparsers.add_parser('rebuild', help=ci_rebuild.__doc__)
+    rebuild.set_defaults(func=ci_rebuild)
+
+    # Facilitate reproduction of a failed CI build job
+    reproduce = subparsers.add_parser('reproduce-build',
+                                      help=ci_reproduce.__doc__)
+    reproduce.add_argument('job_url', help='Url of job artifacts bundle')
+    reproduce.add_argument('--working-dir', help="Where to unpack artifacts",
+                           default=os.path.join(os.getcwd(), 'ci_reproduction'))
+    reproduce.set_defaults(func=ci_reproduce)
+

 def ci_generate(args):
     """Generate jobs file from a spack environment file containing CI info.
@@ -103,6 +125,7 @@ def ci_generate(args):
     use_dependencies = args.dependencies
     prune_dag = args.prune_dag
     index_only = args.index_only
+    artifacts_root = args.artifacts_root

     if not output_file:
         output_file = os.path.abspath(".gitlab-ci.yml")
@@ -116,7 +139,7 @@ def ci_generate(args):
     spack_ci.generate_gitlab_ci_yaml(
         env, True, output_file, prune_dag=prune_dag,
         check_index_only=index_only, run_optimizer=run_optimizer,
-        use_dependencies=use_dependencies)
+        use_dependencies=use_dependencies, artifacts_root=artifacts_root)

     if copy_yaml_to:
         copy_to_dir = os.path.dirname(copy_yaml_to)
@@ -125,51 +148,31 @@ def ci_generate(args):
         shutil.copyfile(output_file, copy_yaml_to)


-def ci_rebuild(args):
-    """This command represents a gitlab-ci job, corresponding to a single
-    release spec.  As such it must first decide whether or not the spec it
-    has been assigned to build is up to date on the remote binary mirror.
-    If it is not (i.e. the full_hash of the spec as computed locally does
-    not match the one stored in the metadata on the mirror), this script
-    will build the package, create a binary cache for it, and then push all
-    related files to the remote binary mirror.  This script also
-    communicates with a remote CDash instance to share status on the package
-    build process.
-
-    The spec to be built by this job is represented by essentially two
-    pieces of information: 1) a root spec (possibly already concrete, but
-    maybe still needing to be concretized) and 2) a package name used to
-    index that root spec (once the root is, for certain, concrete)."""
-    env = ev.get_env(args, 'ci rebuild', required=True)
+def ci_reindex(args):
+    """Rebuild the buildcache index associated with the mirror in the
+    active, gitlab-enabled environment. """
+    env = ev.get_env(args, 'ci rebuild-index', required=True)
     yaml_root = ev.config_dict(env.yaml)

-    # The following environment variables should defined in the CI
-    # infrastructre (or some other external source) in the case that the
-    # remote mirror is an S3 bucket.  The AWS keys are used to upload
-    # buildcache entries to S3 using the boto3 api.
-    #
-    # AWS_ACCESS_KEY_ID
-    # AWS_SECRET_ACCESS_KEY
-    # S3_ENDPOINT_URL (only needed for non-AWS S3 implementations)
-    #
-    # If present, we will import the SPACK_SIGNING_KEY using the
-    # "spack gpg trust" command, so it can be used both for verifying
-    # dependency buildcache entries and signing the buildcache entry we create
-    # for our target pkg.
-    #
-    # SPACK_SIGNING_KEY
+    if 'mirrors' not in yaml_root or len(yaml_root['mirrors'].values()) < 1:
+        tty.die('spack ci rebuild-index requires an env containing a mirror')

-    ci_artifact_dir = get_env_var('CI_PROJECT_DIR')
-    ci_pipeline_id = get_env_var('CI_PIPELINE_ID')
-    signing_key = get_env_var('SPACK_SIGNING_KEY')
-    root_spec = get_env_var('SPACK_ROOT_SPEC')
-    job_spec_pkg_name = get_env_var('SPACK_JOB_SPEC_PKG_NAME')
-    compiler_action = get_env_var('SPACK_COMPILER_ACTION')
-    cdash_build_name = get_env_var('SPACK_CDASH_BUILD_NAME')
-    related_builds = get_env_var('SPACK_RELATED_BUILDS_CDASH')
-    pr_env_var = get_env_var('SPACK_IS_PR_PIPELINE')
-    pr_mirror_url = get_env_var('SPACK_PR_MIRROR_URL')
+    ci_mirrors = yaml_root['mirrors']
+    mirror_urls = [url for url in ci_mirrors.values()]
+    remote_mirror_url = mirror_urls[0]

+    buildcache.update_index(remote_mirror_url, update_keys=True)
+
+
+def ci_rebuild(args):
+    """Check a single spec against the remote mirror, and rebuild it from
+    source if the mirror does not contain the full hash match of the spec
+    as computed locally. """
+    env = ev.get_env(args, 'ci rebuild', required=True)
+
+    # Make sure the environment is "gitlab-enabled", or else there's nothing
+    # to do.
+    yaml_root = ev.config_dict(env.yaml)
     gitlab_ci = None
     if 'gitlab-ci' in yaml_root:
         gitlab_ci = yaml_root['gitlab-ci']
@@ -177,6 +180,37 @@ def ci_rebuild(args):
     if not gitlab_ci:
         tty.die('spack ci rebuild requires an env containing gitlab-ci cfg')

+    # Grab the environment variables we need.  These either come from the
+    # pipeline generation step ("spack ci generate"), where they were written
+    # out as variables, or else provided by GitLab itself.
+    pipeline_artifacts_dir = get_env_var('SPACK_ARTIFACTS_ROOT')
+    job_log_dir = get_env_var('SPACK_JOB_LOG_DIR')
+    repro_dir = get_env_var('SPACK_JOB_REPRO_DIR')
+    local_mirror_dir = get_env_var('SPACK_LOCAL_MIRROR_DIR')
+    concrete_env_dir = get_env_var('SPACK_CONCRETE_ENV_DIR')
+    ci_pipeline_id = get_env_var('CI_PIPELINE_ID')
+    ci_job_name = get_env_var('CI_JOB_NAME')
+    signing_key = get_env_var('SPACK_SIGNING_KEY')
+    root_spec = get_env_var('SPACK_ROOT_SPEC')
+    job_spec_pkg_name = get_env_var('SPACK_JOB_SPEC_PKG_NAME')
+    compiler_action = get_env_var('SPACK_COMPILER_ACTION')
+    cdash_build_name = get_env_var('SPACK_CDASH_BUILD_NAME')
+    related_builds = get_env_var('SPACK_RELATED_BUILDS_CDASH')
+    pr_env_var = get_env_var('SPACK_IS_PR_PIPELINE')
+    dev_env_var = get_env_var('SPACK_IS_DEVELOP_PIPELINE')
+    pr_mirror_url = get_env_var('SPACK_PR_MIRROR_URL')
+    remote_mirror_url = get_env_var('SPACK_REMOTE_MIRROR_URL')
+
+    # Debug print some of the key environment variables we should have received
+    tty.debug('pipeline_artifacts_dir = {0}'.format(pipeline_artifacts_dir))
+    tty.debug('root_spec = {0}'.format(root_spec))
+    tty.debug('remote_mirror_url = {0}'.format(remote_mirror_url))
+    tty.debug('job_spec_pkg_name = {0}'.format(job_spec_pkg_name))
+    tty.debug('compiler_action = {0}'.format(compiler_action))
+
+    # Query the environment manifest to find out whether we're reporting to a
+    # CDash instance, and if so, gather some information from the manifest to
+    # support that task.
     enable_cdash = False
     if 'cdash' in yaml_root:
         enable_cdash = True
@@ -188,6 +222,7 @@ def ci_rebuild(args):
         eq_idx = proj_enc.find('=') + 1
         cdash_project_enc = proj_enc[eq_idx:]
         cdash_site = ci_cdash['site']
+        cdash_id_path = os.path.join(repro_dir, 'cdash_id.txt')

         tty.debug('cdash_base_url = {0}'.format(cdash_base_url))
         tty.debug('cdash_project = {0}'.format(cdash_project))
         tty.debug('cdash_project_enc = {0}'.format(cdash_project_enc))
@@ -196,32 +231,17 @@ def ci_rebuild(args):
         tty.debug('related_builds = {0}'.format(related_builds))
         tty.debug('job_spec_buildgroup = {0}'.format(job_spec_buildgroup))

-    remote_mirror_url = None
-    if 'mirrors' in yaml_root:
-        ci_mirrors = yaml_root['mirrors']
-        mirror_urls = [url for url in ci_mirrors.values()]
-        remote_mirror_url = mirror_urls[0]
-
-    if not remote_mirror_url:
-        tty.die('spack ci rebuild requires an env containing a mirror')
-
-    tty.debug('ci_artifact_dir = {0}'.format(ci_artifact_dir))
-    tty.debug('root_spec = {0}'.format(root_spec))
-    tty.debug('remote_mirror_url = {0}'.format(remote_mirror_url))
-    tty.debug('job_spec_pkg_name = {0}'.format(job_spec_pkg_name))
-    tty.debug('compiler_action = {0}'.format(compiler_action))
-
-    cdash_report_dir = os.path.join(ci_artifact_dir, 'cdash_report')
-    temp_dir = os.path.join(ci_artifact_dir, 'jobs_scratch_dir')
-    job_log_dir = os.path.join(temp_dir, 'logs')
-    spec_dir = os.path.join(temp_dir, 'specs')
-    local_mirror_dir = os.path.join(ci_artifact_dir, 'local_mirror')
-    build_cache_dir = os.path.join(local_mirror_dir, 'build_cache')
-
+    # Is this a pipeline run on a spack PR or a merge to develop?  It might
+    # be neither, e.g. a pipeline run on some environment repository.
     spack_is_pr_pipeline = True if pr_env_var == 'True' else False
+    spack_is_develop_pipeline = True if dev_env_var == 'True' else False

+    # Figure out what is our temporary storage mirror: Is it artifacts
+    # buildcache?  Or temporary-storage-url-prefix?  In some cases we need to
+    # force something or pipelines might not have a way to propagate build
+    # artifacts from upstream to downstream jobs.
     pipeline_mirror_url = None
     temp_storage_url_prefix = None
     if 'temporary-storage-url-prefix' in gitlab_ci:
         temp_storage_url_prefix = gitlab_ci['temporary-storage-url-prefix']
@ -245,208 +265,319 @@ def ci_rebuild(args):
pipeline_mirror_url) pipeline_mirror_url)
tty.debug(mirror_msg) tty.debug(mirror_msg)
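The temporary-storage decision above (artifacts buildcache vs. `temporary-storage-url-prefix`) can be sketched as a small pure function. This is only an illustration; the helper name and the per-pipeline URL layout are hypothetical, not part of the PR:

```python
def choose_pipeline_mirror(gitlab_ci, ci_pipeline_id):
    """Sketch: prefer an explicit temporary-storage-url-prefix, scoped to
    this pipeline; otherwise fall back to the artifacts buildcache if it
    is enabled; otherwise there is no per-pipeline mirror at all."""
    prefix = gitlab_ci.get('temporary-storage-url-prefix')
    if prefix:
        # Scope the mirror to this pipeline so concurrent pipelines
        # do not clobber each other's binaries.
        return '{0}/{1}'.format(prefix.rstrip('/'), ci_pipeline_id)
    if gitlab_ci.get('enable-artifacts-buildcache', False):
        # Stand-in value: an artifacts-relative local mirror path
        return 'local_mirror'
    return None

assert choose_pipeline_mirror(
    {'temporary-storage-url-prefix': 's3://bucket/prefix'}, '123') == \
    's3://bucket/prefix/123'
assert choose_pipeline_mirror({}, '123') is None
```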
# Whatever form of root_spec we got, use it to get a map giving us concrete
# specs for this job and all of its dependencies.
spec_map = spack_ci.get_concrete_specs(
    env, root_spec, job_spec_pkg_name, related_builds, compiler_action)
job_spec = spec_map[job_spec_pkg_name]

job_spec_yaml_file = '{0}.yaml'.format(job_spec_pkg_name)
job_spec_yaml_path = os.path.join(repro_dir, job_spec_yaml_file)

# To provide logs, cdash reports, etc for developer download/perusal,
# these things have to be put into artifacts.  This means downstream
# jobs that "need" this job will get those artifacts too.  So here we
# need to clean out the artifacts we may have got from upstream jobs.
cdash_report_dir = os.path.join(pipeline_artifacts_dir, 'cdash_report')
if os.path.exists(cdash_report_dir):
    shutil.rmtree(cdash_report_dir)

if os.path.exists(job_log_dir):
    shutil.rmtree(job_log_dir)

if os.path.exists(repro_dir):
    shutil.rmtree(repro_dir)

# Now that we removed them if they existed, create the directories we
# need for storing artifacts.  The cdash_report directory will be
# created internally if needed.
os.makedirs(job_log_dir)
os.makedirs(repro_dir)

# Copy the concrete environment files to the repro directory so we can
# expose them as artifacts and not conflict with the concrete environment
# files we got as artifacts from the upstream pipeline generation job.
# Try to cast a slightly wider net too, and hopefully get the generated
# pipeline yaml.  If we miss it, the user will still be able to go to the
# pipeline generation job and get it from there.
target_dirs = [
    concrete_env_dir,
    pipeline_artifacts_dir
]

for dir_to_list in target_dirs:
    for file_name in os.listdir(dir_to_list):
        src_file = os.path.join(dir_to_list, file_name)
        if os.path.isfile(src_file):
            dst_file = os.path.join(repro_dir, file_name)
            shutil.copyfile(src_file, dst_file)

# If signing key was provided via "SPACK_SIGNING_KEY", then try to
# import it.
if signing_key:
    spack_ci.import_signing_key(signing_key)

# Depending on the specifics of this job, we might need to turn on the
# "config:install_missing_compilers" option (to build this job spec
# with a bootstrapped compiler), or possibly run "spack compiler find"
# (to build a bootstrap compiler or one of its deps in a
# compiler-agnostic way), or maybe do nothing at all (to build a spec
# using a compiler already installed on the target system).
spack_ci.configure_compilers(compiler_action)

# Write this job's spec yaml into the reproduction directory, and it will
# also be used in the generated "spack install" command to install the spec
tty.debug('job concrete spec path: {0}'.format(job_spec_yaml_path))
with open(job_spec_yaml_path, 'w') as fd:
    fd.write(job_spec.to_yaml(hash=ht.build_hash))

# Write the concrete root spec yaml into the reproduction directory
root_spec_yaml_path = os.path.join(repro_dir, 'root.yaml')
with open(root_spec_yaml_path, 'w') as fd:
    fd.write(spec_map['root'].to_yaml(hash=ht.build_hash))

# Write some other details to aid in reproduction into an artifact
repro_file = os.path.join(repro_dir, 'repro.json')
repro_details = {
    'job_name': ci_job_name,
    'job_spec_yaml': job_spec_yaml_file,
    'root_spec_yaml': 'root.yaml'
}
with open(repro_file, 'w') as fd:
    fd.write(json.dumps(repro_details))

# Write information about spack into an artifact in the repro dir
spack_info = spack_ci.get_spack_info()
spack_info_file = os.path.join(repro_dir, 'spack_info.txt')
with open(spack_info_file, 'w') as fd:
    fd.write('\n{0}\n'.format(spack_info))

# If we decided there should be a temporary storage mechanism, add that
# mirror now so it's used when we check for a full hash match already
# built for this spec.
if pipeline_mirror_url:
    spack.mirror.add(spack_ci.TEMP_STORAGE_MIRROR_NAME,
                     pipeline_mirror_url,
                     cfg.default_modify_scope())

cdash_build_id = None
cdash_build_stamp = None

# Check configured mirrors for a built spec with a matching full hash
matches = bindist.get_mirrors_for_spec(
    job_spec, full_hash_match=True, index_only=False)

if matches:
    # Got a full hash match on at least one configured mirror.  All
    # matches represent the fully up-to-date spec, so should all be
    # equivalent.  If artifacts mirror is enabled, we just pick one
    # of the matches and download the buildcache files from there to
    # the artifacts, so they're available to be used by dependent
    # jobs in subsequent stages.
    tty.msg('No need to rebuild {0}, found full hash match at: '.format(
        job_spec_pkg_name))
    for match in matches:
        tty.msg('    {0}'.format(match['mirror_url']))

    if enable_artifacts_mirror:
        matching_mirror = matches[0]['mirror_url']
        build_cache_dir = os.path.join(local_mirror_dir, 'build_cache')
        tty.debug('Getting {0} buildcache from {1}'.format(
            job_spec_pkg_name, matching_mirror))
        tty.debug('Downloading to {0}'.format(build_cache_dir))
        buildcache.download_buildcache_files(
            job_spec, build_cache_dir, True, matching_mirror)

    # Now we are done and successful
    sys.exit(0)
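The short-circuit above can be summarized: any full-hash match means the spec is already up to date, all matches are equivalent, and the first mirror is as good as any for pulling buildcache files into the artifacts. A minimal sketch (the helper name is hypothetical, for illustration only):

```python
def pick_download_mirror(matches):
    """Sketch of the match handling above."""
    # No match anywhere: the caller must rebuild the spec from source
    if not matches:
        return None
    # All full-hash matches represent the same up-to-date spec, so just
    # take the first mirror url for the buildcache download
    return matches[0]['mirror_url']

assert pick_download_mirror([]) is None
assert pick_download_mirror(
    [{'mirror_url': 'https://mirror.a'},
     {'mirror_url': 'https://mirror.b'}]) == 'https://mirror.a'
```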
# No full hash match anywhere means we need to rebuild spec

# Start with spack arguments
install_args = [base_arg for base_arg in CI_REBUILD_INSTALL_BASE_ARGS]

config = cfg.get('config')
if not config['verify_ssl']:
    install_args.append('-k')

install_args.extend([
    'install',
    '--keep-stage',
    '--require-full-hash-match',
])

can_verify = spack_ci.can_verify_binaries()
verify_binaries = can_verify and spack_is_pr_pipeline is False
if not verify_binaries:
    install_args.append('--no-check-signature')

# If CDash reporting is enabled, we first register this build with
# the specified CDash instance, then relate the build to those of
# its dependencies.
if enable_cdash:
    tty.debug('CDash: Registering build')
    (cdash_build_id,
     cdash_build_stamp) = spack_ci.register_cdash_build(
        cdash_build_name, cdash_base_url, cdash_project,
        cdash_site, job_spec_buildgroup)

    cdash_upload_url = '{0}/submit.php?project={1}'.format(
        cdash_base_url, cdash_project_enc)

    install_args.extend([
        '--cdash-upload-url', cdash_upload_url,
        '--cdash-build', cdash_build_name,
        '--cdash-site', cdash_site,
        '--cdash-buildstamp', cdash_build_stamp,
    ])

    tty.debug('CDash: Relating build with dependency builds')
    spack_ci.relate_cdash_builds(
        spec_map, cdash_base_url, cdash_build_id, cdash_project,
        [pipeline_mirror_url, pr_mirror_url, remote_mirror_url])

    # store the cdash build id on disk for later
    with open(cdash_id_path, 'w') as fd:
        fd.write(cdash_build_id)

# A compiler action of 'FIND_ANY' means we are building a bootstrap
# compiler or one of its deps.
# TODO: when compilers are dependencies, we should include --no-add
if compiler_action != 'FIND_ANY':
    install_args.append('--no-add')

# TODO: once we have the concrete spec registry, use the DAG hash
# to identify the spec to install, rather than the concrete spec
# yaml file.
install_args.extend(['-f', job_spec_yaml_path])

tty.debug('Installing {0} from source'.format(job_spec.name))
tty.debug('spack install arguments: {0}'.format(install_args))

# Write the install command to a shell script
with open('install.sh', 'w') as fd:
    fd.write('#!/bin/bash\n\n')
    fd.write('\n# spack install command\n')
    fd.write(' '.join(['"{0}"'.format(i) for i in install_args]))
    fd.write('\n')

st = os.stat('install.sh')
os.chmod('install.sh', st.st_mode | stat.S_IEXEC)

install_copy_path = os.path.join(repro_dir, 'install.sh')
shutil.copyfile('install.sh', install_copy_path)

# Run the generated install.sh shell script as if it were being run in
# a login shell.
try:
    install_process = subprocess.Popen(['bash', '-l', './install.sh'])
    install_process.wait()
    install_exit_code = install_process.returncode
except (ValueError, subprocess.CalledProcessError, OSError) as inst:
    tty.error('Encountered error running install script')
    tty.error(inst)

# Now do the post-install tasks
tty.debug('spack install exited {0}'.format(install_exit_code))

# If a spec fails to build in a spack develop pipeline, we add it to a
# list of known broken full hashes.  This allows spack PR pipelines to
# avoid wasting compute cycles attempting to build those hashes.
if install_exit_code != 0 and spack_is_develop_pipeline:
    if 'broken-specs-url' in gitlab_ci:
        broken_specs_url = gitlab_ci['broken-specs-url']
        dev_fail_hash = job_spec.full_hash()
        broken_spec_path = url_util.join(broken_specs_url, dev_fail_hash)
        tmpdir = tempfile.mkdtemp()
        empty_file_path = os.path.join(tmpdir, 'empty.txt')

        try:
            with open(empty_file_path, 'w') as efd:
                efd.write('')
            web_util.push_to_url(
                empty_file_path,
                broken_spec_path,
                keep_original=False,
                extra_args={'ContentType': 'text/plain'})
        except Exception as err:
            # If we got some kind of S3 (access denied or other connection
            # error), the first non boto-specific class in the exception
            # hierarchy is Exception.  Just print a warning and return.
            msg = 'Error writing to broken specs list {0}: {1}'.format(
                broken_spec_path, err)
            tty.warn(msg)
        finally:
            shutil.rmtree(tmpdir)

# We generated the "spack install ..." command with "--keep-stage", so
# copy any logs from the staging directory to artifacts now
spack_ci.copy_stage_logs_to_artifacts(job_spec, job_log_dir)

# Create buildcache on remote mirror, either on pr-specific mirror or
# on the main mirror defined in the gitlab-enabled spack environment
if spack_is_pr_pipeline:
    buildcache_mirror_url = pr_mirror_url
else:
    buildcache_mirror_url = remote_mirror_url

# If the install succeeded, create a buildcache entry for this job spec
# and push it to one or more mirrors.  If the install did not succeed,
# print out some instructions on how to reproduce this build failure
# outside of the pipeline environment.
if install_exit_code == 0:
    can_sign = spack_ci.can_sign_binaries()
    sign_binaries = can_sign and spack_is_pr_pipeline is False

    # Create buildcache in either the main remote mirror, or in the
    # per-PR mirror, if this is a PR pipeline
    spack_ci.push_mirror_contents(
        env, job_spec, job_spec_yaml_path, buildcache_mirror_url,
        cdash_build_id, sign_binaries)

    # Create another copy of that buildcache in the per-pipeline
    # temporary storage mirror (this is only done if either artifacts
    # buildcache is enabled or a temporary storage url prefix is set)
    spack_ci.push_mirror_contents(
        env, job_spec, job_spec_yaml_path, pipeline_mirror_url,
        cdash_build_id, sign_binaries)
else:
    tty.debug('spack install exited non-zero, will not create buildcache')

    api_root_url = get_env_var('CI_API_V4_URL')
    ci_project_id = get_env_var('CI_PROJECT_ID')
    ci_job_id = get_env_var('CI_JOB_ID')

    repro_job_url = '{0}/projects/{1}/jobs/{2}/artifacts'.format(
        api_root_url, ci_project_id, ci_job_id)

    # Control characters cause this to be printed in blue so it stands out
    reproduce_msg = """

\033[34mTo reproduce this build locally, run:

    spack ci reproduce-build {0} [--working-dir <dir>]

If this project does not have public pipelines, you will need to first:

    export GITLAB_PRIVATE_TOKEN=<generated_token>

... then follow the printed instructions.\033[0;0m

""".format(repro_job_url)

    print(reproduce_msg)

# Tie job success/failure to the success/failure of building the spec
sys.exit(install_exit_code)
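The `install.sh` generation above quotes each argument individually so that spec strings containing shell metacharacters survive the round trip through bash. A condensed sketch of the same construction; the argument list here is illustrative only (the real base arguments come from `CI_REBUILD_INSTALL_BASE_ARGS`, not shown in this diff):

```python
# Illustrative argument list; a real job builds this from the config
install_args = ['spack', '-d', '-v', 'install', '--keep-stage',
                '--require-full-hash-match', '-f', '/tmp/repro/zlib.yaml']

# Same construction as the fd.write() calls above
script = '#!/bin/bash\n\n'
script += '\n# spack install command\n'
script += ' '.join('"{0}"'.format(i) for i in install_args)
script += '\n'

assert script.startswith('#!/bin/bash')
assert '"--require-full-hash-match"' in script
```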
def ci_reproduce(args):
    job_url = args.job_url
    work_dir = args.working_dir

    spack_ci.reproduce_ci_job(job_url, work_dir)
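`reproduce_ci_job` (implemented in `spack.ci`, not shown in this diff) consumes the artifacts written by `spack ci rebuild`, including the `repro.json` file above. A minimal sketch of reading that file back; the loader function is hypothetical, but the field names match `repro_details` exactly:

```python
import json
import os
import tempfile

def load_repro_details(repro_dir):
    """Parse the repro.json artifact written by 'spack ci rebuild'."""
    with open(os.path.join(repro_dir, 'repro.json')) as fd:
        return json.load(fd)

# Simulate the downloaded artifact for demonstration
repro_dir = tempfile.mkdtemp()
with open(os.path.join(repro_dir, 'repro.json'), 'w') as fd:
    json.dump({'job_name': 'zlib-build-job',
               'job_spec_yaml': 'zlib.yaml',
               'root_spec_yaml': 'root.yaml'}, fd)

details = load_repro_details(repro_dir)
assert details['job_spec_yaml'] == 'zlib.yaml'
assert details['root_spec_yaml'] == 'root.yaml'
```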
def ci(parser, args):

View File

@@ -130,50 +130,12 @@ def setup_parser(subparser):
def mirror_add(args):
    """Add a mirror to Spack."""
    url = url_util.format(args.url)
    spack.mirror.add(args.name, url, args.scope)


def mirror_remove(args):
    """Remove a mirror by name."""
    spack.mirror.remove(args.name, args.scope)
def mirror_set_url(args):

View File

@@ -455,6 +455,51 @@ def create(path, specs, skip_unstable_versions=False):
    return mirror_stats.stats()


def add(name, url, scope):
    """Add a named mirror in the given scope"""
    mirrors = spack.config.get('mirrors', scope=scope)
    if not mirrors:
        mirrors = syaml_dict()

    if name in mirrors:
        tty.die("Mirror with name %s already exists." % name)

    items = [(n, u) for n, u in mirrors.items()]
    items.insert(0, (name, url))
    mirrors = syaml_dict(items)
    spack.config.set('mirrors', mirrors, scope=scope)


def remove(name, scope):
    """Remove the named mirror in the given scope"""
    mirrors = spack.config.get('mirrors', scope=scope)
    if not mirrors:
        mirrors = syaml_dict()

    if name not in mirrors:
        tty.die("No mirror with name %s" % name)

    old_value = mirrors.pop(name)
    spack.config.set('mirrors', mirrors, scope=scope)

    debug_msg_url = "url %s"
    debug_msg = ["Removed mirror %s with"]
    values = [name]

    try:
        fetch_value = old_value['fetch']
        push_value = old_value['push']

        debug_msg.extend(("fetch", debug_msg_url, "and push", debug_msg_url))
        values.extend((fetch_value, push_value))
    except TypeError:
        debug_msg.append(debug_msg_url)
        values.append(old_value)

    tty.debug(" ".join(debug_msg) % tuple(values))
    tty.msg("Removed mirror %s." % name)
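One detail of `add()` worth noting: the new mirror is inserted at the front of the mapping, so it takes precedence when mirrors are searched in order. A self-contained sketch of that ordering behavior, using `OrderedDict` as a stand-in for spack's `syaml_dict`:

```python
from collections import OrderedDict

# Existing configured mirrors
mirrors = OrderedDict([('existing', 'https://old.example.com')])

# Same insert-at-front logic as add() above
items = [(n, u) for n, u in mirrors.items()]
items.insert(0, ('new', 'https://new.example.com'))
mirrors = OrderedDict(items)

# The newly added mirror is consulted first
assert list(mirrors) == ['new', 'existing']
```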
class MirrorStats(object):
    def __init__(self):
        self.present = {}

View File

@@ -3,16 +3,19 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import json
import os

import pytest

import llnl.util.filesystem as fs

import spack.ci as ci
import spack.environment as ev
import spack.error
import spack.main as spack_main
import spack.config as cfg
import spack.paths as spack_paths
import spack.spec as spec
import spack.util.web as web_util
import spack.util.gpg

import spack.ci_optimization as ci_opt
@@ -88,70 +91,155 @@ def assert_present(config):
    assert_present(last_config)
def test_get_concrete_specs(config, mutable_mock_env_path, mock_packages):
    e = ev.create('test1')
    e.add('dyninst')
    e.concretize()

    dyninst_hash = None
    hash_dict = {}

    with e as active_env:
        for s in active_env.all_specs():
            hash_dict[s.name] = s.build_hash()
            if s.name == 'dyninst':
                dyninst_hash = s.build_hash()

        assert(dyninst_hash)

        dep_builds = 'libdwarf;libelf'
        spec_map = ci.get_concrete_specs(
            active_env, dyninst_hash, 'dyninst', dep_builds, 'NONE')

        assert('root' in spec_map and 'deps' in spec_map)

        concrete_root = spec_map['root']
        assert(concrete_root.build_hash() == dyninst_hash)

        concrete_deps = spec_map['deps']
        for key, obj in concrete_deps.items():
            assert(obj.build_hash() == hash_dict[key])

        s = spec.Spec('dyninst')
        print('nonconc spec name: {0}'.format(s.name))

        spec_map = ci.get_concrete_specs(
            active_env, s.name, s.name, dep_builds, 'FIND_ANY')

        assert('root' in spec_map and 'deps' in spec_map)


class FakeWebResponder(object):
    def __init__(self, response_code=200, content_to_read=[]):
        self._resp_code = response_code
        self._content = content_to_read
        self._read = [False for c in content_to_read]

    def open(self, request):
        return self

    def getcode(self):
        return self._resp_code

    def read(self, length=None):
        if len(self._content) <= 0:
            return None

        if not self._read[-1]:
            return_content = self._content[-1]
            if length:
                self._read[-1] = True
            else:
                self._read.pop()
                self._content.pop()
            return return_content

        self._read.pop()
        self._content.pop()
        return None
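A quick driver showing `FakeWebResponder`'s consumption semantics: a sized read leaves the payload marked read (so the next call returns `None`), an unsized read consumes immediately, and content is consumed from the end of the list first. The class body below is a condensed copy of the one above so the snippet stands alone:

```python
class FakeWebResponder(object):
    def __init__(self, response_code=200, content_to_read=[]):
        self._resp_code = response_code
        self._content = list(content_to_read)
        self._read = [False for c in content_to_read]

    def open(self, request):
        return self

    def getcode(self):
        return self._resp_code

    def read(self, length=None):
        if len(self._content) <= 0:
            return None
        if not self._read[-1]:
            return_content = self._content[-1]
            if length:
                self._read[-1] = True
            else:
                self._read.pop()
                self._content.pop()
            return return_content
        self._read.pop()
        self._content.pop()
        return None

r = FakeWebResponder(content_to_read=['payload'])
assert r.read(1024) == 'payload'  # sized read returns but only marks as read
assert r.read(1024) is None       # already read: popped, nothing returned

r2 = FakeWebResponder(content_to_read=['first', 'second'])
assert r2.read() == 'second'      # content is consumed from the end
assert r2.read() == 'first'
assert r2.read() is None
```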
@pytest.mark.maybeslow
def test_register_cdash_build(monkeypatch):
    build_name = 'Some pkg'
    base_url = 'http://cdash.fake.org'
    project = 'spack'
    site = 'spacktests'
    track = 'Experimental'

    response_obj = {
        'buildid': 42
    }

    fake_responder = FakeWebResponder(
        content_to_read=[json.dumps(response_obj)])
    monkeypatch.setattr(ci, 'build_opener', lambda handler: fake_responder)
    build_id, build_stamp = ci.register_cdash_build(
        build_name, base_url, project, site, track)

    assert(build_id == 42)
def test_relate_cdash_builds(config, mutable_mock_env_path, mock_packages,
                             monkeypatch):
    e = ev.create('test1')
    e.add('dyninst')
    e.concretize()

    dyninst_hash = None
    hash_dict = {}

    with e as active_env:
        for s in active_env.all_specs():
            hash_dict[s.name] = s.build_hash()
            if s.name == 'dyninst':
                dyninst_hash = s.build_hash()

        assert(dyninst_hash)

        dep_builds = 'libdwarf;libelf'
        spec_map = ci.get_concrete_specs(
            active_env, dyninst_hash, 'dyninst', dep_builds, 'NONE')
        assert('root' in spec_map and 'deps' in spec_map)

        cdash_api_url = 'http://cdash.fake.org'
        job_build_id = '42'
        cdash_project = 'spack'
        cdashids_mirror_url = 'https://my.fake.mirror'

        dep_cdash_ids = {
            'libdwarf': 1,
            'libelf': 2
        }

        monkeypatch.setattr(ci, 'read_cdashid_from_mirror',
                            lambda s, u: dep_cdash_ids.pop(s.name))

        fake_responder = FakeWebResponder(
            content_to_read=['libdwarf', 'libelf'])
        monkeypatch.setattr(ci, 'build_opener', lambda handler: fake_responder)

        with pytest.raises(web_util.SpackWebError):
            ci.relate_cdash_builds(spec_map, cdash_api_url, job_build_id,
                                   cdash_project, [cdashids_mirror_url])

        assert(not dep_cdash_ids)

        dep_cdash_ids = {
            'libdwarf': 1,
            'libelf': 2
        }

        fake_responder._resp_code = 400
        with pytest.raises(spack.error.SpackError):
            ci.relate_cdash_builds(spec_map, cdash_api_url, job_build_id,
                                   cdash_project, [cdashids_mirror_url])

        dep_cdash_ids = {}

        # Just make sure passing None for build id doesn't result in any
        # calls to "read_cdashid_from_mirror"
        ci.relate_cdash_builds(spec_map, cdash_api_url, None, cdash_project,
                               [cdashids_mirror_url])
def test_read_write_cdash_ids(config, tmp_scope, tmpdir, mock_packages):
@@ -173,6 +261,109 @@ def test_read_write_cdash_ids(config, tmp_scope, tmpdir, mock_packages):
    assert(str(read_cdashid) == orig_cdashid)
def test_download_and_extract_artifacts(tmpdir, monkeypatch):
os.environ['GITLAB_PRIVATE_TOKEN'] = 'faketoken'
url = 'https://www.nosuchurlexists.itsfake/artifacts.zip'
working_dir = os.path.join(tmpdir.strpath, 'repro')
test_artifacts_path = os.path.join(
spack_paths.test_path, 'data', 'ci', 'gitlab', 'artifacts.zip')
with open(test_artifacts_path, 'rb') as fd:
fake_responder = FakeWebResponder(content_to_read=[fd.read()])
monkeypatch.setattr(ci, 'build_opener', lambda handler: fake_responder)
ci.download_and_extract_artifacts(url, working_dir)
found_zip = fs.find(working_dir, 'artifacts.zip')
assert(len(found_zip) == 0)
    found_install = fs.find(working_dir, 'install.sh')
    assert(len(found_install) == 1)

    fake_responder._resp_code = 400
    with pytest.raises(spack.error.SpackError):
        ci.download_and_extract_artifacts(url, working_dir)


def test_setup_spack_repro_version(tmpdir, capfd, last_two_git_commits,
                                   monkeypatch):
    c1, c2 = last_two_git_commits
    repro_dir = os.path.join(tmpdir.strpath, 'repro')
    spack_dir = os.path.join(repro_dir, 'spack')
    os.makedirs(spack_dir)

    prefix_save = spack.paths.prefix
    monkeypatch.setattr(spack.paths, 'prefix', '/garbage')

    ret = ci.setup_spack_repro_version(repro_dir, c2, c1)
    out, err = capfd.readouterr()

    assert(not ret)
    assert('Unable to find the path' in err)

    monkeypatch.setattr(spack.paths, 'prefix', prefix_save)
    monkeypatch.setattr(spack.util.executable, 'which', lambda cmd: None)

    ret = ci.setup_spack_repro_version(repro_dir, c2, c1)
    out, err = capfd.readouterr()

    assert(not ret)
    assert('requires git' in err)

    class mock_git_cmd(object):
        def __init__(self, *args, **kwargs):
            self.returncode = 0
            self.check = None

        def __call__(self, *args, **kwargs):
            if self.check:
                self.returncode = self.check(*args, **kwargs)
            else:
                self.returncode = 0

    git_cmd = mock_git_cmd()

    monkeypatch.setattr(spack.util.executable, 'which', lambda cmd: git_cmd)

    git_cmd.check = lambda *a, **k: 1 if len(a) > 2 and a[2] == c2 else 0
    ret = ci.setup_spack_repro_version(repro_dir, c2, c1)
    out, err = capfd.readouterr()

    assert(not ret)
    assert('Missing commit: {0}'.format(c2) in err)

    git_cmd.check = lambda *a, **k: 1 if len(a) > 2 and a[2] == c1 else 0
    ret = ci.setup_spack_repro_version(repro_dir, c2, c1)
    out, err = capfd.readouterr()

    assert(not ret)
    assert('Missing commit: {0}'.format(c1) in err)

    git_cmd.check = lambda *a, **k: 1 if a[0] == 'clone' else 0
    ret = ci.setup_spack_repro_version(repro_dir, c2, c1)
    out, err = capfd.readouterr()

    assert(not ret)
    assert('Unable to clone' in err)

    git_cmd.check = lambda *a, **k: 1 if a[0] == 'checkout' else 0
    ret = ci.setup_spack_repro_version(repro_dir, c2, c1)
    out, err = capfd.readouterr()

    assert(not ret)
    assert('Unable to checkout' in err)

    git_cmd.check = lambda *a, **k: 1 if 'merge' in a else 0
    ret = ci.setup_spack_repro_version(repro_dir, c2, c1)
    out, err = capfd.readouterr()

    assert(not ret)
    assert('Unable to merge {0}'.format(c1) in err)
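The failure cases above all hinge on one trick: `which` is monkeypatched to return a callable object whose `check` hook decides the returncode, so any single git subcommand (clone, checkout, merge) can be made to fail on demand. A minimal standalone sketch of that pattern, with illustrative names that are not part of spack:

```python
class MockCmd(object):
    """Stand-in for an executable: a callable whose `check` hook picks the
    returncode, so a test can fail exactly one subcommand."""

    def __init__(self):
        self.returncode = 0
        self.check = None

    def __call__(self, *args, **kwargs):
        # Delegate to the hook if one is installed, otherwise succeed.
        self.returncode = self.check(*args, **kwargs) if self.check else 0


git = MockCmd()

# Fail only when the first argument is 'clone'.
git.check = lambda *a, **k: 1 if a and a[0] == 'clone' else 0

git('clone', 'https://example.com/repo.git')
assert git.returncode == 1

git('checkout', 'some-branch')
assert git.returncode == 0
```

Because the hook sees the full argument list, each stage of a multi-step command sequence can be failed independently without touching the code under test.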
def test_ci_workarounds():
    fake_root_spec = 'x' * 544
    fake_spack_ref = 'x' * 40

View File

@@ -8,9 +8,11 @@
 import os
 import pytest
 from jsonschema import validate, ValidationError
+import shutil

 import spack
 import spack.ci as ci
+import spack.cmd.buildcache as buildcache
 import spack.compilers as compilers
 import spack.config
 import spack.environment as ev
@@ -23,7 +25,6 @@
 from spack.schema.gitlab_ci import schema as gitlab_ci_schema
 from spack.spec import Spec, CompilerSpec
 from spack.util.mock_package import MockPackageMultiRepo
-import spack.util.executable as exe
 import spack.util.spack_yaml as syaml
 import spack.util.gpg
@@ -35,7 +36,6 @@
 install_cmd = spack.main.SpackCommand('install')
 uninstall_cmd = spack.main.SpackCommand('uninstall')
 buildcache_cmd = spack.main.SpackCommand('buildcache')
-git = exe.which('git', required=True)

 pytestmark = pytest.mark.maybeslow
@@ -190,10 +190,6 @@ def _validate_needs_graph(yaml_contents, needs_graph, artifacts):
         if job_name.startswith(needs_def_name):
             # check job needs against the expected needs definition
             j_needs = job_def['needs']
-            print('job {0} needs:'.format(needs_def_name))
-            print([j['job'] for j in j_needs])
-            print('expected:')
-            print([nl for nl in needs_list])
             assert all([job_needs['job'][:job_needs['job'].index('/')]
                         in needs_list for job_needs in j_needs])
             assert(all([nl in
@@ -402,8 +398,6 @@ def test_ci_generate_with_cdash_token(tmpdir, mutable_mock_env_path,
     dir_contents = os.listdir(tmpdir.strpath)

-    print(dir_contents)
-
     assert('backup-ci.yml' in dir_contents)

     orig_file = str(tmpdir.join('.gitlab-ci.yml'))
@@ -536,8 +530,6 @@ def test_ci_generate_pkg_with_deps(tmpdir, mutable_mock_env_path,
     with open(outputfile) as f:
         contents = f.read()
-        print('generated contents: ')
-        print(contents)
         yaml_contents = syaml.load(contents)
         found = []
         for ci_key in yaml_contents.keys():
@@ -605,18 +597,14 @@ def test_ci_generate_for_pr_pipeline(tmpdir, mutable_mock_env_path,
     with open(outputfile) as f:
         contents = f.read()
-        print('generated contents: ')
-        print(contents)
         yaml_contents = syaml.load(contents)

         assert('rebuild-index' not in yaml_contents)

-        for ci_key in yaml_contents.keys():
-            if ci_key.startswith('(specs) '):
-                job_object = yaml_contents[ci_key]
-                job_vars = job_object['variables']
-                assert('SPACK_IS_PR_PIPELINE' in job_vars)
-                assert(job_vars['SPACK_IS_PR_PIPELINE'] == 'True')
+        assert('variables' in yaml_contents)
+        pipeline_vars = yaml_contents['variables']
+        assert('SPACK_IS_PR_PIPELINE' in pipeline_vars)
+        assert(pipeline_vars['SPACK_IS_PR_PIPELINE'] == 'True')


 def test_ci_generate_with_external_pkg(tmpdir, mutable_mock_env_path,
@@ -659,14 +647,23 @@ def test_ci_generate_with_external_pkg(tmpdir, mutable_mock_env_path,
     assert not any('externaltool' in key for key in yaml_contents)


-def test_ci_rebuild_basic(tmpdir, mutable_mock_env_path, env_deactivate,
-                          install_mockery, mock_packages,
-                          mock_gnupghome):
+@pytest.mark.skipif(not spack.util.gpg.has_gpg(),
+                    reason='This test requires gpg')
+def test_ci_rebuild(tmpdir, mutable_mock_env_path, env_deactivate,
+                    install_mockery, mock_packages, monkeypatch,
+                    mock_gnupghome, mock_fetch):
     working_dir = tmpdir.join('working_dir')

+    log_dir = os.path.join(working_dir.strpath, 'logs')
+    repro_dir = os.path.join(working_dir.strpath, 'repro')
+    env_dir = working_dir.join('concrete_env')
+
     mirror_dir = working_dir.join('mirror')
     mirror_url = 'file://{0}'.format(mirror_dir.strpath)

+    broken_specs_url = 's3://some-bucket/naughty-list'
+    temp_storage_url = 'file:///path/to/per/pipeline/storage'
+
     signing_key_dir = spack_paths.mock_gpg_keys_path
     signing_key_path = os.path.join(signing_key_dir, 'package-signing-key')
     with open(signing_key_path) as fd:
@@ -674,6 +671,135 @@ def test_ci_rebuild_basic(tmpdir, mutable_mock_env_path, env_deactivate,
     spack_yaml_contents = """
 spack:
+ definitions:
+   - packages: [archive-files]
+ specs:
+   - $packages
+ mirrors:
+   test-mirror: {0}
+ gitlab-ci:
+   broken-specs-url: {1}
+   temporary-storage-url-prefix: {2}
+   mappings:
+     - match:
+         - archive-files
+       runner-attributes:
+         tags:
+           - donotcare
+         image: donotcare
+ cdash:
+   build-group: Not important
+   url: https://my.fake.cdash
+   project: Not used
+   site: Nothing
+""".format(mirror_url, broken_specs_url, temp_storage_url)
+
+    filename = str(tmpdir.join('spack.yaml'))
+    with open(filename, 'w') as f:
+        f.write(spack_yaml_contents)
+
+    with tmpdir.as_cwd():
+        env_cmd('create', 'test', './spack.yaml')
+        with ev.read('test') as env:
+            with env.write_transaction():
+                env.concretize()
+                env.write()
+
+            if not os.path.exists(env_dir.strpath):
+                os.makedirs(env_dir.strpath)
+
+            shutil.copyfile(env.manifest_path,
+                            os.path.join(env_dir.strpath, 'spack.yaml'))
+            shutil.copyfile(env.lock_path,
+                            os.path.join(env_dir.strpath, 'spack.lock'))
+
+            root_spec_build_hash = None
+            job_spec_dag_hash = None
+
+            for h, s in env.specs_by_hash.items():
+                if s.name == 'archive-files':
+                    root_spec_build_hash = h
+                    job_spec_dag_hash = s.dag_hash()
+
+            assert root_spec_build_hash
+            assert job_spec_dag_hash
+
+    def fake_cdash_register(build_name, base_url, project, site, track):
+        return ('fakebuildid', 'fakestamp')
+
+    monkeypatch.setattr(ci, 'register_cdash_build', fake_cdash_register)
+
+    monkeypatch.setattr(spack.cmd.ci, 'CI_REBUILD_INSTALL_BASE_ARGS', [
+        'notcommand'
+    ])
+
+    with env_dir.as_cwd():
+        env_cmd('activate', '--without-view', '--sh', '-d', '.')
+
+        # Create environment variables as gitlab would do it
+        set_env_var('SPACK_ARTIFACTS_ROOT', working_dir.strpath)
+        set_env_var('SPACK_JOB_LOG_DIR', log_dir)
+        set_env_var('SPACK_JOB_REPRO_DIR', repro_dir)
+        set_env_var('SPACK_LOCAL_MIRROR_DIR', mirror_dir.strpath)
+        set_env_var('SPACK_CONCRETE_ENV_DIR', env_dir.strpath)
+        set_env_var('CI_PIPELINE_ID', '7192')
+        set_env_var('SPACK_SIGNING_KEY', signing_key)
+        set_env_var('SPACK_ROOT_SPEC', root_spec_build_hash)
+        set_env_var('SPACK_JOB_SPEC_DAG_HASH', job_spec_dag_hash)
+        set_env_var('SPACK_JOB_SPEC_PKG_NAME', 'archive-files')
+        set_env_var('SPACK_COMPILER_ACTION', 'NONE')
+        set_env_var('SPACK_CDASH_BUILD_NAME', '(specs) archive-files')
+        set_env_var('SPACK_RELATED_BUILDS_CDASH', '')
+        set_env_var('SPACK_REMOTE_MIRROR_URL', mirror_url)
+        set_env_var('SPACK_IS_DEVELOP_PIPELINE', 'True')
+
+        ci_cmd('rebuild', fail_on_error=False)
+
+        expected_repro_files = [
+            'install.sh',
+            'root.yaml',
+            'archive-files.yaml',
+            'spack.yaml',
+            'spack.lock'
+        ]
+        repro_files = os.listdir(repro_dir)
+        assert(all([f in repro_files for f in expected_repro_files]))
+
+        install_script_path = os.path.join(repro_dir, 'install.sh')
+        install_line = None
+        with open(install_script_path) as fd:
+            for line in fd:
+                if line.startswith('"notcommand"'):
+                    install_line = line
+        assert(install_line)
+
+        def mystrip(s):
+            return s.strip('"').rstrip('\n').rstrip('"')
+
+        install_parts = [mystrip(s) for s in install_line.split(' ')]
+
+        assert('--keep-stage' in install_parts)
+        assert('--require-full-hash-match' in install_parts)
+        assert('--no-check-signature' not in install_parts)
+        assert('--no-add' in install_parts)
+        assert('-f' in install_parts)
+        flag_index = install_parts.index('-f')
+        assert('archive-files.yaml' in install_parts[flag_index + 1])
+
+        env_cmd('deactivate')
+
+
+def test_ci_nothing_to_rebuild(tmpdir, mutable_mock_env_path, env_deactivate,
+                               install_mockery, mock_packages, monkeypatch,
+                               mock_fetch):
+    working_dir = tmpdir.join('working_dir')
+
+    mirror_dir = working_dir.join('mirror')
+    mirror_url = 'file://{0}'.format(mirror_dir.strpath)
+
+    spack_yaml_contents = """
+spack:
  definitions:
    - packages: [archive-files]
  specs:
@@ -689,14 +815,11 @@ def test_ci_rebuild_basic(tmpdir, mutable_mock_env_path, env_deactivate,
         tags:
           - donotcare
         image: donotcare
- cdash:
-   build-group: Not important
-   url: https://my.fake.cdash
-   project: Not used
-   site: Nothing
 """.format(mirror_url)

-    print('spack.yaml:\n{0}\n'.format(spack_yaml_contents))
+    install_cmd('archive-files')
+    buildcache_cmd('create', '-a', '-f', '-u', '--mirror-url',
+                   mirror_url, 'archive-files')

     filename = str(tmpdir.join('spack.yaml'))
     with open(filename, 'w') as f:
@@ -704,27 +827,40 @@ def test_ci_rebuild_basic(tmpdir, mutable_mock_env_path, env_deactivate,
     with tmpdir.as_cwd():
         env_cmd('create', 'test', './spack.yaml')
-        with ev.read('test'):
-            root_spec = ('eJyNjsGOwyAMRO/5Ct96alRFFK34ldUqcohJ6BJAQFHUry9Nk66'
-                         'UXNY3v5mxJ3qSojoDBjnqTGelDUVRQZlMIWpnBZya+nJa0Mv1Fg'
-                         'G8waRcmAQkimkHWxcF9NRptHyVEoaBkoD5i7ecLVC6yZd/YTtpc'
-                         'SIBg5Tr/mnA6mt9qTZL9CiLr7trk7StJyd/F81jKGoqoe2gVAaH'
-                         '0uT7ZwPeH9A875HaA9MfidHdHxgxjgJuTGVtIrvfHGtynjkGyzi'
-                         'xRrkHy94t1lftvv1n4AkVK3kQ')
+        with ev.read('test') as env:
+            env.concretize()
+
+            root_spec_build_hash = None
+            job_spec_dag_hash = None
+
+            for h, s in env.specs_by_hash.items():
+                if s.name == 'archive-files':
+                    root_spec_build_hash = h
+                    job_spec_dag_hash = s.dag_hash()

             # Create environment variables as gitlab would do it
-            set_env_var('CI_PROJECT_DIR', working_dir.strpath)
-            set_env_var('SPACK_SIGNING_KEY', signing_key)
-            set_env_var('SPACK_ROOT_SPEC', root_spec)
+            set_env_var('SPACK_ARTIFACTS_ROOT', working_dir.strpath)
+            set_env_var('SPACK_JOB_LOG_DIR', 'log_dir')
+            set_env_var('SPACK_JOB_REPRO_DIR', 'repro_dir')
+            set_env_var('SPACK_LOCAL_MIRROR_DIR', mirror_dir.strpath)
+            set_env_var('SPACK_CONCRETE_ENV_DIR', tmpdir.strpath)
+            set_env_var('SPACK_ROOT_SPEC', root_spec_build_hash)
+            set_env_var('SPACK_JOB_SPEC_DAG_HASH', job_spec_dag_hash)
             set_env_var('SPACK_JOB_SPEC_PKG_NAME', 'archive-files')
             set_env_var('SPACK_COMPILER_ACTION', 'NONE')
-            set_env_var('SPACK_CDASH_BUILD_NAME', '(specs) archive-files')
-            set_env_var('SPACK_RELATED_BUILDS_CDASH', '')
+            set_env_var('SPACK_REMOTE_MIRROR_URL', mirror_url)

-            rebuild_output = ci_cmd(
-                'rebuild', fail_on_error=False, output=str)
-
-            print(rebuild_output)
+            def fake_dl_method(spec, dest, require_cdashid, m_url=None):
+                print('fake download buildcache {0}'.format(spec.name))
+
+            monkeypatch.setattr(
+                buildcache, 'download_buildcache_files', fake_dl_method)
+
+            ci_out = ci_cmd('rebuild', output=str)
+
+            assert('No need to rebuild archive-files' in ci_out)
+            assert('fake download buildcache archive-files' in ci_out)
+
+            env_cmd('deactivate')


 @pytest.mark.disable_clean_stage_check
@@ -768,8 +904,6 @@ def test_push_mirror_contents(tmpdir, mutable_mock_env_path, env_deactivate,
       image: basicimage
 """.format(mirror_url)

-    print('spack.yaml:\n{0}\n'.format(spack_yaml_contents))
-
     filename = str(tmpdir.join('spack.yaml'))
     with open(filename, 'w') as f:
         f.write(spack_yaml_contents)
@@ -778,7 +912,7 @@ def test_push_mirror_contents(tmpdir, mutable_mock_env_path, env_deactivate,
         env_cmd('create', 'test', './spack.yaml')
         with ev.read('test') as env:
             spec_map = ci.get_concrete_specs(
-                'patchelf', 'patchelf', '', 'FIND_ANY')
+                env, 'patchelf', 'patchelf', '', 'FIND_ANY')
             concrete_spec = spec_map['patchelf']
             spec_yaml = concrete_spec.to_yaml(hash=ht.build_hash)
             yaml_path = str(tmpdir.join('spec.yaml'))
@@ -973,8 +1107,6 @@ def test_ci_generate_override_runner_attrs(tmpdir, mutable_mock_env_path,
     with open(outputfile) as f:
         contents = f.read()
-        print('generated contents: ')
-        print(contents)
         yaml_contents = syaml.load(contents)

         assert('variables' in yaml_contents)
@@ -986,7 +1118,6 @@ def test_ci_generate_override_runner_attrs(tmpdir, mutable_mock_env_path,
         for ci_key in yaml_contents.keys():
             if '(specs) b' in ci_key:
-                print('Should not have staged "b" w/out a match')
                 assert(False)
             if '(specs) a' in ci_key:
                 # Make sure a's attributes override variables, and all the
@@ -1123,9 +1254,9 @@ def test_ci_rebuild_index(tmpdir, mutable_mock_env_path, env_deactivate,
     with tmpdir.as_cwd():
         env_cmd('create', 'test', './spack.yaml')
-        with ev.read('test'):
+        with ev.read('test') as env:
             spec_map = ci.get_concrete_specs(
-                'callpath', 'callpath', '', 'FIND_ANY')
+                env, 'callpath', 'callpath', '', 'FIND_ANY')
             concrete_spec = spec_map['callpath']
             spec_yaml = concrete_spec.to_yaml(hash=ht.build_hash)
             yaml_path = str(tmpdir.join('spec.yaml'))
@@ -1388,8 +1519,6 @@ def test_ci_generate_temp_storage_url(tmpdir, mutable_mock_env_path,
     with open(outputfile) as of:
         pipeline_doc = syaml.load(of.read())

-        print(pipeline_doc)
-
         assert('cleanup' in pipeline_doc)
         cleanup_job = pipeline_doc['cleanup']
@@ -1456,3 +1585,110 @@ def test_ci_generate_read_broken_specs_url(tmpdir, mutable_mock_env_path,
     ex = '({0})'.format(flattendeps_full_hash)
     assert(ex not in output)
+
+
+def test_ci_reproduce(tmpdir, mutable_mock_env_path, env_deactivate,
+                      install_mockery, mock_packages, monkeypatch,
+                      last_two_git_commits):
+    working_dir = tmpdir.join('repro_dir')
+    image_name = 'org/image:tag'
+
+    spack_yaml_contents = """
+spack:
+ definitions:
+   - packages: [archive-files]
+ specs:
+   - $packages
+ mirrors:
+   test-mirror: file:///some/fake/mirror
+ gitlab-ci:
+   mappings:
+     - match:
+         - archive-files
+       runner-attributes:
+         tags:
+           - donotcare
+         image: {0}
+""".format(image_name)
+
+    filename = str(tmpdir.join('spack.yaml'))
+    with open(filename, 'w') as f:
+        f.write(spack_yaml_contents)
+
+    with tmpdir.as_cwd():
+        env_cmd('create', 'test', './spack.yaml')
+        with ev.read('test') as env:
+            with env.write_transaction():
+                env.concretize()
+                env.write()
+
+            if not os.path.exists(working_dir.strpath):
+                os.makedirs(working_dir.strpath)
+
+            shutil.copyfile(env.manifest_path,
+                            os.path.join(working_dir.strpath, 'spack.yaml'))
+            shutil.copyfile(env.lock_path,
+                            os.path.join(working_dir.strpath, 'spack.lock'))
+
+            root_spec = None
+            job_spec = None
+
+            for h, s in env.specs_by_hash.items():
+                if s.name == 'archive-files':
+                    root_spec = s
+                    job_spec = s
+
+            job_spec_yaml_path = os.path.join(
+                working_dir.strpath, 'archivefiles.yaml')
+            with open(job_spec_yaml_path, 'w') as fd:
+                fd.write(job_spec.to_yaml(hash=ht.full_hash))
+
+            root_spec_yaml_path = os.path.join(
+                working_dir.strpath, 'root.yaml')
+            with open(root_spec_yaml_path, 'w') as fd:
+                fd.write(root_spec.to_yaml(hash=ht.full_hash))
+
+            artifacts_root = os.path.join(working_dir.strpath, 'scratch_dir')
+            pipeline_path = os.path.join(artifacts_root, 'pipeline.yml')
+
+            ci_cmd('generate', '--output-file', pipeline_path,
+                   '--artifacts-root', artifacts_root)
+
+            job_name = ci.get_job_name(
+                'specs', False, job_spec, 'test-debian6-core2', None)
+
+            repro_file = os.path.join(working_dir.strpath, 'repro.json')
+            repro_details = {
+                'job_name': job_name,
+                'job_spec_yaml': 'archivefiles.yaml',
+                'root_spec_yaml': 'root.yaml'
+            }
+            with open(repro_file, 'w') as fd:
+                fd.write(json.dumps(repro_details))
+
+            install_script = os.path.join(working_dir.strpath, 'install.sh')
+            with open(install_script, 'w') as fd:
+                fd.write('#!/bin/bash\n\n#fake install\nspack install blah\n')
+
+            spack_info_file = os.path.join(
+                working_dir.strpath, 'spack_info.txt')
+            with open(spack_info_file, 'w') as fd:
+                fd.write('\nMerge {0} into {1}\n\n'.format(
+                    last_two_git_commits[1], last_two_git_commits[0]))
+
+    def fake_download_and_extract_artifacts(url, work_dir):
+        pass
+
+    monkeypatch.setattr(ci, 'download_and_extract_artifacts',
+                        fake_download_and_extract_artifacts)
+
+    rep_out = ci_cmd('reproduce-build',
+                     'https://some.domain/api/v1/projects/1/jobs/2/artifacts',
+                     '--working-dir',
+                     working_dir.strpath,
+                     output=str)
+
+    expect_out = 'docker run --rm -v {0}:{0} -ti {1}'.format(
+        working_dir.strpath, image_name)
+
+    assert(expect_out in rep_out)
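The assertion at the end of `test_ci_reproduce` expects `spack ci reproduce-build` to print a `docker run` command that bind-mounts the working directory at the identical path inside the container, so absolute paths recorded during the CI build remain valid when reproducing locally. A rough sketch of assembling that command string (the helper name is hypothetical, not spack's actual code):

```python
def docker_run_command(working_dir, image_name):
    # Mount the artifacts/working directory at the same path inside the
    # container and open an interactive shell in the CI job's image.
    return 'docker run --rm -v {0}:{0} -ti {1}'.format(working_dir, image_name)


cmd = docker_run_command('/tmp/repro_dir', 'org/image:tag')
assert cmd == 'docker run --rm -v /tmp/repro_dir:/tmp/repro_dir -ti org/image:tag'
```

Mounting at an identical path is the design choice that lets the generated `install.sh` run unmodified inside the container.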

View File

@@ -9,6 +9,7 @@
 import re
 from six.moves import builtins
 import time
+import shutil

 import pytest
@@ -1018,6 +1019,10 @@
     uninstall('-y', s.name)
     mirror('rm', 'test-mirror')

+    # Get rid of that libdwarf binary in the mirror so other tests don't try to
+    # use it and fail because of NoVerifyException
+    shutil.rmtree(mirror_dir.strpath)
+

 def test_install_env_with_tests_all(tmpdir, mock_packages, mock_fetch,
                                     install_mockery, mutable_mock_env_path):

View File

@@ -21,7 +21,9 @@ def test_load(install_mockery, mock_fetch, mock_archive, mock_packages):
     CMAKE_PREFIX_PATH is the only prefix inspection guaranteed for fake
     packages, since it keys on the prefix instead of a subdir."""
-    install('mpileaks')
+    install_out = install('mpileaks', output=str, fail_on_error=False)
+    print('spack install mpileaks')
+    print(install_out)
     mpileaks_spec = spack.spec.Spec('mpileaks').concretized()
     sh_out = load('--sh', '--only', 'package', 'mpileaks')

View File

@@ -10,6 +10,7 @@
 import json
 import os
 import os.path
+import re
 import shutil
 import tempfile
 import xml.etree.ElementTree
@@ -19,7 +20,7 @@
 import archspec.cpu.microarchitecture
 import archspec.cpu.schema

-from llnl.util.filesystem import mkdirp, remove_linked_tree
+from llnl.util.filesystem import mkdirp, remove_linked_tree, working_dir

 import spack.architecture
 import spack.compilers
@@ -45,6 +46,20 @@
 from spack.fetch_strategy import FetchError

+
+#
+# Return list of shas for latest two git commits in local spack repo
+#
+@pytest.fixture
+def last_two_git_commits(scope='session'):
+    git = spack.util.executable.which('git', required=True)
+    spack_git_path = spack.paths.prefix
+    with working_dir(spack_git_path):
+        git_log_out = git('log', '-n', '2', output=str, error=os.devnull)
+
+    regex = re.compile(r"^commit\s([^\s]+$)", re.MULTILINE)
+    yield regex.findall(git_log_out)
+

 @pytest.fixture(autouse=True)
 def clear_recorded_monkeypatches():
     yield
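The `last_two_git_commits` fixture above scrapes commit shas from plain `git log` output with a MULTILINE regex rather than shelling out with `--format` flags. The same expression applied to canned log text (the shas below are samples, no real repository involved):

```python
import re

# Sample `git log -n 2` output; the fixture runs this against the real
# spack prefix, here we use placeholder shas.
git_log_out = """commit 1111111111111111111111111111111111111111
Author: A U Thor <author@example.com>

    most recent change

commit 2222222222222222222222222222222222222222
Author: A U Thor <author@example.com>

    previous change
"""

# ^commit anchors each header line; the group grabs the run of non-space
# characters up to end of line, i.e. the sha.
regex = re.compile(r"^commit\s([^\s]+$)", re.MULTILINE)
shas = regex.findall(git_log_out)

assert shas == ['1111111111111111111111111111111111111111',
                '2222222222222222222222222222222222222222']
```

Because `git log` lists newest commits first, the fixture yields the shas newest-first, which is why the tests unpack them as `c1, c2 = last_two_git_commits` and treat the second element as the older commit.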

Binary file not shown.

View File

@@ -229,26 +229,32 @@ def _spec(spec, preferred_mirrors=None):

 def test_try_install_from_binary_cache(install_mockery, mock_packages,
-                                       monkeypatch, capsys):
-    """Tests SystemExit path for_try_install_from_binary_cache."""
-    def _mirrors_for_spec(spec, full_hash_match=False):
-        spec = spack.spec.Spec('mpi').concretized()
-        return [{
-            'mirror_url': 'notused',
-            'spec': spec,
-        }]
+                                       monkeypatch):
+    """Tests SystemExit path for_try_install_from_binary_cache.
+
+    This test does not make sense. We tell spack there is a mirror
+    with a binary for this spec and then expect it to die because there
+    are no mirrors configured."""
+    # def _mirrors_for_spec(spec, full_hash_match=False):
+    #     spec = spack.spec.Spec('mpi').concretized()
+    #     return [{
+    #         'mirror_url': 'notused',
+    #         'spec': spec,
+    #     }]

     spec = spack.spec.Spec('mpich')
     spec.concretize()

-    monkeypatch.setattr(
-        spack.binary_distribution, 'get_mirrors_for_spec', _mirrors_for_spec)
-    with pytest.raises(SystemExit):
-        inst._try_install_from_binary_cache(spec.package, False, False)
+    # monkeypatch.setattr(
+    #     spack.binary_distribution, 'get_mirrors_for_spec', _mirrors_for_spec)
+    # with pytest.raises(SystemExit):
+    #     inst._try_install_from_binary_cache(spec.package, False, False)
+    result = inst._try_install_from_binary_cache(spec.package, False, False)
+    assert(not result)

-    captured = capsys.readouterr()
-    assert 'add a spack mirror to allow download' in str(captured)
+    # captured = capsys.readouterr()
+    # assert 'add a spack mirror to allow download' in str(captured)


 def test_installer_repr(install_mockery):

View File

@@ -28,10 +28,11 @@ default:
     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
     - spack env activate --without-view .
     - spack ci generate --check-index-only
+      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
   artifacts:
     paths:
-      - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+      - "${CI_PROJECT_DIR}/jobs_scratch_dir"
   tags: ["spack", "public", "medium", "x86_64"]
   interruptible: true

View File

@@ -43,16 +43,16 @@ spack:
   - argobots
   # - ascent
   # - axom
-  # - bolt
+  - bolt
   # - caliper
   # - darshan-runtime
   - darshan-util
   # - dyninst
-  # - faodel
+  - faodel
   # - flecsi+cinch
   # - flit
   # - gasnet
-  # - ginkgo
+  - ginkgo
   # - globalarrays
   # - gotcha
   # - hdf5
@@ -68,7 +68,7 @@
   # - mercury
   # - mfem
   # - mpifileutils@develop~xattr
-  # - ninja
+  - ninja
   # - omega-h
   # - openmpi
   # - openpmd-api
@@ -115,14 +115,15 @@
   - - $e4s
     - - $arch

-  mirrors: { "mirror": "s3://spack-binaries-develop/e4s-new-cluster" }
+  mirrors: { "mirror": "s3://spack-binaries-develop/e4s" }

   gitlab-ci:
     script:
       - . "./share/spack/setup-env.sh"
       - spack --version
-      - cd share/spack/gitlab/cloud_pipelines/stacks/e4s
+      - cd ${SPACK_CONCRETE_ENV_DIR}
       - spack env activate --without-view .
+      - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       - spack -d ci rebuild
     mappings:
       - match: [cuda, dyninst, hpx, precice, strumpack, sundials, trilinos, vtk-h, vtk-m]
@@ -134,6 +135,7 @@
         image: { "name": "ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01", "entrypoint": [""] }
         tags: ["spack", "public", "large", "x86_64"]
   temporary-storage-url-prefix: "s3://spack-binaries-prs/pipeline-storage"
+  broken-specs-url: "s3://spack-binaries-develop/broken-specs"
   service-job-attributes:
     before_script:
       - . "./share/spack/setup-env.sh"

View File

@@ -495,20 +495,29 @@ _spack_ci() {
     then
         SPACK_COMPREPLY="-h --help"
     else
-        SPACK_COMPREPLY="generate rebuild rebuild-index"
+        SPACK_COMPREPLY="generate rebuild-index rebuild reproduce-build"
     fi
 }

 _spack_ci_generate() {
-    SPACK_COMPREPLY="-h --help --output-file --copy-to --optimize --dependencies --prune-dag --no-prune-dag --check-index-only"
+    SPACK_COMPREPLY="-h --help --output-file --copy-to --optimize --dependencies --prune-dag --no-prune-dag --check-index-only --artifacts-root"
+}
+
+_spack_ci_rebuild_index() {
+    SPACK_COMPREPLY="-h --help"
 }

 _spack_ci_rebuild() {
     SPACK_COMPREPLY="-h --help"
 }

-_spack_ci_rebuild_index() {
-    SPACK_COMPREPLY="-h --help"
+_spack_ci_reproduce_build() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help --working-dir"
+    else
+        SPACK_COMPREPLY=""
+    fi
 }

 _spack_clean() {