spack buildcache push --tag: create container image with multiple roots (#41077)

This PR adds a flag `--tag/-t` to `buildcache push`, which you can use like

```
$ spack mirror add my-oci-registry oci://example.com/hello/world
$ spack -e my_env buildcache push --base-image ubuntu:22.04 --tag my_custom_tag my-oci-registry
```

and lets users ship a full, installed environment as a minimal container image, where each image layer is one Spack package, on top of a base image of choice. The image can then be used as

```
$ docker run -it --rm example.com/hello/world:my_custom_tag
```

Apart from environments, users can also pick arbitrary installed specs from their database, for instance:

```
$ spack buildcache push --base-image ubuntu:22.04 --tag some_specs my-oci-registry gcc@12 cmake
$ docker run -it --rm example.com/hello/world:some_specs
```

It has many advantages over `spack containerize`:

1. No external tools required (`docker`, `buildah`, ...).
2. Creates images from locally installed Spack packages (no need to rebuild inside `docker build`, where troubleshooting build failures is notoriously hard).
3. No need for multistage builds (Spack just tarballs existing installations of runtime deps).
4. Reduced storage size / composability: when pushing multiple environments with common specs, container image layers are shared.
5. Automatic build cache: a later `spack install` of the env elsewhere speeds up, since the containerized environment is a build cache.
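The "each image layer is one Spack package" idea implies layers must be stacked so that every dependency sits below its dependents, i.e. in topological order. A minimal standalone sketch of that ordering (toy package names and a hand-written dependency mapping, not Spack's actual data structures):

```python
from graphlib import TopologicalSorter

# Toy runtime-dependency graph (hypothetical packages): pkg -> its runtime deps.
deps = {
    "gcc": {"mpfr", "gmp"},
    "mpfr": {"gmp"},
    "gmp": set(),
    "cmake": set(),
}

def layer_order(graph):
    """Order packages from lowest image layer to highest: every dependency
    is placed below the packages that link against it."""
    return list(TopologicalSorter(graph).static_order())

order = layer_order(deps)
# Dependencies always precede their dependents in the layer stack:
print(order.index("gmp") < order.index("mpfr") < order.index("gcc"))  # True
```

Spack's real implementation derives this order from the concrete spec DAG (see the `traverse_nodes(..., order="topo", deptype=("link", "run"))` call in the diff below); the sketch only illustrates the ordering constraint.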
This commit is contained in:
parent 2fda288cc5
commit 16e27ba4a6
@@ -9,34 +9,95 @@
 Container Images
 ================
 
-Spack :ref:`environments` are a great tool to create container images, but
-preparing one that is suitable for production requires some more boilerplate
-than just:
+Spack :ref:`environments` can easily be turned into container images. This page
+outlines two ways in which this can be done:
+
+1. By installing the environment on the host system, and copying the installations
+   into the container image. This approach does not require any tools like Docker
+   or Singularity to be installed.
+2. By generating a Docker or Singularity recipe that can be used to build the
+   container image. In this approach, Spack builds the software inside the
+   container runtime, not on the host system.
+
+The first approach is easiest if you already have an installed environment;
+the second approach gives more control over the container image.
+
+---------------------------
+From existing installations
+---------------------------
+
+If you already have a Spack environment installed on your system, you can
+share the binaries as an OCI compatible container image. To get started you
+just have to configure an OCI registry and run ``spack buildcache push``.
+
+.. code-block:: console
+
+   # Create and install an environment in the current directory
+   spack env create -d .
+   spack -e . add pkg-a pkg-b
+   spack -e . install
+
+   # Configure the registry
+   spack -e . mirror add --oci-username ... --oci-password ... container-registry oci://example.com/name/image
+
+   # Push the image
+   spack -e . buildcache push --base-image ubuntu:22.04 --tag my_env container-registry
+
+The resulting container image can then be run as follows:
+
+.. code-block:: console
+
+   $ docker run -it example.com/name/image:my_env
+
+The image generated by Spack consists of the specified base image with each package from the
+environment as a separate layer on top. The image is minimal by construction: it only contains the
+environment roots and their runtime dependencies.
+
+.. note::
+
+   When using registries like GHCR and Docker Hub, the ``--oci-password`` flag is not
+   the password for your account, but a personal access token you need to generate separately.
+
+The specified ``--base-image`` should have a libc that is compatible with the host system.
+For example, if your host system is Ubuntu 20.04, you can use ``ubuntu:20.04``, ``ubuntu:22.04``
+or newer: the libc in the container image must be at least the version of the host system,
+assuming ABI compatibility. It is also perfectly fine to use a completely different
+Linux distribution, as long as the libc is compatible.
+
+For convenience, Spack also turns the OCI registry into a :ref:`build cache <binary_caches_oci>`,
+so that a future ``spack install`` of the environment will simply pull the binaries from the
+registry instead of doing source builds.
+
+.. note::
+
+   When generating container images in CI, the approach above is recommended when CI jobs
+   already run in a sandboxed environment. You can simply use ``spack`` directly
+   in the CI job and push the resulting image to a registry. Subsequent CI jobs should
+   run faster because Spack can install from the same registry instead of rebuilding from
+   sources.
+
+---------------------------------------------
+Generating recipes for Docker and Singularity
+---------------------------------------------
+
+Apart from copying existing installations into container images, Spack can also
+generate recipes for container images. This is useful if you want to run Spack
+itself in a sandboxed environment instead of on the host system.
+
+Since recipes need a little bit more boilerplate than
 
 .. code-block:: docker
 
    COPY spack.yaml /environment
    RUN spack -e /environment install
 
-Additional actions may be needed to minimize the size of the
-container, or to update the system software that is installed in the base
-image, or to set up a proper entrypoint to run the image. These tasks are
-usually both necessary and repetitive, so Spack comes with a command
-to generate recipes for container images starting from a ``spack.yaml``.
+Spack provides a command to generate customizable recipes for container images. Customizations
+include minimizing the size of the image, installing packages in the base image using the system
+package manager, and setting up a proper entrypoint to run the image.
 
-.. seealso::
-
-   This page is a reference for generating recipes to build container images.
-   It means that your environment is built from scratch inside the container
-   runtime.
-
-   Since v0.21, Spack can also create container images from existing package installations
-   on your host system. See :ref:`binary_caches_oci` for more information on
-   that topic.
-
---------------------
+~~~~~~~~~~~~~~~~~~~~
 A Quick Introduction
---------------------
+~~~~~~~~~~~~~~~~~~~~
 
 Consider having a Spack environment like the following:
 
@@ -47,8 +108,8 @@ Consider having a Spack environment like the following:
     - gromacs+mpi
     - mpich
 
-Producing a ``Dockerfile`` from it is as simple as moving to the directory
-where the ``spack.yaml`` file is stored and giving the following command:
+Producing a ``Dockerfile`` from it is as simple as changing directories to
+where the ``spack.yaml`` file is stored and running the following command:
 
 .. code-block:: console
 
@@ -114,9 +175,9 @@ configuration are discussed in details in the sections below.
 
 .. _container_spack_images:
 
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 Spack Images on Docker Hub
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Docker images with Spack preinstalled and ready to be used are
 built when a release is tagged, or nightly on ``develop``. The images
@@ -186,9 +247,9 @@ by Spack use them as default base images for their ``build`` stage,
 even though handles to use custom base images provided by users are
 available to accommodate complex use cases.
 
----------------------------------
-Creating Images From Environments
----------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Configuring the Container Recipe
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Any Spack Environment can be used for the automatic generation of container
 recipes. Sensible defaults are provided for things like the base image or the
@@ -229,18 +290,18 @@ under the ``container`` attribute of environments:
 
 A detailed description of the options available can be found in the :ref:`container_config_options` section.
 
--------------------
+~~~~~~~~~~~~~~~~~~~
 Setting Base Images
--------------------
+~~~~~~~~~~~~~~~~~~~
 
 The ``images`` subsection is used to select both the image where
 Spack builds the software and the image where the built software
 is installed. This attribute can be set in different ways and
 which one to use depends on the use case at hand.
 
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+""""""""""""""""""""""""""""""""""""""""
 Use Official Spack Images From Dockerhub
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+""""""""""""""""""""""""""""""""""""""""
 
 To generate a recipe that uses an official Docker image from the
 Spack organization to build the software and the corresponding official OS image
@@ -445,9 +506,9 @@ responsibility to ensure that:
 Therefore we don't recommend its use in cases that can be otherwise
 covered by the simplified mode shown first.
 
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Singularity Definition Files
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 In addition to producing recipes in ``Dockerfile`` format Spack can produce
 Singularity Definition Files by just changing the value of the ``format``
@@ -468,9 +529,9 @@ attribute:
 The minimum version of Singularity required to build a SIF (Singularity Image Format)
 image from the recipes generated by Spack is ``3.5.3``.
 
-------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Extending the Jinja2 Templates
-------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The Dockerfile and the Singularity definition file that Spack can generate are based on
 a few Jinja2 templates that are rendered according to the environment being containerized.
@@ -591,9 +652,9 @@ The recipe that gets generated contains the two extra instruction that we added
 
 .. _container_config_options:
 
------------------------
+~~~~~~~~~~~~~~~~~~~~~~~
 Configuration Reference
------------------------
+~~~~~~~~~~~~~~~~~~~~~~~
 
 The tables below describe all the configuration options that are currently supported
 to customize the generation of container recipes:
@@ -690,13 +751,13 @@ to customize the generation of container recipes:
     - Description string
     - No
 
---------------
+~~~~~~~~~~~~~~
 Best Practices
---------------
+~~~~~~~~~~~~~~
 
-^^^
+"""
 MPI
-^^^
+"""
 Due to the dependency on Fortran for OpenMPI, which is the spack default
 implementation, consider adding ``gfortran`` to the ``apt-get install`` list.
@@ -707,9 +768,9 @@ For execution on HPC clusters, it can be helpful to import the docker
 image into Singularity in order to start a program with an *external*
 MPI. Otherwise, also add ``openssh-server`` to the ``apt-get install`` list.
 
-^^^^
+""""
 CUDA
-^^^^
+""""
 Starting from CUDA 9.0, Nvidia provides minimal CUDA images based on
 Ubuntu. Please see `their instructions <https://hub.docker.com/r/nvidia/cuda/>`_.
 Avoid double-installing CUDA by adding, e.g.
@@ -728,9 +789,9 @@ to your ``spack.yaml``.
 Users will either need ``nvidia-docker`` or e.g. Singularity to *execute*
 device kernels.
 
-^^^^^^^^^^^^^^^^^^^^^^^^^
+"""""""""""""""""""""""""
 Docker on Windows and OSX
-^^^^^^^^^^^^^^^^^^^^^^^^^
+"""""""""""""""""""""""""
 
 On Mac OS and Windows, docker runs on a hypervisor that is not allocated much
 memory by default, and some spack packages may fail to build due to lack of
@@ -37,6 +37,7 @@
 import spack.util.crypto
 import spack.util.url as url_util
 import spack.util.web as web_util
+from spack import traverse
 from spack.build_environment import determine_number_of_jobs
 from spack.cmd import display_specs
 from spack.cmd.common import arguments
@@ -122,7 +123,14 @@ def setup_parser(subparser: argparse.ArgumentParser):
         help="stop pushing on first failure (default is best effort)",
     )
     push.add_argument(
-        "--base-image", default=None, help="specify the base image for the buildcache. "
+        "--base-image", default=None, help="specify the base image for the buildcache"
+    )
+    push.add_argument(
+        "--tag",
+        "-t",
+        default=None,
+        help="when pushing to an OCI registry, tag an image containing all root specs and their "
+        "runtime dependencies",
     )
     arguments.add_common_arguments(push, ["specs", "jobs"])
     push.set_defaults(func=push_fn)
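The new command-line options above can be exercised in isolation with a plain `argparse` parser. This is a minimal sketch mirroring just the two options from the diff, not the full Spack parser:

```python
import argparse

# Sketch of the flags added by this PR, on a throwaway parser.
parser = argparse.ArgumentParser(prog="spack buildcache push")
parser.add_argument(
    "--base-image", default=None, help="specify the base image for the buildcache"
)
parser.add_argument(
    "--tag",
    "-t",
    default=None,
    help="when pushing to an OCI registry, tag an image containing all root specs and their "
    "runtime dependencies",
)

args = parser.parse_args(["--base-image", "ubuntu:22.04", "-t", "my_custom_tag"])
print(args.base_image, args.tag)  # ubuntu:22.04 my_custom_tag
```

Both flags default to `None`, so omitting `--tag` keeps the previous behavior (per-spec manifests only, no combined multi-root image).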
@@ -331,9 +339,9 @@ def push_fn(args):
     )
 
     if args.specs or args.spec_file:
-        specs = _matching_specs(spack.cmd.parse_specs(args.specs or args.spec_file))
+        roots = _matching_specs(spack.cmd.parse_specs(args.specs or args.spec_file))
     else:
-        specs = spack.cmd.require_active_env("buildcache push").all_specs()
+        roots = spack.cmd.require_active_env(cmd_name="buildcache push").concrete_roots()
 
     if args.allow_root:
         tty.warn(
@@ -344,9 +352,9 @@ def push_fn(args):
 
     # Check if this is an OCI image.
     try:
-        image_ref = spack.oci.oci.image_from_mirror(mirror)
+        target_image = spack.oci.oci.image_from_mirror(mirror)
     except ValueError:
-        image_ref = None
+        target_image = None
 
     push_url = mirror.push_url
 
@@ -357,7 +365,7 @@ def push_fn(args):
     unsigned = not (args.key or args.signed)
 
     # For OCI images, we require dependencies to be pushed for now.
-    if image_ref:
+    if target_image:
         if "dependencies" not in args.things_to_install:
             tty.die("Dependencies must be pushed for OCI images.")
         if not unsigned:
@@ -368,7 +376,7 @@ def push_fn(args):
 
     # This is a list of installed, non-external specs.
     specs = bindist.specs_to_be_packaged(
-        specs,
+        roots,
         root="package" in args.things_to_install,
         dependencies="dependencies" in args.things_to_install,
     )
@@ -381,11 +389,35 @@ def push_fn(args):
     failed = []
 
     # TODO: unify this logic in the future.
-    if image_ref:
+    if target_image:
+        base_image = ImageReference.from_string(args.base_image) if args.base_image else None
         with tempfile.TemporaryDirectory(
             dir=spack.stage.get_stage_root()
         ) as tmpdir, _make_pool() as pool:
-            skipped = _push_oci(args, image_ref, specs, tmpdir, pool)
+            skipped, base_images, checksums = _push_oci(
+                target_image=target_image,
+                base_image=base_image,
+                installed_specs_with_deps=specs,
+                force=args.force,
+                tmpdir=tmpdir,
+                pool=pool,
+            )
+
+            # Apart from creating manifests for each individual spec, we allow users to create a
+            # separate image tag for all root specs and their runtime dependencies.
+            if args.tag:
+                tagged_image = target_image.with_tag(args.tag)
+                # _push_oci may not populate base_images if binaries were already in the registry
+                for spec in roots:
+                    _update_base_images(
+                        base_image=base_image,
+                        target_image=target_image,
+                        spec=spec,
+                        base_image_cache=base_images,
+                    )
+                _put_manifest(base_images, checksums, tagged_image, tmpdir, None, None, *roots)
+                tty.info(f"Tagged {tagged_image}")
 
     else:
         skipped = []
@@ -446,11 +478,11 @@ def push_fn(args):
     # Update the index if requested
     # TODO: remove update index logic out of bindist; should be once after all specs are pushed
     # not once per spec.
-    if image_ref and len(skipped) < len(specs) and args.update_index:
+    if target_image and len(skipped) < len(specs) and args.update_index:
         with tempfile.TemporaryDirectory(
             dir=spack.stage.get_stage_root()
         ) as tmpdir, _make_pool() as pool:
-            _update_index_oci(image_ref, tmpdir, pool)
+            _update_index_oci(target_image, tmpdir, pool)
 
 
 def _get_spack_binary_blob(image_ref: ImageReference) -> Optional[spack.oci.oci.Blob]:
@@ -516,17 +548,21 @@ def _archspec_to_gooarch(spec: spack.spec.Spec) -> str:
 def _put_manifest(
     base_images: Dict[str, Tuple[dict, dict]],
     checksums: Dict[str, spack.oci.oci.Blob],
-    spec: spack.spec.Spec,
     image_ref: ImageReference,
     tmpdir: str,
+    extra_config: Optional[dict],
+    annotations: Optional[dict],
+    *specs: spack.spec.Spec,
 ):
-    architecture = _archspec_to_gooarch(spec)
+    architecture = _archspec_to_gooarch(specs[0])
 
     dependencies = list(
         reversed(
             list(
                 s
-                for s in spec.traverse(order="topo", deptype=("link", "run"), root=True)
+                for s in traverse.traverse_nodes(
+                    specs, order="topo", deptype=("link", "run"), root=True
+                )
                 if not s.external
             )
         )
@@ -535,7 +571,7 @@ def _put_manifest(
     base_manifest, base_config = base_images[architecture]
     env = _retrieve_env_dict_from_config(base_config)
 
-    spack.user_environment.environment_modifications_for_specs(spec).apply_modifications(env)
+    spack.user_environment.environment_modifications_for_specs(*specs).apply_modifications(env)
 
     # Create an oci.image.config file
     config = copy.deepcopy(base_config)
@@ -547,20 +583,14 @@ def _put_manifest(
     # Set the environment variables
     config["config"]["Env"] = [f"{k}={v}" for k, v in env.items()]
 
-    # From the OCI v1.0 spec:
-    # > Any extra fields in the Image JSON struct are considered implementation
-    # > specific and MUST be ignored by any implementations which are unable to
-    # > interpret them.
-    # We use this to store the Spack spec, so we can use it to create an index.
-    spec_dict = spec.to_dict(hash=ht.dag_hash)
-    spec_dict["buildcache_layout_version"] = 1
-    spec_dict["binary_cache_checksum"] = {
-        "hash_algorithm": "sha256",
-        "hash": checksums[spec.dag_hash()].compressed_digest.digest,
-    }
-    config.update(spec_dict)
+    if extra_config:
+        # From the OCI v1.0 spec:
+        # > Any extra fields in the Image JSON struct are considered implementation
+        # > specific and MUST be ignored by any implementations which are unable to
+        # > interpret them.
+        config.update(extra_config)
 
-    config_file = os.path.join(tmpdir, f"{spec.dag_hash()}.config.json")
+    config_file = os.path.join(tmpdir, f"{specs[0].dag_hash()}.config.json")
 
     with open(config_file, "w") as f:
         json.dump(config, f, separators=(",", ":"))
@@ -591,48 +621,69 @@ def _put_manifest(
             for s in dependencies
         ),
     ],
-        "annotations": {"org.opencontainers.image.description": spec.format()},
     }
 
-    image_ref_for_spec = image_ref.with_tag(default_tag(spec))
+    if annotations:
+        oci_manifest["annotations"] = annotations
 
     # Finally upload the manifest
-    upload_manifest_with_retry(image_ref_for_spec, oci_manifest=oci_manifest)
+    upload_manifest_with_retry(image_ref, oci_manifest=oci_manifest)
 
     # delete the config file
     os.unlink(config_file)
 
-    return image_ref_for_spec
+
+def _update_base_images(
+    *,
+    base_image: Optional[ImageReference],
+    target_image: ImageReference,
+    spec: spack.spec.Spec,
+    base_image_cache: Dict[str, Tuple[dict, dict]],
+):
+    """For a given spec and base image, copy the missing layers of the base image with matching
+    arch to the registry of the target image. If no base image is specified, create a dummy
+    manifest and config file."""
+    architecture = _archspec_to_gooarch(spec)
+    if architecture in base_image_cache:
+        return
+    if base_image is None:
+        base_image_cache[architecture] = (
+            default_manifest(),
+            default_config(architecture, "linux"),
+        )
+    else:
+        base_image_cache[architecture] = copy_missing_layers_with_retry(
+            base_image, target_image, architecture
+        )
 
 
 def _push_oci(
-    args,
-    image_ref: ImageReference,
+    *,
+    target_image: ImageReference,
+    base_image: Optional[ImageReference],
     installed_specs_with_deps: List[Spec],
     tmpdir: str,
     pool: multiprocessing.pool.Pool,
-) -> List[str]:
+    force: bool = False,
+) -> Tuple[List[str], Dict[str, Tuple[dict, dict]], Dict[str, spack.oci.oci.Blob]]:
     """Push specs to an OCI registry
 
     Args:
-        args: The command line arguments.
-        image_ref: The image reference.
+        target_image: The target OCI image
+        base_image: Optional base image, which will be copied to the target registry.
         installed_specs_with_deps: The installed specs to push, excluding externals,
             including deps, ordered from roots to leaves.
+        force: Whether to overwrite existing layers and manifests in the buildcache.
 
     Returns:
-        List[str]: The list of skipped specs (already in the buildcache).
+        A tuple consisting of the list of skipped specs already in the build cache,
+        a dictionary mapping architectures to base image manifests and configs,
+        and a dictionary mapping each spec's dag hash to a blob.
     """
 
     # Reverse the order
     installed_specs_with_deps = list(reversed(installed_specs_with_deps))
 
-    # The base image to use for the package. When not set, we use
-    # the OCI registry only for storage, and do not use any base image.
-    base_image_ref: Optional[ImageReference] = (
-        ImageReference.from_string(args.base_image) if args.base_image else None
-    )
-
     # Spec dag hash -> blob
     checksums: Dict[str, spack.oci.oci.Blob] = {}
 
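The `_update_base_images` helper introduced above is essentially a per-architecture memoization: base-image details are fetched (or synthesized) at most once per architecture and stored in a caller-owned cache dict. A minimal sketch of that pattern, with stand-in values for `default_manifest()`/`default_config()` (hypothetical contents, not Spack's real ones):

```python
from typing import Dict, Tuple

calls = []  # track how often real work happens, for illustration

def update_base_image_cache(arch: str, cache: Dict[str, Tuple[dict, dict]]) -> None:
    """Compute a (manifest, config) pair for an architecture at most once,
    storing it in a caller-owned cache -- the same shape as _update_base_images."""
    if arch in cache:
        return  # already created/copied for this architecture
    calls.append(arch)
    # Stand-ins for default_manifest() / default_config() in the real code:
    cache[arch] = ({"schemaVersion": 2}, {"architecture": arch, "os": "linux"})

cache: Dict[str, Tuple[dict, dict]] = {}
update_base_image_cache("amd64", cache)
update_base_image_cache("amd64", cache)  # cached: no extra work
update_base_image_cache("arm64", cache)
print(calls)  # ['amd64', 'arm64']
```

Passing the cache in explicitly (rather than hiding it in a global or `functools.lru_cache`) is what lets `push_fn` reuse the dict returned by `_push_oci` when it later builds the combined `--tag` manifest.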
@@ -642,11 +693,11 @@ def _push_oci(
     # Specs not uploaded because they already exist
     skipped = []
 
-    if not args.force:
+    if not force:
         tty.info("Checking for existing specs in the buildcache")
         to_be_uploaded = []
 
-        tags_to_check = (image_ref.with_tag(default_tag(s)) for s in installed_specs_with_deps)
+        tags_to_check = (target_image.with_tag(default_tag(s)) for s in installed_specs_with_deps)
         available_blobs = pool.map(_get_spack_binary_blob, tags_to_check)
 
         for spec, maybe_blob in zip(installed_specs_with_deps, available_blobs):
@@ -659,46 +710,63 @@ def _push_oci(
         to_be_uploaded = installed_specs_with_deps

     if not to_be_uploaded:
-        return skipped
+        return skipped, base_images, checksums

     tty.info(
-        f"{len(to_be_uploaded)} specs need to be pushed to {image_ref.domain}/{image_ref.name}"
+        f"{len(to_be_uploaded)} specs need to be pushed to "
+        f"{target_image.domain}/{target_image.name}"
     )

     # Upload blobs
     new_blobs = pool.starmap(
-        _push_single_spack_binary_blob, ((image_ref, spec, tmpdir) for spec in to_be_uploaded)
+        _push_single_spack_binary_blob, ((target_image, spec, tmpdir) for spec in to_be_uploaded)
     )

     # And update the spec to blob mapping
     for spec, blob in zip(to_be_uploaded, new_blobs):
         checksums[spec.dag_hash()] = blob

-    # Copy base image layers, probably fine to do sequentially.
+    # Copy base images if necessary
     for spec in to_be_uploaded:
-        architecture = _archspec_to_gooarch(spec)
-
-        # Get base image details, if we don't have them yet
-        if architecture in base_images:
-            continue
-
-        if base_image_ref is None:
-            base_images[architecture] = (default_manifest(), default_config(architecture, "linux"))
-        else:
-            base_images[architecture] = copy_missing_layers_with_retry(
-                base_image_ref, image_ref, architecture
-            )
+        _update_base_images(
+            base_image=base_image,
+            target_image=target_image,
+            spec=spec,
+            base_image_cache=base_images,
+        )
+
+    def extra_config(spec: Spec):
+        spec_dict = spec.to_dict(hash=ht.dag_hash)
+        spec_dict["buildcache_layout_version"] = 1
+        spec_dict["binary_cache_checksum"] = {
+            "hash_algorithm": "sha256",
+            "hash": checksums[spec.dag_hash()].compressed_digest.digest,
+        }
+        return spec_dict

     # Upload manifests
     tty.info("Uploading manifests")
-    pushed_image_ref = pool.starmap(
+    pool.starmap(
         _put_manifest,
-        ((base_images, checksums, spec, image_ref, tmpdir) for spec in to_be_uploaded),
+        (
+            (
+                base_images,
+                checksums,
+                target_image.with_tag(default_tag(spec)),
+                tmpdir,
+                extra_config(spec),
+                {"org.opencontainers.image.description": spec.format()},
+                spec,
+            )
+            for spec in to_be_uploaded
+        ),
     )

     # Print the image names of the top-level specs
-    for spec, ref in zip(to_be_uploaded, pushed_image_ref):
-        tty.info(f"Pushed {_format_spec(spec)} to {ref}")
+    for spec in to_be_uploaded:
+        tty.info(f"Pushed {_format_spec(spec)} to {target_image.with_tag(default_tag(spec))}")

-    return skipped
+    return skipped, base_images, checksums


 def _config_from_tag(image_ref: ImageReference, tag: str) -> Optional[dict]:
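The refactored `_push_oci` above builds one argument tuple per spec and fans them out with `pool.starmap`. A minimal standalone sketch of that pattern, using the standard-library `multiprocessing.dummy` pool and a hypothetical `push_manifest` stand-in (not Spack's actual `_put_manifest`):

```python
import multiprocessing.dummy as mp  # thread-backed Pool with the same starmap API


def push_manifest(image_ref: str, tag: str, config: dict) -> str:
    # Stand-in for an upload: just report what would be pushed.
    return f"{image_ref}:{tag} (layout v{config['buildcache_layout_version']})"


specs = ["libelf", "zlib"]
with mp.Pool(2) as pool:
    # One argument tuple per spec, exactly like the (_put_manifest, (...)) call above.
    results = pool.starmap(
        push_manifest,
        (("example.com/image", s, {"buildcache_layout_version": 1}) for s in specs),
    )
print(results)
```

`starmap` preserves input order, which is what lets the real code later `zip` specs with their results.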
@@ -1949,9 +1949,7 @@ def pytest_runtest_setup(item):
         pytest.skip(*not_on_windows_marker.args)


-@pytest.fixture(scope="function")
-def disable_parallel_buildcache_push(monkeypatch):
-    class MockPool:
+class MockPool:
     def map(self, func, args):
         return [func(a) for a in args]

@@ -1964,6 +1962,9 @@ def __enter__(self):
     def __exit__(self, *args):
         pass


+@pytest.fixture(scope="function")
+def disable_parallel_buildcache_push(monkeypatch):
     monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", MockPool)


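The `MockPool` hoisted to module scope above swaps the real multiprocessing pool for serial execution in tests. A self-contained sketch of the same idea (`map` and the context-manager methods mirror the diff; `starmap` is the obvious companion, assumed here):

```python
class MockPool:
    """Drop-in stand-in for multiprocessing.Pool that runs work serially."""

    def map(self, func, args):
        return [func(a) for a in args]

    def starmap(self, func, args):
        return [func(*a) for a in args]

    def __enter__(self):
        return self

    def __exit__(self, *args):
        pass


# Usage: same shape as a real pool, but deterministic and single-threaded.
with MockPool() as pool:
    doubled = pool.map(lambda x: 2 * x, [1, 2, 3])
    summed = pool.starmap(lambda a, b: a + b, [(1, 2), (3, 4)])
```

Patching a `_make_pool`-style factory to return this class keeps production code untouched while making test failures reproducible.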
@@ -11,6 +11,7 @@
 import os
 from contextlib import contextmanager

+import spack.environment as ev
 import spack.oci.opener
 from spack.binary_distribution import gzip_compressed_tarfile
 from spack.main import SpackCommand
@@ -20,6 +21,8 @@

 buildcache = SpackCommand("buildcache")
 mirror = SpackCommand("mirror")
+env = SpackCommand("env")
+install = SpackCommand("install")


 @contextmanager
@@ -53,6 +56,46 @@ def test_buildcache_push_command(mutable_database, disable_parallel_buildcache_push):
     assert os.path.exists(os.path.join(spec.prefix, "bin", "mpileaks"))


+def test_buildcache_tag(
+    install_mockery, mock_fetch, mutable_mock_env_path, disable_parallel_buildcache_push
+):
+    """Tests whether we can create an OCI image from a full environment with multiple roots."""
+    env("create", "test")
+    with ev.read("test"):
+        install("--add", "libelf")
+        install("--add", "trivial-install-test-package")
+
+    registry = InMemoryOCIRegistry("example.com")
+
+    with oci_servers(registry):
+        mirror("add", "oci-test", "oci://example.com/image")
+
+        with ev.read("test"):
+            buildcache("push", "--tag", "full_env", "oci-test")
+
+        name = ImageReference.from_string("example.com/image:full_env")
+
+        with ev.read("test") as e:
+            specs = e.all_specs()
+
+        manifest, config = get_manifest_and_config(name)
+
+        # without a base image, we should have one layer per spec
+        assert len(manifest["layers"]) == len(specs)
+
+        # Now create yet another tag, but with just a single selected spec as root. This should
+        # also test the case where Spack doesn't have to upload any binaries, it just has to create
+        # a new tag.
+        libelf = next(s for s in specs if s.name == "libelf")
+        with ev.read("test"):
+            # Get libelf spec
+            buildcache("push", "--tag", "single_spec", "oci-test", libelf.format("libelf{/hash}"))
+
+        name = ImageReference.from_string("example.com/image:single_spec")
+        manifest, config = get_manifest_and_config(name)
+        assert len(manifest["layers"]) == 1
+
+
 def test_buildcache_push_with_base_image_command(
     mutable_database, tmpdir, disable_parallel_buildcache_push
 ):
|
@ -571,7 +571,7 @@ _spack_buildcache() {
|
|||||||
_spack_buildcache_push() {
|
_spack_buildcache_push() {
|
||||||
if $list_options
|
if $list_options
|
||||||
then
|
then
|
||||||
SPACK_COMPREPLY="-h --help -f --force --allow-root -a --unsigned -u --signed --key -k --update-index --rebuild-index --spec-file --only --fail-fast --base-image -j --jobs"
|
SPACK_COMPREPLY="-h --help -f --force --allow-root -a --unsigned -u --signed --key -k --update-index --rebuild-index --spec-file --only --fail-fast --base-image --tag -t -j --jobs"
|
||||||
else
|
else
|
||||||
_mirrors
|
_mirrors
|
||||||
fi
|
fi
|
||||||
@ -580,7 +580,7 @@ _spack_buildcache_push() {
|
|||||||
_spack_buildcache_create() {
|
_spack_buildcache_create() {
|
||||||
if $list_options
|
if $list_options
|
||||||
then
|
then
|
||||||
SPACK_COMPREPLY="-h --help -f --force --allow-root -a --unsigned -u --signed --key -k --update-index --rebuild-index --spec-file --only --fail-fast --base-image -j --jobs"
|
SPACK_COMPREPLY="-h --help -f --force --allow-root -a --unsigned -u --signed --key -k --update-index --rebuild-index --spec-file --only --fail-fast --base-image --tag -t -j --jobs"
|
||||||
else
|
else
|
||||||
_mirrors
|
_mirrors
|
||||||
fi
|
fi
|
||||||
|
@@ -697,7 +697,7 @@ complete -c spack -n '__fish_spack_using_command buildcache' -s h -l help -f -a help
 complete -c spack -n '__fish_spack_using_command buildcache' -s h -l help -d 'show this help message and exit'

 # spack buildcache push
-set -g __fish_spack_optspecs_spack_buildcache_push h/help f/force a/allow-root u/unsigned signed k/key= update-index spec-file= only= fail-fast base-image= j/jobs=
+set -g __fish_spack_optspecs_spack_buildcache_push h/help f/force a/allow-root u/unsigned signed k/key= update-index spec-file= only= fail-fast base-image= t/tag= j/jobs=
 complete -c spack -n '__fish_spack_using_command_pos_remainder 1 buildcache push' -f -k -a '(__fish_spack_specs)'
 complete -c spack -n '__fish_spack_using_command buildcache push' -s h -l help -f -a help
 complete -c spack -n '__fish_spack_using_command buildcache push' -s h -l help -d 'show this help message and exit'
@@ -720,12 +720,14 @@ complete -c spack -n '__fish_spack_using_command buildcache push' -l only -r -d
 complete -c spack -n '__fish_spack_using_command buildcache push' -l fail-fast -f -a fail_fast
 complete -c spack -n '__fish_spack_using_command buildcache push' -l fail-fast -d 'stop pushing on first failure (default is best effort)'
 complete -c spack -n '__fish_spack_using_command buildcache push' -l base-image -r -f -a base_image
-complete -c spack -n '__fish_spack_using_command buildcache push' -l base-image -r -d 'specify the base image for the buildcache. '
+complete -c spack -n '__fish_spack_using_command buildcache push' -l base-image -r -d 'specify the base image for the buildcache'
+complete -c spack -n '__fish_spack_using_command buildcache push' -l tag -s t -r -f -a tag
+complete -c spack -n '__fish_spack_using_command buildcache push' -l tag -s t -r -d 'when pushing to an OCI registry, tag an image containing all root specs and their runtime dependencies'
 complete -c spack -n '__fish_spack_using_command buildcache push' -s j -l jobs -r -f -a jobs
 complete -c spack -n '__fish_spack_using_command buildcache push' -s j -l jobs -r -d 'explicitly set number of parallel jobs'

 # spack buildcache create
-set -g __fish_spack_optspecs_spack_buildcache_create h/help f/force a/allow-root u/unsigned signed k/key= update-index spec-file= only= fail-fast base-image= j/jobs=
+set -g __fish_spack_optspecs_spack_buildcache_create h/help f/force a/allow-root u/unsigned signed k/key= update-index spec-file= only= fail-fast base-image= t/tag= j/jobs=
 complete -c spack -n '__fish_spack_using_command_pos_remainder 1 buildcache create' -f -k -a '(__fish_spack_specs)'
 complete -c spack -n '__fish_spack_using_command buildcache create' -s h -l help -f -a help
 complete -c spack -n '__fish_spack_using_command buildcache create' -s h -l help -d 'show this help message and exit'
@@ -748,7 +750,9 @@ complete -c spack -n '__fish_spack_using_command buildcache create' -l only -r -
 complete -c spack -n '__fish_spack_using_command buildcache create' -l fail-fast -f -a fail_fast
 complete -c spack -n '__fish_spack_using_command buildcache create' -l fail-fast -d 'stop pushing on first failure (default is best effort)'
 complete -c spack -n '__fish_spack_using_command buildcache create' -l base-image -r -f -a base_image
-complete -c spack -n '__fish_spack_using_command buildcache create' -l base-image -r -d 'specify the base image for the buildcache. '
+complete -c spack -n '__fish_spack_using_command buildcache create' -l base-image -r -d 'specify the base image for the buildcache'
+complete -c spack -n '__fish_spack_using_command buildcache create' -l tag -s t -r -f -a tag
+complete -c spack -n '__fish_spack_using_command buildcache create' -l tag -s t -r -d 'when pushing to an OCI registry, tag an image containing all root specs and their runtime dependencies'
 complete -c spack -n '__fish_spack_using_command buildcache create' -s j -l jobs -r -f -a jobs
 complete -c spack -n '__fish_spack_using_command buildcache create' -s j -l jobs -r -d 'explicitly set number of parallel jobs'