Compare commits
4 Commits
develop-20...bugfix/inv

Commits: c899dcac5b, 94dc25ecfa, aded859856, 2e3fc288ae
.github/workflows/bootstrap.yml (vendored, 8 changes)

@@ -84,11 +84,13 @@ jobs:
       - name: Setup macOS
         if: ${{ matrix.runner != 'ubuntu-latest' }}
         run: |
-          brew install tree gawk
-          sudo rm -rf $(command -v gpg gpg2)
+          brew install tree
+          # Remove GnuPG since we want to bootstrap it
+          sudo rm -rf /usr/local/bin/gpg
       - name: Setup Ubuntu
         if: ${{ matrix.runner == 'ubuntu-latest' }}
-        run: sudo rm -rf $(command -v gpg gpg2 patchelf)
+        run: |
+          sudo rm -rf $(which gpg) $(which gpg2) $(which patchelf)
       - name: Checkout
         uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
         with:
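Both versions of these setup steps exist to delete preinstalled tools so the workflow genuinely exercises Spack's bootstrapping. As a rough Python analogue of what the shell lines resolve (a sketch, not part of the workflow):

import shutil

# `command -v gpg` and `which gpg` both map a command name to a path on
# PATH, much as shutil.which does; the steps above remove what they find.
for tool in ("gpg", "gpg2", "patchelf"):
    print(tool, shutil.which(tool) or "not preinstalled")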
@@ -32,7 +32,7 @@

 Spack is a multi-platform package manager that builds and installs
 multiple versions and configurations of software. It works on Linux,
-macOS, Windows, and many supercomputers. Spack is non-destructive: installing a
+macOS, and many supercomputers. Spack is non-destructive: installing a
 new version of a package does not break existing installations, so many
 configurations of the same package can coexist.
etc/spack/defaults/cray/modules.yaml (new file, 16 lines)

@@ -0,0 +1,16 @@
# -------------------------------------------------------------------------
# This is the default configuration for Spack's module file generation.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
#   $SPACK_ROOT/etc/spack/modules.yaml
#
# Per-user settings (overrides default and site settings):
#   ~/.spack/modules.yaml
# -------------------------------------------------------------------------
modules: {}
etc/spack/defaults/cray/packages.yaml (new file, 19 lines)

@@ -0,0 +1,19 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
#   $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
#   ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
  all:
    providers:
      iconv: [glibc, musl, libiconv]
etc/spack/defaults/linux/packages.yaml (new file, 19 lines)

@@ -0,0 +1,19 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
#   $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
#   ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
  all:
    providers:
      iconv: [glibc, musl, libiconv]
@@ -1433,12 +1433,22 @@ the reserved keywords ``platform``, ``os`` and ``target``:

   $ spack install libelf os=ubuntu18.04
   $ spack install libelf target=broadwell

or together by using the reserved keyword ``arch``:

.. code-block:: console

   $ spack install libelf arch=cray-CNL10-haswell

Normally users don't have to bother specifying the architecture if they
are installing software for their current host, as in that case the
values will be detected automatically. If you need fine-grained control
over which packages use which targets (or over *all* packages' default
target), see :ref:`package-preferences`.

.. admonition:: Cray machines

   The situation is a little bit different for Cray machines, and a detailed
   explanation of how the architecture can be set on them can be found at :ref:`cray-support`.

.. _support-for-microarchitectures:
@@ -11,8 +11,7 @@ Chaining Spack Installations

You can point your Spack installation to another installation to use any
packages that are installed there. To register the other Spack instance,
-you can add it as an entry to ``upstreams.yaml`` at any of the
-:ref:`configuration-scopes`:
+you can add it as an entry to ``upstreams.yaml``:

.. code-block:: yaml

@@ -23,8 +22,7 @@ you can add it as an entry to ``upstreams.yaml`` at any of the
       install_tree: /path/to/another/spack/opt/spack

 ``install_tree`` must point to the ``opt/spack`` directory inside of the
-Spack base directory, or the location of the ``install_tree`` defined
-in :ref:`config.yaml <config-yaml>`.
+Spack base directory.

Once the upstream Spack instance has been added, ``spack find`` will
automatically check the upstream instance when querying installed packages,
@@ -1364,6 +1364,187 @@ This will write the private key to the file `dinosaur.priv`.

or for help on an issue or the Spack slack.


.. _cray-support:

-------------
Spack on Cray
-------------

Spack differs slightly when used on a Cray system. The architecture spec
can differentiate between the front-end and back-end processor and operating system.
For example, on Edison at NERSC, the back-end target processor
is "Ivy Bridge", so you can specify to use the back-end this way:

.. code-block:: console

   $ spack install zlib target=ivybridge

You can also use the operating system to build against the back-end:

.. code-block:: console

   $ spack install zlib os=CNL10

Notice that the name includes both the operating system name and the major
version number concatenated together.

Alternatively, if you want to build something for the front-end,
you can specify the front-end target processor. The processor for a login node
on Edison is "Sandy Bridge", so we specify it on the command line like so:

.. code-block:: console

   $ spack install zlib target=sandybridge

And the front-end operating system is:

.. code-block:: console

   $ spack install zlib os=SuSE11

^^^^^^^^^^^^^^^^^^^^^^^
Cray compiler detection
^^^^^^^^^^^^^^^^^^^^^^^

Spack can detect compilers using two methods. For the front-end, we treat
everything the same. The difference lies in back-end compiler detection.
Back-end compiler detection is made via the Tcl ``module avail`` command.
Once it detects the compiler, it writes the appropriate PrgEnv and compiler
module name to compilers.yaml and sets the paths to each compiler with Cray's
compiler wrapper names (i.e. cc, CC, ftn). During build time, Spack will load
the correct PrgEnv and compiler module and will call the appropriate wrapper.

The compilers.yaml config file will also differ. There is a
modules section that is filled with the compiler's Programming Environment
and module name. On other systems, this field is empty []:

.. code-block:: yaml

   - compiler:
       modules:
         - PrgEnv-intel
         - intel/15.0.109

As mentioned earlier, the compiler paths will look different on a Cray system.
Since most compilers are invoked using cc, CC and ftn, the paths for each
compiler are replaced with their respective Cray compiler wrapper names:

.. code-block:: yaml

   paths:
     cc: cc
     cxx: CC
     f77: ftn
     fc: ftn

As opposed to an explicit path to the compiler executable, this allows Spack
to call the Cray compiler wrappers during build time.

For more on compiler configuration, check out :ref:`compiler-config`.

Spack sets the default Cray link type to dynamic, to better match other
platforms. Individual packages can enable static linking (which is the
default outside of Spack on Cray systems) using the ``-static`` flag.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting defaults and using Cray modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you want to use default compilers for each PrgEnv and also be able
to load Cray external modules, you will need to set up a ``packages.yaml``.

Here's an example of an external configuration for Cray modules:

.. code-block:: yaml

   packages:
     mpich:
       externals:
       - spec: "mpich@7.3.1%gcc@5.2.0 arch=cray_xc-CNL10-haswell"
         modules:
         - cray-mpich
       - spec: "mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-CNL10-haswell"
         modules:
         - cray-mpich
     all:
       providers:
         mpi: [mpich]

This tells Spack to load the cray-mpich module into the environment for
whatever package depends on mpi. You can then use whatever environment
variables, libraries, etc., are brought into the environment via module load.

.. note::

   For Cray-provided packages, it is best to use ``modules:`` instead of ``prefix:``
   in ``packages.yaml``, because the Cray Programming Environment heavily relies on
   modules (e.g., loading the ``cray-mpich`` module adds MPI libraries to the
   compiler wrapper link line).

You can set the default compiler that Spack can use for each compiler type.
If you want to use the Cray defaults, then set them under ``all:`` in packages.yaml.
In the compiler field, set the compiler specs in your order of preference.
Whenever you build with that compiler type, Spack will concretize to that version.

Here is an example of a full packages.yaml used at NERSC:

.. code-block:: yaml

   packages:
     mpich:
       externals:
       - spec: "mpich@7.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
         modules:
         - cray-mpich
       - spec: "mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-SuSE11-ivybridge"
         modules:
         - cray-mpich
       buildable: False
     netcdf:
       externals:
       - spec: "netcdf@4.3.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
         modules:
         - cray-netcdf
       - spec: "netcdf@4.3.3.1%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge"
         modules:
         - cray-netcdf
       buildable: False
     hdf5:
       externals:
       - spec: "hdf5@1.8.14%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
         modules:
         - cray-hdf5
       - spec: "hdf5@1.8.14%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge"
         modules:
         - cray-hdf5
       buildable: False
     all:
       compiler: [gcc@5.2.0, intel@16.0.0.109]
       providers:
         mpi: [mpich]

Here we tell Spack that whenever we want to build with gcc, use version 5.2.0, and
whenever we want to build with Intel compilers, use version 16.0.0.109. We add a spec
for each compiler type for each Cray module. This ensures that for each
compiler on our system we can use that external module.

For more on external packages, check out the section :ref:`sec-external-packages`.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using Linux containers on Cray machines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Spack uses environment variables particular to the Cray programming
environment to determine which systems are Cray platforms. These
environment variables may be propagated into containers that are not
using the Cray programming environment.

To ensure that Spack does not autodetect the Cray programming
environment, unset the environment variable ``MODULEPATH``. This
will cause Spack to treat a Linux container on a Cray system as a base
Linux distro.
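Since the passage above stops at naming the environment variable, here is a minimal Python sketch of the same step, assuming Spack is driven from a Python process inside the container:

import os

# Drop MODULEPATH before invoking Spack so Cray platform detection
# cannot trigger inside the container.
os.environ.pop("MODULEPATH", None)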
.. _windows_support:

----------------
@@ -766,6 +766,7 @@ def copy_tree(
     src: str,
     dest: str,
     symlinks: bool = True,
+    allow_broken_symlinks: bool = sys.platform != "win32",
     ignore: Optional[Callable[[str], bool]] = None,
     _permissions: bool = False,
 ):
@@ -788,6 +789,8 @@ def copy_tree(
         src (str): the directory to copy
         dest (str): the destination directory
         symlinks (bool): whether or not to preserve symlinks
+        allow_broken_symlinks (bool): whether or not to allow broken (dangling) symlinks,
+            On Windows, setting this to True will raise an exception. Defaults to true on unix.
         ignore (typing.Callable): function indicating which files to ignore
         _permissions (bool): for internal use only
@@ -795,6 +798,8 @@ def copy_tree(
         IOError: if *src* does not match any files or directories
         ValueError: if *src* is a parent directory of *dest*
     """
+    if allow_broken_symlinks and sys.platform == "win32":
+        raise llnl.util.symlink.SymlinkError("Cannot allow broken symlinks on Windows!")
     if _permissions:
         tty.debug("Installing {0} to {1}".format(src, dest))
     else:
@@ -867,14 +872,16 @@ def escaped_path(path):
             copy_mode(s, d)

     for target, d, s in links:
-        symlink(target, d)
+        symlink(target, d, allow_broken_symlinks=allow_broken_symlinks)
         if _permissions:
             set_install_permissions(d)
             copy_mode(s, d)


 @system_path_filter
-def install_tree(src, dest, symlinks=True, ignore=None):
+def install_tree(
+    src, dest, symlinks=True, ignore=None, allow_broken_symlinks=sys.platform != "win32"
+):
     """Recursively install an entire directory tree rooted at *src*.

     Same as :py:func:`copy_tree` with the addition of setting proper
@@ -885,12 +892,21 @@ def install_tree(src, dest, symlinks=True, ignore=None):
         dest (str): the destination directory
         symlinks (bool): whether or not to preserve symlinks
         ignore (typing.Callable): function indicating which files to ignore
+        allow_broken_symlinks (bool): whether or not to allow broken (dangling) symlinks,
+            On Windows, setting this to True will raise an exception.

     Raises:
         IOError: if *src* does not match any files or directories
         ValueError: if *src* is a parent directory of *dest*
     """
-    copy_tree(src, dest, symlinks=symlinks, ignore=ignore, _permissions=True)
+    copy_tree(
+        src,
+        dest,
+        symlinks=symlinks,
+        allow_broken_symlinks=allow_broken_symlinks,
+        ignore=ignore,
+        _permissions=True,
+    )


 @system_path_filter
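A minimal usage sketch of the new keyword, mirroring how installer code later in this diff calls it (the paths are hypothetical):

import sys

from llnl.util.filesystem import install_tree

# Permit dangling symlinks when copying a tree on POSIX hosts; passing
# True on Windows makes copy_tree raise SymlinkError.
install_tree(
    "/tmp/stage/src",  # hypothetical source
    "/tmp/prefix/share/pkg/src",  # hypothetical destination
    allow_broken_symlinks=(sys.platform != "win32"),
)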
@@ -8,7 +8,6 @@
 import subprocess
 import sys
 import tempfile
-from typing import Union

 from llnl.util import lang, tty

@@ -17,66 +16,92 @@
 if sys.platform == "win32":
     from win32file import CreateHardLink

+is_windows = sys.platform == "win32"
+

-def _windows_symlink(
-    src: str, dst: str, target_is_directory: bool = False, *, dir_fd: Union[int, None] = None
-):
-    """On Windows with System Administrator privileges this will be a normal symbolic link via
-    os.symlink. On Windows without privileges the link will be a junction for a directory and a
-    hardlink for a file. On Windows the various link types are:
-
-    Symbolic Link: A link to a file or directory on the same or different volume (drive letter) or
-    even to a remote file or directory (using UNC in its path). Need System Administrator
-    privileges to make these.
-
-    Hard Link: A link to a file on the same volume (drive letter) only. Every file (file's data)
-    has at least 1 hard link (file's name). But when this method creates a new hard link there will
-    be 2. Deleting all hard links effectively deletes the file. Don't need System Administrator
-    privileges.
-
-    Junction: A link to a directory on the same or different volume (drive letter) but not to a
-    remote directory. Don't need System Administrator privileges."""
-    source_path = os.path.normpath(src)
+def symlink(source_path: str, link_path: str, allow_broken_symlinks: bool = not is_windows):
+    """
+    Create a link.
+
+    On non-Windows and Windows with System Administrator
+    privileges this will be a normal symbolic link via
+    os.symlink.
+
+    On Windows without privileges the link will be a
+    junction for a directory and a hardlink for a file.
+    On Windows the various link types are:
+
+    Symbolic Link: A link to a file or directory on the
+    same or different volume (drive letter) or even to
+    a remote file or directory (using UNC in its path).
+    Need System Administrator privileges to make these.
+
+    Hard Link: A link to a file on the same volume (drive
+    letter) only. Every file (file's data) has at least 1
+    hard link (file's name). But when this method creates
+    a new hard link there will be 2. Deleting all hard
+    links effectively deletes the file. Don't need System
+    Administrator privileges.
+
+    Junction: A link to a directory on the same or different
+    volume (drive letter) but not to a remote directory. Don't
+    need System Administrator privileges.
+
+    Parameters:
+        source_path (str): The real file or directory that the link points to.
+            Must be absolute OR relative to the link.
+        link_path (str): The path where the link will exist.
+        allow_broken_symlinks (bool): On Linux or Mac, don't raise an exception if the source_path
+            doesn't exist. This will still raise an exception on Windows.
+    """
+    source_path = os.path.normpath(source_path)
     win_source_path = source_path
-    link_path = os.path.normpath(dst)
+    link_path = os.path.normpath(link_path)

-    # Perform basic checks to make sure symlinking will succeed
-    if os.path.lexists(link_path):
-        raise AlreadyExistsError(f"Link path ({link_path}) already exists. Cannot create link.")
+    # Never allow broken links on Windows.
+    if sys.platform == "win32" and allow_broken_symlinks:
+        raise ValueError("allow_broken_symlinks parameter cannot be True on Windows.")
+
+    if not allow_broken_symlinks:
+        # Perform basic checks to make sure symlinking will succeed
+        if os.path.lexists(link_path):
+            raise AlreadyExistsError(
+                f"Link path ({link_path}) already exists. Cannot create link."
+            )

     if not os.path.exists(source_path):
-        if os.path.isabs(source_path):
+        if os.path.isabs(source_path) and not allow_broken_symlinks:
             # An absolute source path that does not exist will result in a broken link.
             raise SymlinkError(
                 f"Source path ({source_path}) is absolute but does not exist. Resulting "
                 f"link would be broken so not making link."
             )
         else:
             # os.symlink can create a link when the given source path is relative to
             # the link path. Emulate this behavior and check to see if the source exists
             # relative to the link path ahead of link creation to prevent broken
             # links from being made.
             link_parent_dir = os.path.dirname(link_path)
             relative_path = os.path.join(link_parent_dir, source_path)
             if os.path.exists(relative_path):
                 # In order to work on windows, the source path needs to be modified to be
                 # relative because hardlink/junction dont resolve relative paths the same
                 # way as os.symlink. This is ignored on other operating systems.
                 win_source_path = relative_path
-            else:
+            elif not allow_broken_symlinks:
                 raise SymlinkError(
                     f"The source path ({source_path}) is not relative to the link path "
                     f"({link_path}). Resulting link would be broken so not making link."
                 )

     # Create the symlink
-    if not _windows_can_symlink():
+    if sys.platform == "win32" and not _windows_can_symlink():
         _windows_create_link(win_source_path, link_path)
     else:
         os.symlink(source_path, link_path, target_is_directory=os.path.isdir(source_path))


-def _windows_islink(path: str) -> bool:
+def islink(path: str) -> bool:
     """Override os.islink to give correct answer for spack logic.

     For Non-Windows: a link can be determined with the os.path.islink method.
@@ -244,7 +269,7 @@ def _windows_create_hard_link(path: str, link: str):
     CreateHardLink(link, path)


-def _windows_readlink(path: str, *, dir_fd=None):
+def readlink(path: str, *, dir_fd=None):
     """Spack utility to override of os.readlink method to work cross platform"""
     if _windows_is_hardlink(path):
         return _windows_read_hard_link(path)
@@ -313,16 +338,6 @@ def resolve_link_target_relative_to_the_link(link):
     return os.path.join(link_dir, target)


-if sys.platform == "win32":
-    symlink = _windows_symlink
-    readlink = _windows_readlink
-    islink = _windows_islink
-else:
-    symlink = os.symlink
-    readlink = os.readlink
-    islink = os.path.islink
-
-
 class SymlinkError(RuntimeError):
     """Exception class for errors raised while creating symlinks,
     junctions and hard links
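A sketch of the restored function's behavior on a POSIX host (the paths are hypothetical):

import os
import tempfile

from llnl.util.symlink import symlink

tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "data.txt")
open(target, "w").close()

# Absolute, existing source: a plain symlink via os.symlink.
symlink(target, os.path.join(tmp, "link-ok"))

# Dangling absolute source: only permitted when allow_broken_symlinks is
# True (the non-Windows default); with False this raises SymlinkError.
symlink(os.path.join(tmp, "missing.txt"), os.path.join(tmp, "link-dangling"))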
@@ -213,18 +213,15 @@ def _root_spec(spec_str: str) -> str:
     Args:
         spec_str: spec to be bootstrapped. Must be without compiler and target.
     """
-    # Add a compiler and platform requirement to the root spec.
+    # Add a compiler requirement to the root spec.
     platform = str(spack.platforms.host())
-
     if platform == "darwin":
         spec_str += " %apple-clang"
     elif platform == "windows":
         spec_str += " %msvc"
     elif platform == "linux":
         spec_str += " %gcc"
     elif platform == "freebsd":
         spec_str += " %clang"
-    spec_str += f" platform={platform}"

     target = archspec.cpu.host().family
     spec_str += f" target={target}"
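To make the change concrete, a hedged restatement of the restored logic (the function name and sample output are illustrative, not part of the diff):

import archspec.cpu

def root_spec_sketch(spec_str: str, platform: str) -> str:
    # Pin a compiler per platform, then the target family; after this
    # change, no platform= token is appended to the root spec.
    compilers = {"darwin": "%apple-clang", "windows": "%msvc",
                 "linux": "%gcc", "freebsd": "%clang"}
    if platform in compilers:
        spec_str += f" {compilers[platform]}"
    return spec_str + f" target={archspec.cpu.host().family}"

print(root_spec_sketch("gnupg", "linux"))  # e.g. "gnupg %gcc target=x86_64"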
@@ -91,7 +91,7 @@
 )
 from spack.util.executable import Executable
 from spack.util.log_parse import make_log_context, parse_log_events
-from spack.util.module_cmd import load_module, path_from_modules
+from spack.util.module_cmd import load_module, module, path_from_modules

 #
 # This can be set by the user to globally disable parallel builds.
@@ -190,6 +190,14 @@ def __call__(self, *args, **kwargs):
         return super().__call__(*args, **kwargs)


+def _on_cray():
+    host_platform = spack.platforms.host()
+    host_os = host_platform.operating_system("default_os")
+    on_cray = str(host_platform) == "cray"
+    using_cnl = re.match(r"cnl\d+", str(host_os))
+    return on_cray, using_cnl
+
+
 def clean_environment():
     # Stuff in here sanitizes the build environment to eliminate
     # anything the user has set that may interfere. We apply it immediately
@@ -233,6 +241,17 @@ def clean_environment():
         if varname.endswith("_ROOT") and varname != "SPACK_ROOT":
             env.unset(varname)

+    # On Cray "cluster" systems, unset CRAY_LD_LIBRARY_PATH to avoid
+    # interference with Spack dependencies.
+    # CNL requires these variables to be set (or at least some of them,
+    # depending on the CNL version).
+    on_cray, using_cnl = _on_cray()
+    if on_cray and not using_cnl:
+        env.unset("CRAY_LD_LIBRARY_PATH")
+        for varname in os.environ.keys():
+            if "PKGCONF" in varname:
+                env.unset(varname)
+
     # Unset the following variables because they can affect installation of
     # Autotools and CMake packages.
     build_system_vars = [
@@ -362,7 +381,11 @@ def set_compiler_environment_variables(pkg, env):
         _add_werror_handling(keep_werror, env)

     # Set the target parameters that the compiler will add
-    isa_arg = spec.architecture.target.optimization_flags(compiler)
+    # Don't set on cray platform because the targeting module handles this
+    if spec.satisfies("platform=cray"):
+        isa_arg = ""
+    else:
+        isa_arg = spec.architecture.target.optimization_flags(compiler)
     env.set("SPACK_TARGET_ARGS", isa_arg)

     # Trap spack-tracked compiler flags as appropriate.
@@ -810,6 +833,14 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
     for mod in pkg.compiler.modules:
         load_module(mod)

+    # kludge to handle cray mpich and libsci being automatically loaded by
+    # PrgEnv modules on cray platform. Module unload does no damage when
+    # unnecessary
+    on_cray, _ = _on_cray()
+    if on_cray and not dirty:
+        for mod in ["cray-mpich", "cray-libsci"]:
+            module("unload", mod)
+
     if target and target.module_name:
         load_module(target.module_name)
@@ -110,8 +110,9 @@ def cuda_flags(arch_list):
     # From the NVIDIA install guide we know of conflicts for particular
     # platforms (linux, darwin), architectures (x86, powerpc) and compilers
     # (gcc, clang). We don't restrict %gcc and %clang conflicts to
-    # platform=linux, since they may apply to platform=darwin. We currently
-    # do not provide conflicts for platform=darwin with %apple-clang.
+    # platform=linux, since they should also apply to platform=cray, and may
+    # apply to platform=darwin. We currently do not provide conflicts for
+    # platform=darwin with %apple-clang.

     # Linux x86_64 compiler conflicts from here:
     # https://gist.github.com/ax3l/9489132
@@ -846,7 +846,6 @@ def scalapack_libs(self):
             "^mpich@2:" in spec_root
             or "^cray-mpich" in spec_root
             or "^mvapich2" in spec_root
             or "^mvapich" in spec_root
             or "^intel-mpi" in spec_root
             or "^intel-oneapi-mpi" in spec_root
             or "^intel-parallel-studio" in spec_root
@@ -937,15 +936,32 @@ def mpi_setup_dependent_build_environment(self, env, dependent_spec, compilers_o
             "I_MPI_ROOT": self.normalize_path("mpi"),
         }

-        compiler_wrapper_commands = self.mpi_compiler_wrappers
-        wrapper_vars.update(
-            {
-                "MPICC": compiler_wrapper_commands["MPICC"],
-                "MPICXX": compiler_wrapper_commands["MPICXX"],
-                "MPIF77": compiler_wrapper_commands["MPIF77"],
-                "MPIF90": compiler_wrapper_commands["MPIF90"],
-            }
-        )
+        # CAUTION - SIMILAR code in:
+        #   var/spack/repos/builtin/packages/mpich/package.py
+        #   var/spack/repos/builtin/packages/openmpi/package.py
+        #   var/spack/repos/builtin/packages/mvapich2/package.py
+        #
+        # On Cray, the regular compiler wrappers *are* the MPI wrappers.
+        if "platform=cray" in self.spec:
+            # TODO: Confirm
+            wrapper_vars.update(
+                {
+                    "MPICC": compilers_of_client["CC"],
+                    "MPICXX": compilers_of_client["CXX"],
+                    "MPIF77": compilers_of_client["F77"],
+                    "MPIF90": compilers_of_client["F90"],
+                }
+            )
+        else:
+            compiler_wrapper_commands = self.mpi_compiler_wrappers
+            wrapper_vars.update(
+                {
+                    "MPICC": compiler_wrapper_commands["MPICC"],
+                    "MPICXX": compiler_wrapper_commands["MPICXX"],
+                    "MPIF77": compiler_wrapper_commands["MPIF77"],
+                    "MPIF90": compiler_wrapper_commands["MPIF90"],
+                }
+            )

         # Ensure that the directory containing the compiler wrappers is in the
         # PATH. Spack packages add `prefix.bin` to their dependents' paths,
@@ -24,6 +24,7 @@ class MSBuildPackage(spack.package_base.PackageBase):
     build_system("msbuild")
     conflicts("platform=linux", when="build_system=msbuild")
     conflicts("platform=darwin", when="build_system=msbuild")
+    conflicts("platform=cray", when="build_system=msbuild")


 @spack.builder.builder("msbuild")
@@ -24,6 +24,7 @@ class NMakePackage(spack.package_base.PackageBase):
     build_system("nmake")
     conflicts("platform=linux", when="build_system=nmake")
     conflicts("platform=darwin", when="build_system=nmake")
+    conflicts("platform=cray", when="build_system=nmake")


 @spack.builder.builder("nmake")
@@ -36,8 +36,9 @@ class IntelOneApiPackage(Package):
         "target=ppc64:",
         "target=ppc64le:",
         "target=aarch64:",
-        "platform=darwin",
-        "platform=windows",
+        "platform=darwin:",
+        "platform=cray:",
+        "platform=windows:",
     ]:
         conflicts(c, msg="This package is only available for x86_64 and Linux")
@@ -106,8 +106,7 @@ def clean(parser, args):

     # Then do the cleaning falling through the cases
     if args.specs:
-        specs = spack.cmd.parse_specs(args.specs, concretize=False)
-        specs = list(spack.cmd.matching_spec_from_env(x) for x in specs)
+        specs = spack.cmd.parse_specs(args.specs, concretize=True)
         for spec in specs:
             msg = "Cleaning build stage [{0}]"
             tty.msg(msg.format(spec.short_spec))
@@ -50,7 +50,7 @@
     @B{++}, @r{--}, @r{~~}, @B{==}  propagate variants to package dependencies

 architecture variants:
-    @m{platform=platform}   linux, darwin, freebsd, windows
+    @m{platform=platform}   linux, darwin, cray, etc.
     @m{os=operating_system} specific <operating_system>
     @m{target=target}       specific <target> processor
     @m{arch=platform-os-target} shortcut for all three above
@@ -61,6 +61,7 @@ def install_kwargs_from_args(args):
         "dependencies_use_cache": cache_opt(args.use_cache, dep_use_bc),
         "dependencies_cache_only": cache_opt(args.cache_only, dep_use_bc),
         "include_build_deps": args.include_build_deps,
+        "explicit": True,  # Use true as a default for install command
         "stop_at": args.until,
         "unsigned": args.unsigned,
         "install_deps": ("dependencies" in args.things_to_install),
@@ -472,7 +473,6 @@ def install_without_active_env(args, install_kwargs, reporter_factory):
         require_user_confirmation_for_overwrite(concrete_specs, args)
         install_kwargs["overwrite"] = [spec.dag_hash() for spec in concrete_specs]

-    installs = [s.package for s in concrete_specs]
-    install_kwargs["explicit"] = [s.dag_hash() for s in concrete_specs]
-    builder = PackageInstaller(installs, install_kwargs)
+    installs = [(s.package, install_kwargs) for s in concrete_specs]
+    builder = PackageInstaller(installs)
     builder.install()
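The shape change in the installer's input is the crux here; a small stand-alone demo (the package names and options are stand-ins, not Spack objects):

packages = ["pkg-a", "pkg-b"]
install_kwargs = {"explicit": True, "unsigned": False}

# develop-20 shape: parallel arguments, one shared options dict.
old_style = (packages, install_kwargs)

# reverted shape: the options dict travels with each package.
new_style = [(pkg, dict(install_kwargs)) for pkg in packages]
print(new_style)  # [('pkg-a', {...}), ('pkg-b', {...})]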
@@ -695,6 +695,10 @@ def compiler_environment(self):
         try:
             # load modules and set env variables
             for module in self.modules:
+                # On cray, mic-knl module cannot be loaded without cce module
+                # See: https://github.com/spack/spack/issues/3153
+                if os.environ.get("CRAY_CPU_TARGET") == "mic-knl":
+                    spack.util.module_cmd.load_module("cce")
                 spack.util.module_cmd.load_module(module)

             # apply other compiler environment changes
@@ -156,7 +156,15 @@ def get_compiler_config_from_packages(
 def _compiler_config_from_package_config(config):
     compilers = []
     for entry in config:
-        compiler = _compiler_config_from_external(entry)
+        try:
+            compiler = _compiler_config_from_external(entry)
+        except Exception as e:
+            msg = "Reading compiler from packages config section failed\n"
+            msg += f"  Compiler: {entry.get('spec', None)}\n"
+            msg += f"  Prefix: {entry.get('prefix', None)}\n"
+            msg += f"  Failure: {e}"
+            warnings.warn(msg)
+            compiler = None
         if compiler:
             compilers.append(compiler)

@@ -220,10 +228,10 @@ def _compiler_config_from_external(config):
         operating_system = host_platform.operating_system("default_os")
         target = host_platform.target("default_target").microarchitecture
     else:
-        target = spec.architecture.target
+        target = spec.target
         if not target:
-            target = spack.platforms.host().target("default_target")
-        target = target.microarchitecture
+            host_platform = spack.platforms.host()
+            target = host_platform.target("default_target").microarchitecture

         operating_system = spec.os
         if not operating_system:
@@ -1948,19 +1948,13 @@ def install_specs(self, specs: Optional[List[Spec]] = None, **install_args):
         specs = specs if specs is not None else roots

         # Extend the set of specs to overwrite with modified dev specs and their parents
-        overwrite: Set[str] = set()
-        overwrite.update(install_args.get("overwrite", []), self._dev_specs_that_need_overwrite())
-        install_args["overwrite"] = overwrite
-
-        explicit: Set[str] = set()
-        explicit.update(
-            install_args.get("explicit", []),
-            (s.dag_hash() for s in specs),
-            (s.dag_hash() for s in roots),
-        )
-        install_args["explicit"] = explicit
-
-        PackageInstaller([spec.package for spec in specs], install_args).install()
+        install_args["overwrite"] = (
+            install_args.get("overwrite", []) + self._dev_specs_that_need_overwrite()
+        )
+
+        installs = [(spec.package, {**install_args, "explicit": spec in roots}) for spec in specs]
+
+        PackageInstaller(installs).install()

     def all_specs_generator(self) -> Iterable[Spec]:
         """Returns a generator for all concrete specs"""
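A short demo of the per-spec override used above: dict unpacking copies the shared arguments and then pins "explicit" for each spec (the names here are hypothetical):

install_args = {"overwrite": [], "explicit": True}
roots = {"root-spec"}
for spec in ["root-spec", "dep-spec"]:
    per_spec = {**install_args, "explicit": spec in roots}
    print(spec, per_spec["explicit"])  # root-spec True, dep-spec False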
@@ -13,6 +13,7 @@
 import spack.config
 import spack.relocate
 from spack.util.elf import ElfParsingError, parse_elf
+from spack.util.executable import Executable


 def is_shared_library_elf(filepath):
@@ -140,7 +141,7 @@ def post_install(spec, explicit=None):
         return

     # Only enable on platforms using ELF.
-    if not spec.satisfies("platform=linux"):
+    if not spec.satisfies("platform=linux") and not spec.satisfies("platform=cray"):
         return

     # Disable this hook when bootstrapping, to avoid recursion.
@@ -148,9 +149,10 @@ def post_install(spec, explicit=None):
         return

     # Should failing to locate patchelf be a hard error?
-    patchelf = spack.relocate._patchelf()
-    if not patchelf:
+    patchelf_path = spack.relocate._patchelf()
+    if not patchelf_path:
         return
+    patchelf = Executable(patchelf_path)

     fixes = find_and_patch_sonames(spec.prefix, spec.package.non_bindable_shared_objects, patchelf)
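The hook now builds an Executable around the patchelf path instead of passing a raw path through; a sketch of the wrapper's call style (the path and the patchelf invocation are illustrative, not the hook's exact arguments):

from spack.util.executable import Executable

patchelf = Executable("/usr/bin/patchelf")  # hypothetical location
# Executable objects are callable like functions and can capture output:
print(patchelf("--print-soname", "/usr/lib64/libz.so.1", output=str))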
@@ -117,7 +117,7 @@ def post_install(spec, explicit=None):
         return

     # Only enable on platforms using ELF.
-    if not spec.satisfies("platform=linux"):
+    if not spec.satisfies("platform=linux") and not spec.satisfies("platform=cray"):
         return

     visit_directory_tree(spec.prefix, ElfFilesWithRPathVisitor())
@@ -600,7 +600,9 @@ def dump_packages(spec: "spack.spec.Spec", path: str) -> None:
         if node is spec:
             spack.repo.PATH.dump_provenance(node, dest_pkg_dir)
         elif source_pkg_dir:
-            fs.install_tree(source_pkg_dir, dest_pkg_dir)
+            fs.install_tree(
+                source_pkg_dir, dest_pkg_dir, allow_broken_symlinks=(sys.platform != "win32")
+            )


 def get_dependent_ids(spec: "spack.spec.Spec") -> List[str]:
@@ -759,8 +761,12 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
         if not self.pkg.spec.concrete:
             raise ValueError(f"{self.pkg.name} must have a concrete spec")

-        self.pkg.stop_before_phase = install_args.get("stop_before")  # type: ignore[attr-defined] # noqa: E501
-        self.pkg.last_phase = install_args.get("stop_at")  # type: ignore[attr-defined]
+        # Cache the package phase options with the explicit package,
+        # popping the options to ensure installation of associated
+        # dependencies is NOT affected by these options.
+
+        self.pkg.stop_before_phase = install_args.pop("stop_before", None)  # type: ignore[attr-defined] # noqa: E501
+        self.pkg.last_phase = install_args.pop("stop_at", None)  # type: ignore[attr-defined]

         # Cache the package id for convenience
         self.pkg_id = package_id(pkg.spec)
@@ -1070,17 +1076,19 @@ def flag_installed(self, installed: List[str]) -> None:

     @property
     def explicit(self) -> bool:
-        return self.pkg.spec.dag_hash() in self.request.install_args.get("explicit", [])
+        """The package was explicitly requested by the user."""
+        return self.is_root and self.request.install_args.get("explicit", True)

     @property
-    def is_build_request(self) -> bool:
-        """The package was requested directly"""
+    def is_root(self) -> bool:
+        """The package was requested directly, but may or may not be explicit
+        in an environment."""
         return self.pkg == self.request.pkg

     @property
     def use_cache(self) -> bool:
         _use_cache = True
-        if self.is_build_request:
+        if self.is_root:
             return self.request.install_args.get("package_use_cache", _use_cache)
         else:
             return self.request.install_args.get("dependencies_use_cache", _use_cache)
@@ -1088,7 +1096,7 @@ def use_cache(self) -> bool:
     @property
     def cache_only(self) -> bool:
         _cache_only = False
-        if self.is_build_request:
+        if self.is_root:
             return self.request.install_args.get("package_cache_only", _cache_only)
         else:
             return self.request.install_args.get("dependencies_cache_only", _cache_only)
@@ -1114,17 +1122,24 @@ def priority(self):

 class PackageInstaller:
     """
-    Class for managing the install process for a Spack instance based on a bottom-up DAG approach.
+    Class for managing the install process for a Spack instance based on a
+    bottom-up DAG approach.

-    This installer can coordinate concurrent batch and interactive, local and distributed (on a
-    shared file system) builds for the same Spack instance.
+    This installer can coordinate concurrent batch and interactive, local
+    and distributed (on a shared file system) builds for the same Spack
+    instance.
     """

-    def __init__(
-        self, packages: List["spack.package_base.PackageBase"], install_args: dict
-    ) -> None:
+    def __init__(self, installs: List[Tuple["spack.package_base.PackageBase", dict]] = []) -> None:
+        """Initialize the installer.
+
+        Args:
+            installs (list): list of tuples, where each
+                tuple consists of a package (PackageBase) and its associated
+                install arguments (dict)
+        """
         # List of build requests
-        self.build_requests = [BuildRequest(pkg, install_args) for pkg in packages]
+        self.build_requests = [BuildRequest(pkg, install_args) for pkg, install_args in installs]

         # Priority queue of build tasks
         self.build_pq: List[Tuple[Tuple[int, int], BuildTask]] = []
@@ -1547,7 +1562,7 @@ def _add_tasks(self, request: BuildRequest, all_deps):
         #
         # External and upstream packages need to get flagged as installed to
         # ensure proper status tracking for environment build.
-        explicit = request.pkg.spec.dag_hash() in request.install_args.get("explicit", [])
+        explicit = request.install_args.get("explicit", True)
         not_local = _handle_external_and_upstream(request.pkg, explicit)
         if not_local:
             self._flag_installed(request.pkg)
@@ -1668,6 +1683,10 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
         if not pkg.unit_test_check():
             return

+        # Injecting information to know if this installation request is the root one
+        # to determine in BuildProcessInstaller whether installation is explicit or not
+        install_args["is_root"] = task.is_root
+
         try:
             self._setup_install_dir(pkg)

@@ -1979,8 +1998,8 @@ def install(self) -> None:

         self._init_queue()
         fail_fast_err = "Terminating after first install failure"
-        single_requested_spec = len(self.build_requests) == 1
-        failed_build_requests = []
+        single_explicit_spec = len(self.build_requests) == 1
+        failed_explicits = []

         install_status = InstallStatus(len(self.build_pq))

@@ -2178,11 +2197,14 @@ def install(self) -> None:
                 if self.fail_fast:
                     raise InstallError(f"{fail_fast_err}: {str(exc)}", pkg=pkg)

-                # Terminate when a single build request has failed, or summarize errors later.
-                if task.is_build_request:
-                    if single_requested_spec:
-                        raise
-                    failed_build_requests.append((pkg, pkg_id, str(exc)))
+                # Terminate at this point if the single explicit spec has
+                # failed to install.
+                if single_explicit_spec and task.explicit:
+                    raise
+
+                # Track explicit spec id and error to summarize when done
+                if task.explicit:
+                    failed_explicits.append((pkg, pkg_id, str(exc)))

             finally:
                 # Remove the install prefix if anything went wrong during
@@ -2205,16 +2227,16 @@ def install(self) -> None:
             if request.install_args.get("install_package") and request.pkg_id not in self.installed
         ]

-        if failed_build_requests or missing:
-            for _, pkg_id, err in failed_build_requests:
+        if failed_explicits or missing:
+            for _, pkg_id, err in failed_explicits:
                 tty.error(f"{pkg_id}: {err}")

             for _, pkg_id in missing:
                 tty.error(f"{pkg_id}: Package was not installed")

-            if len(failed_build_requests) > 0:
-                pkg = failed_build_requests[0][0]
-                ids = [pkg_id for _, pkg_id, _ in failed_build_requests]
+            if len(failed_explicits) > 0:
+                pkg = failed_explicits[0][0]
+                ids = [pkg_id for _, pkg_id, _ in failed_explicits]
                 tty.debug(
                     "Associating installation failure with first failed "
                     f"explicit package ({ids[0]}) from {', '.join(ids)}"
@@ -2273,7 +2295,7 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
         self.verbose = bool(install_args.get("verbose", False))

         # whether installation was explicitly requested by the user
-        self.explicit = pkg.spec.dag_hash() in install_args.get("explicit", [])
+        self.explicit = install_args.get("is_root", False) and install_args.get("explicit", True)

         # env before starting installation
         self.unmodified_env = install_args.get("unmodified_env", {})
@@ -2358,7 +2380,9 @@ def _install_source(self) -> None:
         src_target = os.path.join(pkg.spec.prefix, "share", pkg.name, "src")
         tty.debug(f"{self.pre} Copying source to {src_target}")

-        fs.install_tree(pkg.stage.source_path, src_target)
+        fs.install_tree(
+            pkg.stage.source_path, src_target, allow_broken_symlinks=(sys.platform != "win32")
+        )

     def _real_install(self) -> None:
         import spack.builder
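A compact restatement of how explicitness now reaches the build process: the injected is_root flag plus the boolean in install_args decide the outcome (a sketch mirroring the reverted BuildProcessInstaller logic above; the inputs are illustrative):

def build_process_explicit(install_args: dict) -> bool:
    # Mirrors: install_args.get("is_root", False) and install_args.get("explicit", True)
    return install_args.get("is_root", False) and install_args.get("explicit", True)

print(build_process_explicit({"is_root": True}))                     # True
print(build_process_explicit({"is_root": True, "explicit": False}))  # False
print(build_process_explicit({"explicit": True}))                    # False, a dependency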
@@ -3,12 +3,22 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 from ._operating_system import OperatingSystem
+from .cray_backend import CrayBackend
+from .cray_frontend import CrayFrontend
 from .freebsd import FreeBSDOs
 from .linux_distro import LinuxDistro
 from .mac_os import MacOs
 from .windows_os import WindowsOs

-__all__ = ["OperatingSystem", "LinuxDistro", "MacOs", "WindowsOs", "FreeBSDOs"]
+__all__ = [
+    "OperatingSystem",
+    "LinuxDistro",
+    "MacOs",
+    "CrayFrontend",
+    "CrayBackend",
+    "WindowsOs",
+    "FreeBSDOs",
+]

 #: List of all the Operating Systems known to Spack
-operating_systems = [LinuxDistro, MacOs, WindowsOs, FreeBSDOs]
+operating_systems = [LinuxDistro, MacOs, CrayFrontend, CrayBackend, WindowsOs, FreeBSDOs]
lib/spack/spack/operating_systems/cray_backend.py (new file, 172 lines)

@@ -0,0 +1,172 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import os
import re

import llnl.util.tty as tty

import spack.error
import spack.version
from spack.util.module_cmd import module

from .linux_distro import LinuxDistro

#: Possible locations of the Cray CLE release file,
#: which we look at to get the CNL OS version.
_cle_release_file = "/etc/opt/cray/release/cle-release"
_clerelease_file = "/etc/opt/cray/release/clerelease"


def read_cle_release_file():
    """Read the CLE release file and return a dict with its attributes.

    This file is present on newer versions of Cray.

    The release file looks something like this::

        RELEASE=6.0.UP07
        BUILD=6.0.7424
        ...

    The dictionary we produce looks like this::

        {
          "RELEASE": "6.0.UP07",
          "BUILD": "6.0.7424",
          ...
        }

    Returns:
        dict: dictionary of release attributes
    """
    with open(_cle_release_file) as release_file:
        result = {}
        for line in release_file:
            # use partition instead of split() to ensure we only split on
            # the first '=' in the line.
            key, _, value = line.partition("=")
            result[key] = value.strip()
        return result


def read_clerelease_file():
    """Read the CLE release file and return the Cray OS version.

    This file is present on older versions of Cray.

    The release file looks something like this::

        5.2.UP04

    Returns:
        str: the Cray OS version
    """
    with open(_clerelease_file) as release_file:
        for line in release_file:
            return line.strip()


class CrayBackend(LinuxDistro):
    """Compute Node Linux (CNL) is the operating system used for the Cray XC
    series supercomputers. It is a very stripped-down version of GNU/Linux.
    Any compilers found through this operating system will be used with
    modules. If updated, the user must make sure that the version and name
    are updated to indicate that the OS has been upgraded (or downgraded).
    """

    def __init__(self):
        name = "cnl"
        version = self._detect_crayos_version()
        if version:
            # If we found a CrayOS version, we do not want the information
            # from LinuxDistro. In order to skip the logic from
            # distro.linux_distribution, while still calling __init__
            # methods further up the MRO, we skip LinuxDistro in the MRO and
            # call the OperatingSystem superclass __init__ method
            super(LinuxDistro, self).__init__(name, version)
        else:
            super().__init__()
        self.modulecmd = module

    def __str__(self):
        return self.name + str(self.version)

    @classmethod
    def _detect_crayos_version(cls):
        if os.path.isfile(_cle_release_file):
            release_attrs = read_cle_release_file()
            if "RELEASE" not in release_attrs:
                # This Cray system uses a base OS not CLE/CNL
                return None
            v = spack.version.Version(release_attrs["RELEASE"])
            return v[0]
        elif os.path.isfile(_clerelease_file):
            v = read_clerelease_file()
            return spack.version.Version(v)[0]
        else:
            # Not all Cray systems run CNL on the backend.
            # Systems running in what Cray calls "cluster" mode run other
            # linux OSs under the Cray PE.
            # So if we don't detect any Cray OS version on the system,
            # we return None. We can't ever be sure we will get a Cray OS
            # version.
            # Returning None allows the calling code to test for the value
            # being "True-ish" rather than requiring a try/except block.
            return None

    def arguments_to_detect_version_fn(self, paths):
        import spack.compilers

        command_arguments = []
        for compiler_name in spack.compilers.supported_compilers():
            cmp_cls = spack.compilers.class_for_compiler_name(compiler_name)

            # If the compiler doesn't have a corresponding
            # Programming Environment, skip to the next
            if cmp_cls.PrgEnv is None:
                continue

            if cmp_cls.PrgEnv_compiler is None:
                tty.die("Must supply PrgEnv_compiler with PrgEnv")

            compiler_id = spack.compilers.CompilerID(self, compiler_name, None)
            detect_version_args = spack.compilers.DetectVersionArgs(
                id=compiler_id, variation=(None, None), language="cc", path="cc"
            )
            command_arguments.append(detect_version_args)
        return command_arguments

    def detect_version(self, detect_version_args):
        import spack.compilers

        modulecmd = self.modulecmd
        compiler_name = detect_version_args.id.compiler_name
        compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)
        output = modulecmd("avail", compiler_cls.PrgEnv_compiler)
        version_regex = r"({0})/([\d\.]+[\d]-?[\w]*)".format(compiler_cls.PrgEnv_compiler)
        matches = re.findall(version_regex, output)
        version = tuple(version for _, version in matches if "classic" not in version)
        compiler_id = detect_version_args.id
        value = detect_version_args._replace(id=compiler_id._replace(version=version))
        return value, None

    def make_compilers(self, compiler_id, paths):
        import spack.spec

        name = compiler_id.compiler_name
        cmp_cls = spack.compilers.class_for_compiler_name(name)
        compilers = []
        for v in compiler_id.version:
            comp = cmp_cls(
                spack.spec.CompilerSpec(name + "@=" + v),
                self,
                "any",
                ["cc", "CC", "ftn"],
                [cmp_cls.PrgEnv, name + "/" + v],
            )

            compilers.append(comp)
        return compilers
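The version-detection regex above is easiest to see on sample text; the module avail output below is made up for illustration:

import re

output = "PrgEnv-gnu/6.0.10 gcc/8.3.0 gcc/9.3.0 gcc/10.1.0-classic"
version_regex = r"({0})/([\d\.]+[\d]-?[\w]*)".format("gcc")
matches = re.findall(version_regex, output)
# "classic" wrapper variants are filtered out, as in detect_version above.
versions = tuple(v for _, v in matches if "classic" not in v)
print(versions)  # ('8.3.0', '9.3.0')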
105
lib/spack/spack/operating_systems/cray_frontend.py
Normal file
105
lib/spack/spack/operating_systems/cray_frontend.py
Normal file
@@ -0,0 +1,105 @@
|
||||
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
|
||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
import contextlib
|
||||
import os
|
||||
import re
|
||||
|
||||
import llnl.util.filesystem as fs
|
||||
import llnl.util.lang
|
||||
import llnl.util.tty as tty
|
||||
|
||||
from spack.util.environment import get_path
|
||||
from spack.util.module_cmd import module
|
||||
|
||||
from .linux_distro import LinuxDistro
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def unload_programming_environment():
|
||||
"""Context manager that unloads Cray Programming Environments."""
|
||||
env_bu = None
|
||||
|
||||
# We rely on the fact that the PrgEnv-* modules set the PE_ENV
|
||||
# environment variable.
|
||||
if "PE_ENV" in os.environ:
|
||||
# Copy environment variables to restore them after the compiler
|
||||
# detection. We expect that the only thing PrgEnv-* modules do is
|
||||
# the environment variables modifications.
|
||||
env_bu = os.environ.copy()
|
||||
|
||||
# Get the name of the module from the environment variable.
|
||||
prg_env = "PrgEnv-" + os.environ["PE_ENV"].lower()
|
||||
|
||||
# Unload the PrgEnv-* module. By doing this we intentionally
|
||||
# provoke errors when the Cray's compiler wrappers are executed
|
||||
# (Error: A PrgEnv-* modulefile must be loaded.) so they will not
|
||||
# be detected as valid compilers by the overridden method. We also
|
||||
# expect that the modules that add the actual compilers' binaries
|
||||
# into the PATH environment variable (i.e. the following modules:
|
||||
# 'intel', 'cce', 'gcc', etc.) will also be unloaded since they are
|
||||
# specified as prerequisites in the PrgEnv-* modulefiles.
|
||||
module("unload", prg_env)
|
||||
|
||||
yield
|
||||
|
||||
# Restore the environment.
|
||||
if env_bu is not None:
|
||||
os.environ.clear()
|
||||
os.environ.update(env_bu)
|
||||
|
||||
|
||||
class CrayFrontend(LinuxDistro):
    """Represents the OS that runs on the login and service nodes of the Cray
    platform. It acts as a regular Linux distribution without Cray-specific
    modules and compiler wrappers."""

    @property
    def compiler_search_paths(self):
        """Calls the default function but unloads Cray's programming
        environments first.

        This prevents Spack from detecting Cray compiler wrappers and avoids
        possible false detections.
        """
        import spack.compilers

        with unload_programming_environment():
            search_paths = get_path("PATH")

        extract_path_re = re.compile(r"prepend-path[\s]*PATH[\s]*([/\w\.:-]*)")

        for compiler_cls in spack.compilers.all_compiler_types():
            # Check if the compiler class is supported on Cray
            prg_env = getattr(compiler_cls, "PrgEnv", None)
            compiler_module = getattr(compiler_cls, "PrgEnv_compiler", None)
            if not (prg_env and compiler_module):
                continue

            # It is supported, check which versions are available
            output = module("avail", compiler_cls.PrgEnv_compiler)
            version_regex = r"({0})/([\d\.]+[\d]-?[\w]*)".format(compiler_cls.PrgEnv_compiler)
            matches = re.findall(version_regex, output)
            versions = tuple(version for _, version in matches if "classic" not in version)

            # Now inspect the modules and add to paths
            msg = "[CRAY FE] Detected FE compiler [name={0}, versions={1}]"
            tty.debug(msg.format(compiler_module, versions))
            for v in versions:
                try:
                    current_module = compiler_module + "/" + v
                    out = module("show", current_module)
                    match = extract_path_re.search(out)
                    search_paths += match.group(1).split(":")
                except Exception as e:
                    msg = (
                        "[CRAY FE] An unexpected error occurred while "
                        "detecting FE compiler [compiler={0}, "
                        "version={1}, error={2}]"
                    )
                    tty.debug(msg.format(compiler_cls.name, v, str(e)))

        search_paths = list(llnl.util.lang.dedupe(search_paths))
        return fs.search_paths_for_executables(*search_paths)
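The save/yield/restore pattern above is worth noting: the context manager snapshots `os.environ` only when a `PrgEnv-*` module is actually loaded, so a plain Linux login shell passes through untouched. A minimal self-contained sketch of the same pattern (`scrubbed_env` is a hypothetical name, no Cray modules involved):

```python
import contextlib
import os


@contextlib.contextmanager
def scrubbed_env(*names):
    """Temporarily remove the given variables from os.environ."""
    backup = os.environ.copy()  # snapshot, as unload_programming_environment does
    for name in names:
        os.environ.pop(name, None)
    try:
        yield
    finally:
        # restore even if the body raised, so errors don't leak state
        os.environ.clear()
        os.environ.update(backup)


os.environ["PE_ENV"] = "GNU"
with scrubbed_env("PE_ENV"):
    assert "PE_ENV" not in os.environ
assert os.environ["PE_ENV"] == "GNU"
```

One difference from this sketch: the production code restores the environment after `yield` without a `try/finally`, so an exception raised inside the `with` block would skip the restore.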

@@ -1881,10 +1881,7 @@ def do_install(self, **kwargs):
            verbose (bool): Display verbose build output (by default,
                suppresses it)
        """
        explicit = kwargs.get("explicit", True)
        if isinstance(explicit, bool):
            kwargs["explicit"] = {self.spec.dag_hash()} if explicit else set()
        PackageInstaller([self], kwargs).install()
        PackageInstaller([(self, kwargs)]).install()

    # TODO (post-34236): Update tests and all packages that use this as a
    # TODO (post-34236): package method to the routine made available to
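The `do_install` hunk above reverts the installer call to the older constructor shape that takes one (package, install arguments) tuple per package instead of a package list plus a shared kwargs dict. A toy sketch of the restored shape (`FakeInstaller` is a hypothetical stand-in, not the real PackageInstaller):

```python
class FakeInstaller:
    """Toy stand-in for the reverted, tuple-based constructor shape."""

    def __init__(self, build_requests):
        # one (package, install_args) tuple per package, so each package
        # can carry its own install arguments
        self.build_requests = list(build_requests)

    def install(self):
        for pkg, kwargs in self.build_requests:
            print(f"installing {pkg} with {kwargs}")


FakeInstaller([("mypkg", {"explicit": True})]).install()
```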

@@ -6,6 +6,7 @@

from ._functions import _host, by_name, platforms, prevent_cray_detection, reset
from ._platform import Platform
from .cray import Cray
from .darwin import Darwin
from .freebsd import FreeBSD
from .linux import Linux

@@ -14,6 +15,7 @@

__all__ = [
    "Platform",
    "Cray",
    "Darwin",
    "Linux",
    "FreeBSD",

@@ -8,6 +8,7 @@

import spack.util.environment

from .cray import Cray
from .darwin import Darwin
from .freebsd import FreeBSD
from .linux import Linux

@@ -15,7 +16,7 @@
from .windows import Windows

#: List of all the platform classes known to Spack
platforms = [Darwin, Linux, Windows, FreeBSD, Test]
platforms = [Cray, Darwin, Linux, Windows, FreeBSD, Test]


@llnl.util.lang.memoized

@@ -2,10 +2,254 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
import platform
import re

import archspec.cpu

import llnl.util.tty as tty
from llnl.util.symlink import readlink

import spack.target
import spack.version
from spack.operating_systems.cray_backend import CrayBackend
from spack.operating_systems.cray_frontend import CrayFrontend
from spack.paths import build_env_path
from spack.util.executable import Executable
from spack.util.module_cmd import module

from ._platform import NoPlatformError, Platform

_craype_name_to_target_name = {
    "x86-cascadelake": "cascadelake",
    "x86-naples": "zen",
    "x86-rome": "zen2",
    "x86-milan": "zen3",
    "x86-skylake": "skylake_avx512",
    "mic-knl": "mic_knl",
    "interlagos": "bulldozer",
    "abudhabi": "piledriver",
}

_ex_craype_dir = "/opt/cray/pe/cpe"
_xc_craype_dir = "/opt/cray/pe/cdt"


def slingshot_network():
    return os.path.exists("/opt/cray/pe") and (
        os.path.exists("/lib64/libcxi.so") or os.path.exists("/usr/lib64/libcxi.so")
    )


def _target_name_from_craype_target_name(name):
    return _craype_name_to_target_name.get(name, name)


class Cray(Platform):
    priority = 10

    def __init__(self):
        """Create a Cray system platform.

        Target names should use craype target names but not include the
        'craype-' prefix. Uses the first viable target from:
          self
          envars [SPACK_FRONT_END, SPACK_BACK_END]
          configuration file "targets.yaml" with keys 'front_end', 'back_end'
          scanning /etc/bash/bashrc.local for back_end only
        """
        super().__init__("cray")

        # Make all craype targets available.
        for target in self._avail_targets():
            name = _target_name_from_craype_target_name(target)
            self.add_target(name, spack.target.Target(name, "craype-%s" % target))

        self.back_end = os.environ.get("SPACK_BACK_END", self._default_target_from_env())
        self.default = self.back_end
        if self.back_end not in self.targets:
            # We didn't find a target module for the backend
            raise NoPlatformError()

        # Set up frontend targets
        for name in archspec.cpu.TARGETS:
            if name not in self.targets:
                self.add_target(name, spack.target.Target(name))
        self.front_end = os.environ.get("SPACK_FRONT_END", archspec.cpu.host().name)
        if self.front_end not in self.targets:
            self.add_target(self.front_end, spack.target.Target(self.front_end))

        front_distro = CrayFrontend()
        back_distro = CrayBackend()

        self.default_os = str(back_distro)
        self.back_os = self.default_os
        self.front_os = str(front_distro)

        self.add_operating_system(self.back_os, back_distro)
        if self.front_os != self.back_os:
            self.add_operating_system(self.front_os, front_distro)

    def setup_platform_environment(self, pkg, env):
        """Change the linker to default dynamic to be more
        similar to linux/standard linker behavior.
        """
        # Unload these modules to prevent any silent linking or unnecessary
        # I/O profiling in the case of darshan.
        modules_to_unload = ["cray-mpich", "darshan", "cray-libsci", "altd"]
        for mod in modules_to_unload:
            module("unload", mod)

        env.set("CRAYPE_LINK_TYPE", "dynamic")
        cray_wrapper_names = os.path.join(build_env_path, "cray")

        if os.path.isdir(cray_wrapper_names):
            env.prepend_path("PATH", cray_wrapper_names)
            env.prepend_path("SPACK_ENV_PATH", cray_wrapper_names)

        # Makes Spack-installed pkg-config work on Crays
        env.append_path("PKG_CONFIG_PATH", "/usr/lib64/pkgconfig")
        env.append_path("PKG_CONFIG_PATH", "/usr/local/lib64/pkgconfig")

        # CRAY_LD_LIBRARY_PATH is used at build time by the cray compiler
        # wrappers to augment LD_LIBRARY_PATH. This is to avoid long load
        # times at runtime. This behavior is not always respected on cray
        # "cluster" systems, so we reproduce it here.
        if os.environ.get("CRAY_LD_LIBRARY_PATH"):
            env.prepend_path("LD_LIBRARY_PATH", os.environ["CRAY_LD_LIBRARY_PATH"])

    @classmethod
    def craype_type_and_version(cls):
        if os.path.isdir(_ex_craype_dir):
            craype_dir = _ex_craype_dir
            craype_type = "EX"
        elif os.path.isdir(_xc_craype_dir):
            craype_dir = _xc_craype_dir
            craype_type = "XC"
        else:
            return (None, None)

        # Take the default version from the known symlink path
        default_path = os.path.join(craype_dir, "default")
        if os.path.islink(default_path):
            version = spack.version.Version(readlink(default_path))
            return (craype_type, version)

        # If there is no default version, sort the available versions and return the latest
        versions_available = [spack.version.Version(v) for v in os.listdir(craype_dir)]
        versions_available.sort(reverse=True)
        if not versions_available:
            return (craype_type, None)
        return (craype_type, versions_available[0])

    @classmethod
    def detect(cls):
        """
        Detect whether this system requires CrayPE module support.

        Systems with newer CrayPE (21.10 for EX systems, future work for CS and
        XC systems) have compilers and MPI wrappers that can be used directly
        by path. These systems are considered ``linux`` platforms.

        For systems running an older CrayPE, we detect the Cray platform based
        on the availability through `module` of the Cray programming
        environment. If this environment is available, we can use it to find
        compilers, target modules, etc. If the Cray programming environment is
        not available via modules, then we will treat it as a standard linux
        system, as the Cray compiler wrappers and other components of the Cray
        programming environment are irrelevant without module support.
        """
        if "opt/cray" not in os.environ.get("MODULEPATH", ""):
            return False

        craype_type, craype_version = cls.craype_type_and_version()
        if craype_type == "XC":
            return True
        if craype_type == "EX" and craype_version < spack.version.Version("21.10"):
            return True
        return False

    def _default_target_from_env(self):
        """Set and return the default CrayPE target loaded in a clean login
        session.

        A bash subshell is launched with a wiped environment, and the list of
        loaded modules is parsed for the first acceptable CrayPE target.
        """
        # env -i /bin/bash -lc echo $CRAY_CPU_TARGET 2> /dev/null
        if getattr(self, "default", None) is None:
            bash = Executable("/bin/bash")
            output = bash(
                "--norc",
                "--noprofile",
                "-lc",
                "echo $CRAY_CPU_TARGET",
                env={"TERM": os.environ.get("TERM", "")},
                output=str,
                error=os.devnull,
            )

            default_from_module = "".join(output.split())  # rm all whitespace
            if default_from_module:
                tty.debug("Found default module: %s" % default_from_module)
                return default_from_module
            else:
                front_end = archspec.cpu.host()
                # Look for the frontend architecture or the closest ancestor
                # available in cray target modules
                avail = [_target_name_from_craype_target_name(x) for x in self._avail_targets()]
                for front_end_possibility in [front_end] + front_end.ancestors:
                    if front_end_possibility.name in avail:
                        tty.debug("using front-end architecture or available ancestor")
                        return front_end_possibility.name
                else:
                    tty.debug("using platform.machine as default")
                    return platform.machine()

    def _avail_targets(self):
        """Return a list of available CrayPE CPU targets."""

        def modules_in_output(output):
            """Returns a list of valid modules parsed from modulecmd output"""
            return [i for i in re.split(r"\s\s+|\n", output)]

        def target_names_from_modules(modules):
            # Craype- module prefixes that are not valid CPU targets.
            targets = []
            for mod in modules:
                if "craype-" in mod:
                    name = mod[7:]
                    name = name.split()[0]
                    _n = name.replace("-", "_")  # test for mic-knl/mic_knl
                    is_target_name = name in archspec.cpu.TARGETS or _n in archspec.cpu.TARGETS
                    is_cray_target_name = name in _craype_name_to_target_name
                    if is_target_name or is_cray_target_name:
                        targets.append(name)

            return targets

        def modules_from_listdir():
            craype_default_path = "/opt/cray/pe/craype/default/modulefiles"
            if os.path.isdir(craype_default_path):
                return os.listdir(craype_default_path)
            return []

        if getattr(self, "_craype_targets", None) is None:
            strategies = [
                lambda: modules_in_output(module("avail", "-t", "craype-")),
                modules_from_listdir,
            ]
            for available_craype_modules in strategies:
                craype_modules = available_craype_modules()
                craype_targets = target_names_from_modules(craype_modules)
                if craype_targets:
                    self._craype_targets = craype_targets
                    break
            else:
                # If nothing is found, add platform.machine()
                # to avoid Spack erroring out
                self._craype_targets = [platform.machine()]

        return self._craype_targets
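`_avail_targets` tries its detection strategies in order and falls back to `platform.machine()` via Python's `for ... else` clause, which runs only when the loop finished without a `break`. A small self-contained sketch of the same fallback pattern (the strategy functions here are hypothetical placeholders):

```python
import platform


def from_environment():
    return []  # pretend nothing was found


def from_directory_listing():
    return []  # pretend nothing was found


targets = None
for strategy in (from_environment, from_directory_listing):
    found = strategy()
    if found:
        targets = found
        break
else:
    # Runs only if no strategy produced a result; mirrors the
    # platform.machine() fallback in _avail_targets above.
    targets = [platform.machine()]

print(targets)
```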

@@ -1939,11 +1939,6 @@ def _spec_clauses(
        for virtual in virtuals:
            clauses.append(fn.attr("virtual_on_incoming_edges", spec.name, virtual))

        # If the spec is external and concrete, we allow all the libcs on the system
        if spec.external and spec.concrete and using_libc_compatibility():
            for libc in self.libcs:
                clauses.append(fn.attr("compatible_libc", spec.name, libc.name, libc.version))

        # add all clauses from dependencies
        if transitive:
            # TODO: Eventually distinguish 2 deps on the same pkg (build and link)

@@ -2440,7 +2435,7 @@ def setup(

        if using_libc_compatibility():
            for libc in self.libcs:
                self.gen.fact(fn.host_libc(libc.name, libc.version))
                self.gen.fact(fn.allowed_libc(libc.name, libc.version))

        if not allow_deprecated:
            self.gen.fact(fn.deprecated_versions_not_allowed())

@@ -3799,6 +3794,12 @@ class Solver:
    def __init__(self):
        self.driver = PyclingoDriver()
        self.selector = ReusableSpecsSelector(configuration=spack.config.CONFIG)
        if spack.platforms.host().name == "cray":
            msg = (
                "The Cray platform, i.e. 'platform=cray', will be removed in Spack v0.23. "
                "All Cray machines will then be detected as 'platform=linux'."
            )
            warnings.warn(msg)

    @staticmethod
    def _check_input_and_extract_concrete_specs(specs):

@@ -1345,10 +1345,8 @@ build(PackageNode) :- not attr("hash", PackageNode, _), attr("node", PackageNode
% topmost-priority criterion to reuse what is installed.
%
% The priority ranges are:
%   1000+  Optimizations for concretization errors
%   300 - 1000  Highest priority optimizations for valid solutions
%   200 - 299  Shifted priorities for build nodes; correspond to priorities 0 - 99.
%   100 - 199  Unshifted priorities. Currently only includes minimizing #builds and minimizing dupes.
%   200+  Shifted priorities for build nodes; correspond to priorities 0 - 99.
%   100 - 199  Unshifted priorities. Currently only includes minimizing #builds.
%   0 - 99  Priorities for non-built nodes.
build_priority(PackageNode, 200) :- build(PackageNode), attr("node", PackageNode).
build_priority(PackageNode, 0) :- not build(PackageNode), attr("node", PackageNode).

@@ -1396,16 +1394,6 @@ build_priority(PackageNode, 0) :- not build(PackageNode), attr("node", Package
%    2. a `#minimize{ 0@2 : #true }.` statement that ensures the criterion
%       is displayed (clingo doesn't display sums over empty sets by default)

% A condition group specifies one or more specs that must be satisfied.
% Specs declared first are preferred, so we assign increasing weights and
% minimize the weights.
opt_criterion(310, "requirement weight").
#minimize{ 0@310: #true }.
#minimize {
    Weight@310,PackageNode,Group
    : requirement_weight(PackageNode, Group, Weight)
}.

% Try hard to reuse installed packages (i.e., minimize the number built)
opt_criterion(110, "number of packages to build (vs. reuse)").
#minimize { 0@110: #true }.

@@ -1417,6 +1405,18 @@ opt_criterion(100, "number of nodes from the same package").
#minimize { ID@100,Package : attr("virtual_node", node(ID, Package)) }.
#defined optimize_for_reuse/0.

% A condition group specifies one or more specs that must be satisfied.
% Specs declared first are preferred, so we assign increasing weights and
% minimize the weights.
opt_criterion(75, "requirement weight").
#minimize{ 0@275: #true }.
#minimize{ 0@75: #true }.
#minimize {
    Weight@75+Priority,PackageNode,Group
    : requirement_weight(PackageNode, Group, Weight),
    build_priority(PackageNode, Priority)
}.

% Minimize the number of deprecated versions being used
opt_criterion(73, "deprecated versions used").
#minimize{ 0@273: #true }.
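The moved requirement-weight criterion illustrates the shifted-priority scheme described in the first hunk: a criterion declared at priority 75 is minimized at `75 + build_priority`, so the same weight lands at level 275 for build nodes (`build_priority = 200`) and at 75 for reused ones. A small Python sketch of that arithmetic (illustrative only; the actual optimization happens inside clingo):

```python
BASE_PRIORITY = 75  # opt_criterion(75, "requirement weight")
BUILD_SHIFT = 200   # build_priority(PackageNode, 200) for built nodes


def effective_level(is_build_node: bool) -> int:
    """Level at which a requirement weight is minimized for a node."""
    return BASE_PRIORITY + (BUILD_SHIFT if is_build_node else 0)


assert effective_level(True) == 275   # matches '#minimize{ 0@275: #true }.'
assert effective_level(False) == 75   # matches '#minimize{ 0@75: #true }.'
```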

@@ -10,13 +10,12 @@
%=============================================================================

% A package cannot be reused if the libc is not compatible with it
error(100, "Cannot reuse {0} since we cannot determine libc compatibility", ReusedPackage)
  :- provider(node(X, LibcPackage), node(0, "libc")),
     attr("version", node(X, LibcPackage), LibcVersion),
     attr("hash", node(R, ReusedPackage), Hash),
     % Libc packages can be reused without the "compatible_libc" attribute
     ReusedPackage != LibcPackage,
     not attr("compatible_libc", node(R, ReusedPackage), LibcPackage, LibcVersion).
:- provider(node(X, LibcPackage), node(0, "libc")),
   attr("version", node(X, LibcPackage), LibcVersion),
   attr("hash", node(R, ReusedPackage), Hash),
   % Libc packages can be reused without the "compatible_libc" attribute
   ReusedPackage != LibcPackage,
   not attr("compatible_libc", node(R, ReusedPackage), LibcPackage, LibcVersion).

% Check whether the DAG has any built package
has_built_packages() :- build(X), not external(X).

@@ -24,20 +23,12 @@ has_built_packages() :- build(X), not external(X).
% A libc is needed in the DAG
:- has_built_packages(), not provider(_, node(0, "libc")).

% Non-libc reused specs must be host libc compatible. In case we build packages, we get a
% host compatible libc provider from other rules. If nothing is built, there is no libc provider,
% since it's pruned from reusable specs, meaning we have to explicitly impose reused specs are host
% compatible.
:- attr("hash", node(R, ReusedPackage), Hash),
   not provider(node(R, ReusedPackage), node(0, "libc")),
   not attr("compatible_libc", node(R, ReusedPackage), _, _).

% The libc provider must be one that a compiler can target
% The libc must be chosen among available ones
:- has_built_packages(),
   provider(node(X, LibcPackage), node(0, "libc")),
   attr("node", node(X, LibcPackage)),
   attr("version", node(X, LibcPackage), LibcVersion),
   not host_libc(LibcPackage, LibcVersion).
   not allowed_libc(LibcPackage, LibcVersion).

% A built node must depend on libc
:- build(PackageNode),

@@ -2816,7 +2816,9 @@ def _old_concretize(self, tests=False, deprecation_warning=True):

        # Check if we can produce an optimized binary (will throw if
        # there are declared inconsistencies)
        self.architecture.target.optimization_flags(self.compiler)
        # No need on platform=cray because of the targeting modules
        if not self.satisfies("platform=cray"):
            self.architecture.target.optimization_flags(self.compiler)

    def _patches_assigned(self):
        """Whether patches have been assigned to this spec by the concretizer."""

@@ -2,12 +2,16 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import platform
import sys

import pytest

import archspec.cpu

import llnl.util.filesystem as fs

import spack.compilers
import spack.concretize
import spack.operating_systems

@@ -21,8 +25,9 @@ def current_host_platform():
    """Return the platform of the current host as detected by the
    'platform' stdlib package.
    """
    current_platform = None
    if "Linux" in platform.system():
    if os.path.exists("/opt/cray/pe"):
        current_platform = spack.platforms.Cray()
    elif "Linux" in platform.system():
        current_platform = spack.platforms.Linux()
    elif "Darwin" in platform.system():
        current_platform = spack.platforms.Darwin()

@@ -217,3 +222,28 @@ def test_concretize_target_ranges(root_target_range, dep_target_range, result, m
    with spack.concretize.disable_compiler_existence_check():
        spec.concretize()
    assert spec.target == spec["b"].target == result


@pytest.mark.parametrize(
    "versions,default,expected",
    [
        (["21.11", "21.9"], "21.11", False),
        (["21.11", "21.9"], "21.9", True),
        (["21.11", "21.9"], None, False),
    ],
)
@pytest.mark.skipif(sys.platform == "win32", reason="Cray does not use windows")
def test_cray_platform_detection(versions, default, expected, tmpdir, monkeypatch, working_env):
    ex_path = str(tmpdir.join("fake_craype_dir"))
    fs.mkdirp(ex_path)

    with fs.working_dir(ex_path):
        for version in versions:
            fs.touch(version)
        if default:
            os.symlink(default, "default")

    monkeypatch.setattr(spack.platforms.cray, "_ex_craype_dir", ex_path)
    os.environ["MODULEPATH"] = "/opt/cray/pe"

    assert spack.platforms.cray.Cray.detect() == expected
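The fixture above only needs version-named files and an optional `default` symlink because `craype_type_and_version` prefers the symlink target and otherwise takes the highest version it can list. A rough standalone equivalent of that selection logic (`pick_craype_version` is a hypothetical helper; plain string sorting stands in for `spack.version.Version` ordering, so e.g. "21.9" vs "21.11" would compare differently in real Spack):

```python
import os


def pick_craype_version(craype_dir):
    """Default symlink wins; otherwise take the newest version listed."""
    default = os.path.join(craype_dir, "default")
    if os.path.islink(default):
        return os.readlink(default)
    versions = sorted(os.listdir(craype_dir), reverse=True)
    return versions[0] if versions else None
```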

@@ -556,6 +556,24 @@ def test_build_jobs_defaults():
)


def test_dirty_disable_module_unload(config, mock_packages, working_env, mock_module_cmd):
    """Test that on the Cray platform 'module unload' is not called if the
    'dirty' option is on.
    """
    s = spack.spec.Spec("a").concretized()

    # If called with "dirty" we don't unload modules, so no calls to the
    # `module` function on Cray
    spack.build_environment.setup_package(s.package, dirty=True)
    assert not mock_module_cmd.calls

    # If called without "dirty" we unload modules on Cray
    spack.build_environment.setup_package(s.package, dirty=False)
    assert mock_module_cmd.calls
    assert any(("unload", "cray-libsci") == item[0] for item in mock_module_cmd.calls)
    assert any(("unload", "cray-mpich") == item[0] for item in mock_module_cmd.calls)


class TestModuleMonkeyPatcher:
    def test_getting_attributes(self, default_mock_concretization):
        s = default_mock_concretization("libelf")

@@ -12,21 +12,21 @@

def test_build_task_errors(install_mockery):
    with pytest.raises(ValueError, match="must be a package"):
        inst.BuildTask("abc", None, False, 0, 0, 0, set())
        inst.BuildTask("abc", None, False, 0, 0, 0, [])

    spec = spack.spec.Spec("trivial-install-test-package")
    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
    with pytest.raises(ValueError, match="must have a concrete spec"):
        inst.BuildTask(pkg_cls(spec), None, False, 0, 0, 0, set())
        inst.BuildTask(pkg_cls(spec), None, False, 0, 0, 0, [])

    spec.concretize()
    assert spec.concrete
    with pytest.raises(ValueError, match="must have a build request"):
        inst.BuildTask(spec.package, None, False, 0, 0, 0, set())
        inst.BuildTask(spec.package, None, False, 0, 0, 0, [])

    request = inst.BuildRequest(spec.package, {})
    with pytest.raises(inst.InstallError, match="Cannot create a build task"):
        inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_REMOVED, set())
        inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_REMOVED, [])


def test_build_task_basics(install_mockery):

@@ -36,8 +36,8 @@ def test_build_task_basics(install_mockery):

    # Ensure key properties match expectations
    request = inst.BuildRequest(spec.package, {})
    task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, set())
    assert not task.explicit
    task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, [])
    assert task.explicit  # package was "explicitly" requested
    assert task.priority == len(task.uninstalled_deps)
    assert task.key == (task.priority, task.sequence)

@@ -58,7 +58,7 @@ def test_build_task_strings(install_mockery):

    # Ensure key properties match expectations
    request = inst.BuildRequest(spec.package, {})
    task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, set())
    task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, [])

    # Cover __repr__
    irep = task.__repr__()

@@ -11,7 +11,6 @@

import spack.caches
import spack.cmd.clean
import spack.environment as ev
import spack.main
import spack.package_base
import spack.stage

@@ -69,20 +68,6 @@ def test_function_calls(command_line, effects, mock_calls_for_clean):
    assert mock_calls_for_clean[name] == (1 if name in effects else 0)


def test_env_aware_clean(mock_stage, install_mockery, mutable_mock_env_path, monkeypatch):
    e = ev.create("test", with_view=False)
    e.add("mpileaks")
    e.concretize()

    def fail(*args, **kwargs):
        raise Exception("This should not have been called")

    monkeypatch.setattr(spack.spec.Spec, "concretize", fail)

    with e:
        clean("mpileaks")


def test_remove_python_cache(tmpdir, monkeypatch):
    cache_files = ["file1.pyo", "file2.pyc"]
    source_file = "file1.py"

@@ -125,8 +125,18 @@ def print_spack_cc(*args):
    print(os.environ.get("CC", ""))


# `module unload cray-libsci` in the test environment causes a failure,
# but it does not fail for actual installs.
# build_environment.py imports module directly, so we monkeypatch it there
# rather than in module_cmd.
def mock_module_noop(*args):
    pass


def test_dev_build_drop_in(tmpdir, mock_packages, monkeypatch, install_mockery, working_env):
    monkeypatch.setattr(os, "execvp", print_spack_cc)
    monkeypatch.setattr(spack.build_environment, "module", mock_module_noop)

    with tmpdir.as_cwd():
        output = dev_build("-b", "edit", "--drop-in", "sh", "dev-build-test-install@0.0.0")
        assert "lib/spack/env" in output

@@ -3,8 +3,12 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Test detection of compiler version"""
import os

import pytest

import llnl.util.filesystem as fs

import spack.compilers.aocc
import spack.compilers.arm
import spack.compilers.cce

@@ -19,6 +23,7 @@
import spack.compilers.xl
import spack.compilers.xl_r
import spack.util.module_cmd
from spack.operating_systems.cray_frontend import CrayFrontend


@pytest.mark.parametrize(

@@ -408,6 +413,48 @@ def test_xl_version_detection(version_str, expected_version):
    assert version == expected_version


@pytest.mark.not_on_windows("Not supported on Windows (yet)")
@pytest.mark.parametrize(
    "compiler,version",
    [
        ("gcc", "8.1.0"),
        ("gcc", "1.0.0-foo"),
        ("pgi", "19.1"),
        ("pgi", "19.1a"),
        ("intel", "9.0.0"),
        ("intel", "0.0.0-foobar"),
        # ('oneapi', '2021.1'),
        # ('oneapi', '2021.1-foobar')
    ],
)
def test_cray_frontend_compiler_detection(compiler, version, tmpdir, monkeypatch, working_env):
    """Test that the Cray frontend properly finds compilers from modules"""
    # set up the fake compiler directory
    compiler_dir = tmpdir.join(compiler)
    compiler_exe = compiler_dir.join("cc").ensure()
    fs.set_executable(str(compiler_exe))

    # mock modules
    def _module(cmd, *args):
        module_name = "%s/%s" % (compiler, version)
        module_contents = "prepend-path PATH %s" % compiler_dir
        if cmd == "avail":
            return module_name if compiler in args[0] else ""
        if cmd == "show":
            return module_contents if module_name in args else ""

    monkeypatch.setattr(spack.operating_systems.cray_frontend, "module", _module)

    # remove the PATH variable
    os.environ.pop("PATH", None)

    # get a CrayFrontend object
    cray_fe_os = CrayFrontend()

    paths = cray_fe_os.compiler_search_paths
    assert paths == [str(compiler_dir)]


@pytest.mark.parametrize(
    "version_str,expected_version",
    [

@@ -28,7 +28,7 @@
import spack.variant as vt
from spack.concretize import find_spec
from spack.spec import CompilerSpec, Spec
from spack.version import Version, VersionList, ver
from spack.version import Version, ver


def check_spec(abstract, concrete):

@@ -2546,53 +2546,6 @@ def test_include_specs_from_externals_and_libcs(

        assert result["deprecated-versions"].satisfies("@1.0.0")

    @pytest.mark.regression("44085")
    @pytest.mark.only_clingo("Use case not supported by the original concretizer")
    def test_can_reuse_concrete_externals_for_dependents(self, mutable_config, tmp_path):
        """Test that external specs that are in the DB can be reused. This means they are
        preferred to concretizing another external from packages.yaml
        """
        packages_yaml = {
            "externaltool": {"externals": [{"spec": "externaltool@2.0", "prefix": "/fake/path"}]}
        }
        mutable_config.set("packages", packages_yaml)
        # Concretize with gcc@9 to get a suboptimal spec, since we have gcc@10 available
        external_spec = Spec("externaltool@2 %gcc@9").concretized()
        assert external_spec.external

        root_specs = [Spec("sombrero")]
        with spack.config.override("concretizer:reuse", True):
            solver = spack.solver.asp.Solver()
            setup = spack.solver.asp.SpackSolverSetup()
            result, _, _ = solver.driver.solve(setup, root_specs, reuse=[external_spec])

        assert len(result.specs) == 1
        sombrero = result.specs[0]
        assert sombrero["externaltool"].dag_hash() == external_spec.dag_hash()

    @pytest.mark.only_clingo("Original concretizer cannot reuse")
    def test_cannot_reuse_host_incompatible_libc(self):
        """Test whether reuse concretization correctly fails to reuse a spec with a host
        incompatible libc."""
        if not spack.solver.asp.using_libc_compatibility():
            pytest.skip("This test requires libc nodes")

        # We install b@1 ^glibc@2.30, and b@0 ^glibc@2.28. The former is not host compatible, the
        # latter is.
        fst = Spec("b@1").concretized()
        fst._mark_concrete(False)
        fst.dependencies("glibc")[0].versions = VersionList(["=2.30"])
        fst._mark_concrete(True)
        snd = Spec("b@0").concretized()

        # The spec b@1 ^glibc@2.30 is "more optimal" than b@0 ^glibc@2.28, but due to glibc
        # incompatibility, it should not be reused.
        solver = spack.solver.asp.Solver()
        setup = spack.solver.asp.SpackSolverSetup()
        result, _, _ = solver.driver.solve(setup, [Spec("b")], reuse=[fst, snd])
        assert len(result.specs) == 1
        assert result.specs[0] == snd


@pytest.fixture()
def duplicates_test_repository():

@@ -1176,46 +1176,3 @@ def test_forward_multi_valued_variant_using_requires(

    for constraint in not_expected:
        assert not s.satisfies(constraint)


def test_strong_preferences_higher_priority_than_reuse(concretize_scope, mock_packages):
    """Tests that strong preferences have a higher priority than reusing specs."""
    reused_spec = Spec("adios2~bzip2").concretized()
    reuse_nodes = list(reused_spec.traverse())
    root_specs = [Spec("ascent+adios2")]

    # Check that without further configuration adios2 is reused
    with spack.config.override("concretizer:reuse", True):
        solver = spack.solver.asp.Solver()
        setup = spack.solver.asp.SpackSolverSetup()
        result, _, _ = solver.driver.solve(setup, root_specs, reuse=reuse_nodes)
        ascent = result.specs[0]
    assert ascent["adios2"].dag_hash() == reused_spec.dag_hash(), ascent

    # If we add a strong preference, adios2 is not reused
    update_packages_config(
        """
packages:
  adios2:
    prefer:
    - "+bzip2"
"""
    )
    with spack.config.override("concretizer:reuse", True):
        solver = spack.solver.asp.Solver()
        setup = spack.solver.asp.SpackSolverSetup()
        result, _, _ = solver.driver.solve(setup, root_specs, reuse=reuse_nodes)
        ascent = result.specs[0]

    assert ascent["adios2"].dag_hash() != reused_spec.dag_hash()
    assert ascent["adios2"].satisfies("+bzip2")

    # A preference is still a preference, so we can override it from the input
    with spack.config.override("concretizer:reuse", True):
        solver = spack.solver.asp.Solver()
        setup = spack.solver.asp.SpackSolverSetup()
        result, _, _ = solver.driver.solve(
            setup, [Spec("ascent+adios2 ^adios2~bzip2")], reuse=reuse_nodes
        )
        ascent = result.specs[0]
    assert ascent["adios2"].dag_hash() == reused_spec.dag_hash(), ascent

@@ -492,7 +492,7 @@ def test_substitute_date(mock_low_high_config):
    ],
)
def test_parse_install_tree(config_settings, expected, mutable_config):
    expected_root = expected[0] or mutable_config.get("config:install_tree:root")
    expected_root = expected[0] or spack.store.DEFAULT_INSTALL_TREE_ROOT
    expected_unpadded_root = expected[1] or expected_root
    expected_proj = expected[2] or spack.directory_layout.default_projections

@@ -575,7 +575,7 @@ def change_fn(self, section):
    ],
)
def test_parse_install_tree_padded(config_settings, expected, mutable_config):
    expected_root = expected[0] or mutable_config.get("config:install_tree:root")
    expected_root = expected[0] or spack.store.DEFAULT_INSTALL_TREE_ROOT
    expected_unpadded_root = expected[1] or expected_root
    expected_proj = expected[2] or spack.directory_layout.default_projections

@@ -761,20 +761,25 @@ def test_internal_config_from_data():
    assert config.get("config:checksum", scope="higher") is True


def test_keys_are_ordered(configuration_dir):
def test_keys_are_ordered():
    """Test that keys in Spack YAML files retain their order from the file."""
    expected_order = (
        "./bin",
        "./man",
        "./share/man",
        "./share/aclocal",
        "./lib/pkgconfig",
        "./lib64/pkgconfig",
        "./share/pkgconfig",
        "./",
        "bin",
        "man",
        "share/man",
        "share/aclocal",
        "lib",
        "lib64",
        "include",
        "lib/pkgconfig",
        "lib64/pkgconfig",
        "share/pkgconfig",
        "",
    )

    config_scope = spack.config.ConfigScope("modules", configuration_dir.join("site"))
    config_scope = spack.config.ConfigScope(
        "modules", os.path.join(spack.paths.test_path, "data", "config")
    )

    data = config_scope.get_section("modules")

@@ -1487,3 +1492,26 @@ def test_config_path_dsl(path, it_should_work, expected_parsed):
    else:
        with pytest.raises(ValueError):
            spack.config.ConfigPath._validate(path)


def test_compiler_parsing_errors(tmpdir):
    content = """\
packages:
  gcc:
    externals:
    - spec: gcc@8.5.0 languages='c,c++,fortran'
      prefix: /usr
      extra_attributes:
        compilers:
          c: /usr/bin/gcc
          cxx: /usr/bin/g++
          fortran: /usr/bin/gfortran
"""

    testscope = join_path(tmpdir.strpath, "packages.yaml")
    with open(testscope, "w") as f:
        f.write(content)

    with spack.config.use_configuration(tmpdir.strpath):
        compilers = spack.compilers.get_compiler_config_from_packages(spack.config.CONFIG)
        assert spack.spec.Spec(compilers[0]["compiler"]["spec"]).satisfies("gcc@8.5.0")

@@ -12,7 +12,6 @@
import json
import os
import os.path
import pathlib
import re
import shutil
import stat

@@ -33,7 +32,6 @@
from llnl.util.filesystem import copy_tree, mkdirp, remove_linked_tree, touchp, working_dir

import spack.binary_distribution
import spack.bootstrap.core
import spack.caches
import spack.cmd.buildcache
import spack.compiler

@@ -684,34 +682,36 @@ def configuration_dir(tmpdir_factory, linux_os):
    directory path.
    """
    tmpdir = tmpdir_factory.mktemp("configurations")
    install_tree_root = tmpdir_factory.mktemp("opt")
    modules_root = tmpdir_factory.mktemp("share")
    tcl_root = modules_root.ensure("modules", dir=True)
    lmod_root = modules_root.ensure("lmod", dir=True)

    # <test_path>/data/config has mock config yaml files in it
    # copy these to the site config.
    test_config = pathlib.Path(spack.paths.test_path) / "data" / "config"
    shutil.copytree(test_config, tmpdir.join("site"))
    test_config = py.path.local(spack.paths.test_path).join("data", "config")
    test_config.copy(tmpdir.join("site"))

    # Create temporary 'defaults', 'site' and 'user' folders
    tmpdir.ensure("user", dir=True)

    # Fill out config.yaml, compilers.yaml and modules.yaml templates.
    # Slightly modify config.yaml and compilers.yaml
    if sys.platform == "win32":
        locks = False
    else:
        locks = True

    solver = os.environ.get("SPACK_TEST_SOLVER", "clingo")
    locks = sys.platform != "win32"
    config = tmpdir.join("site", "config.yaml")
    config_template = test_config / "config.yaml"
    config.write(config_template.read_text().format(install_tree_root, solver, locks))
    config_yaml = test_config.join("config.yaml")
    modules_root = tmpdir_factory.mktemp("share")
    tcl_root = modules_root.ensure("modules", dir=True)
    lmod_root = modules_root.ensure("lmod", dir=True)
    content = "".join(config_yaml.read()).format(solver, locks, str(tcl_root), str(lmod_root))
    t = tmpdir.join("site", "config.yaml")
    t.write(content)

    target = str(archspec.cpu.host().family)
    compilers = tmpdir.join("site", "compilers.yaml")
    compilers_template = test_config / "compilers.yaml"
    compilers.write(compilers_template.read_text().format(linux_os=linux_os, target=target))

    modules = tmpdir.join("site", "modules.yaml")
    modules_template = test_config / "modules.yaml"
    modules.write(modules_template.read_text().format(tcl_root, lmod_root))
    compilers_yaml = test_config.join("compilers.yaml")
    content = "".join(compilers_yaml.read()).format(
        linux_os=linux_os, target=str(archspec.cpu.host().family)
    )
    t = tmpdir.join("site", "compilers.yaml")
    t.write(content)
    yield tmpdir


@@ -1702,7 +1702,7 @@ def _factory(name, output, subdir=("bin",)):
        executable_path = executable_dir / name
        if sys.platform == "win32":
            executable_path = executable_dir / (name + ".bat")
        executable_path.write_text(f"{shebang}{output}\n")
        executable_path.write_text(f"{ shebang }{ output }\n")
        executable_path.chmod(0o755)
        return executable_path
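The only change in this `_factory` hunk is whitespace inside the f-string replacement fields. That is purely cosmetic: Python ignores padding around the expression inside `{ ... }`, so both spellings produce identical files. A quick check:

```python
shebang, output = "#!/bin/sh\n", "echo hello"
assert f"{shebang}{output}\n" == f"{ shebang }{ output }\n"
```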

@@ -1,5 +1,5 @@
concretizer:
  reuse: true
  reuse: True
  targets:
    granularity: microarchitectures
    host_compatible: false

@@ -1,6 +1,6 @@
config:
  install_tree:
    root: {0}
    root: $spack/opt/spack
  template_dirs:
    - $spack/share/spack/templates
    - $spack/lib/spack/spack/test/data/templates

@@ -13,5 +13,5 @@ config:
  ssl_certs: $SSL_CERT_FILE
  checksum: true
  dirty: false
  concretizer: {1}
  locks: {2}
  concretizer: {0}
  locks: {1}

@@ -14,25 +14,29 @@
# ~/.spack/modules.yaml
# -------------------------------------------------------------------------
modules:
  default: {}
  prefix_inspections:
    ./bin: [PATH]
    ./man: [MANPATH]
    ./share/man: [MANPATH]
    ./share/aclocal: [ACLOCAL_PATH]
    ./lib/pkgconfig: [PKG_CONFIG_PATH]
    ./lib64/pkgconfig: [PKG_CONFIG_PATH]
    ./share/pkgconfig: [PKG_CONFIG_PATH]
    ./: [CMAKE_PREFIX_PATH]
  default:
    roots:
      tcl: {0}
      lmod: {1}
    enable: []
    tcl:
      all:
        autoload: direct
    lmod:
      all:
        autoload: direct
      hierarchy:
      - mpi
    bin:
    - PATH
    man:
    - MANPATH
    share/man:
    - MANPATH
    share/aclocal:
    - ACLOCAL_PATH
    lib:
    - LIBRARY_PATH
    - LD_LIBRARY_PATH
    lib64:
    - LIBRARY_PATH
    - LD_LIBRARY_PATH
    include:
    - CPATH
    lib/pkgconfig:
    - PKG_CONFIG_PATH
    lib64/pkgconfig:
    - PKG_CONFIG_PATH
    share/pkgconfig:
    - PKG_CONFIG_PATH
    '':
    - CMAKE_PREFIX_PATH

@@ -7,7 +7,6 @@
import os
import shutil
import sys
from typing import List, Optional, Union

import py
import pytest

@@ -45,10 +44,12 @@ def _mock_repo(root, namespace):
    repodir.ensure(spack.repo.packages_dir_name, dir=True)
    yaml = repodir.join("repo.yaml")
    yaml.write(
        f"""
        """
repo:
  namespace: {namespace}
"""
  namespace: {0}
""".format(
            namespace
        )
    )


@@ -72,21 +73,53 @@ def _true(*args, **kwargs):
    return True


def create_build_task(
    pkg: spack.package_base.PackageBase, install_args: Optional[dict] = None
) -> inst.BuildTask:
    request = inst.BuildRequest(pkg, {} if install_args is None else install_args)
    return inst.BuildTask(pkg, request, False, 0, 0, inst.STATUS_ADDED, set())
def create_build_task(pkg, install_args={}):
    """
    Create a build task for the given (concretized) package

    Args:
        pkg (spack.package_base.PackageBase): concretized package associated with
            the task
        install_args (dict): dictionary of kwargs (or install args)

    Return:
        (BuildTask) A basic package build task
    """
    request = inst.BuildRequest(pkg, install_args)
    return inst.BuildTask(pkg, request, False, 0, 0, inst.STATUS_ADDED, [])


def create_installer(
    specs: Union[List[str], List[spack.spec.Spec]], install_args: Optional[dict] = None
) -> inst.PackageInstaller:
    """Create an installer instance for a list of specs or package names that will be
    concretized."""
    _specs = [spack.spec.Spec(s).concretized() if isinstance(s, str) else s for s in specs]
    _install_args = {} if install_args is None else install_args
    return inst.PackageInstaller([spec.package for spec in _specs], _install_args)
def create_installer(installer_args):
    """
    Create an installer using the concretized spec for each arg

    Args:
        installer_args (list): the list of (spec name, kwargs) tuples

    Return:
        spack.installer.PackageInstaller: the associated package installer
    """
    const_arg = [(spec.package, kwargs) for spec, kwargs in installer_args]
    return inst.PackageInstaller(const_arg)


def installer_args(spec_names, kwargs={}):
    """Return the installer arguments with each spec paired with kwargs

    Args:
        spec_names (list): list of spec names
        kwargs (dict or None): install arguments to apply to all of the specs

    Returns:
        list: list of (spec, kwargs), the installer constructor argument
    """
    arg = []
    for name in spec_names:
        spec = spack.spec.Spec(name)
        spec.concretize()
        assert spec.concrete
        arg.append((spec, kwargs))
    return arg
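Taken together, the two reverted helpers reintroduce the tuple-shaped constructor argument throughout this test module: `installer_args` concretizes each named spec and pairs it with a shared kwargs dict, and `create_installer` unwraps those tuples into `(package, kwargs)` pairs for `PackageInstaller`. A typical call chain, mirroring the fail-fast test further below (it only runs inside a test using the `install_mockery` fixture, since the mock specs must concretize):

```python
const_arg = installer_args(["b"], {"fail_fast": False})
const_arg.extend(installer_args(["c"], {"fail_fast": True}))
installer = create_installer(const_arg)
assert len(installer.build_requests) == 2  # one build request per tuple
```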


@pytest.mark.parametrize(

@@ -207,7 +240,8 @@ def test_try_install_from_binary_cache(install_mockery, mock_packages, monkeypat


def test_installer_repr(install_mockery):
    installer = create_installer(["trivial-install-test-package"])
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)

    irep = installer.__repr__()
    assert irep.startswith(installer.__class__.__name__)

@@ -216,7 +250,8 @@ def test_installer_repr(install_mockery):


def test_installer_str(install_mockery):
    installer = create_installer(["trivial-install-test-package"])
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)

    istr = str(installer)
    assert "#tasks=0" in istr

@@ -261,7 +296,8 @@ def _mock_installed(self):
    builder.add_package("f")

    with spack.repo.use_repositories(builder.root):
        installer = create_installer(["a"])
        const_arg = installer_args(["a"], {})
        installer = create_installer(const_arg)

        installer._init_queue()

@@ -295,7 +331,8 @@ def test_check_last_phase_error(install_mockery):


def test_installer_ensure_ready_errors(install_mockery, monkeypatch):
    installer = create_installer(["trivial-install-test-package"])
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)
    spec = installer.build_requests[0].pkg.spec

    fmt = r"cannot be installed locally.*{0}"

@@ -329,7 +366,8 @@ def test_ensure_locked_err(install_mockery, monkeypatch, tmpdir, capsys):
    def _raise(lock, timeout=None):
        raise RuntimeError(mock_err_msg)

    installer = create_installer(["trivial-install-test-package"])
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)
    spec = installer.build_requests[0].pkg.spec

    monkeypatch.setattr(ulk.Lock, "acquire_read", _raise)

@@ -344,7 +382,8 @@ def _raise(lock, timeout=None):

def test_ensure_locked_have(install_mockery, tmpdir, capsys):
    """Test _ensure_locked when already have lock."""
    installer = create_installer(["trivial-install-test-package"], {})
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)
    spec = installer.build_requests[0].pkg.spec
    pkg_id = inst.package_id(spec)

@@ -380,7 +419,8 @@ def test_ensure_locked_have(install_mockery, tmpdir, capsys):
@pytest.mark.parametrize("lock_type,reads,writes", [("read", 1, 0), ("write", 0, 1)])
def test_ensure_locked_new_lock(install_mockery, tmpdir, lock_type, reads, writes):
    pkg_id = "a"
    installer = create_installer([pkg_id], {})
    const_arg = installer_args([pkg_id], {})
    installer = create_installer(const_arg)
    spec = installer.build_requests[0].pkg.spec
    with tmpdir.as_cwd():
        ltype, lock = installer._ensure_locked(lock_type, spec.package)

@@ -399,7 +439,8 @@ def _pl(db, spec, timeout):
        return lock

    pkg_id = "a"
    installer = create_installer([pkg_id], {})
    const_arg = installer_args([pkg_id], {})
    installer = create_installer(const_arg)
    spec = installer.build_requests[0].pkg.spec

    monkeypatch.setattr(spack.database.SpecLocker, "lock", _pl)

@@ -469,7 +510,7 @@ def _conc_spec(compiler):

def test_update_tasks_for_compiler_packages_as_compiler(mock_packages, config, monkeypatch):
    spec = spack.spec.Spec("trivial-install-test-package").concretized()
    installer = inst.PackageInstaller([spec.package], {})
    installer = inst.PackageInstaller([(spec.package, {})])

    # Add a task to the queue
    installer._add_init_task(spec.package, installer.build_requests[0], False, {})

@@ -653,7 +694,8 @@ def test_check_deps_status_install_failure(install_mockery):
    for dep in s.traverse(root=False):
        spack.store.STORE.failure_tracker.mark(dep)

    installer = create_installer(["a"], {})
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
    request = installer.build_requests[0]

    with pytest.raises(inst.InstallError, match="install failure"):

@@ -661,7 +703,8 @@ def test_check_deps_status_install_failure(install_mockery):


def test_check_deps_status_write_locked(install_mockery, monkeypatch):
    installer = create_installer(["a"], {})
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
    request = installer.build_requests[0]

    # Ensure the lock is not acquired

@@ -672,7 +715,8 @@ def test_check_deps_status_write_locked(install_mockery, monkeypatch):


def test_check_deps_status_external(install_mockery, monkeypatch):
    installer = create_installer(["a"], {})
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
    request = installer.build_requests[0]

    # Mock the dependencies as external so assumed to be installed

@@ -684,7 +728,8 @@ def test_check_deps_status_external(install_mockery, monkeypatch):


def test_check_deps_status_upstream(install_mockery, monkeypatch):
    installer = create_installer(["a"], {})
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
    request = installer.build_requests[0]

    # Mock the known dependencies as installed upstream

@@ -702,7 +747,8 @@ def _pkgs(compiler, architecture, pkgs):
        spec = spack.spec.Spec("mpi").concretized()
        return [(spec.package, True)]

    installer = create_installer(["trivial-install-test-package"], {})
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)
    request = installer.build_requests[0]
    all_deps = defaultdict(set)

@@ -717,7 +763,8 @@ def _pkgs(compiler, architecture, pkgs):

def test_prepare_for_install_on_installed(install_mockery, monkeypatch):
    """Test of _prepare_for_install's early return for installed task path."""
    installer = create_installer(["dependent-install"], {})
    const_arg = installer_args(["dependent-install"], {})
    installer = create_installer(const_arg)
    request = installer.build_requests[0]

    install_args = {"keep_prefix": True, "keep_stage": True, "restage": False}

@@ -732,7 +779,8 @@ def test_installer_init_requests(install_mockery):
    """Test of installer initial requests."""
    spec_name = "dependent-install"
    with spack.config.override("config:install_missing_compilers", True):
        installer = create_installer([spec_name], {})
        const_arg = installer_args([spec_name], {})
        installer = create_installer(const_arg)

        # There is only one explicit request in this case
        assert len(installer.build_requests) == 1

@@ -741,7 +789,8 @@


def test_install_task_use_cache(install_mockery, monkeypatch):
    installer = create_installer(["trivial-install-test-package"], {})
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)
    request = installer.build_requests[0]
    task = create_build_task(request.pkg)

@@ -756,7 +805,8 @@ def test_install_task_add_compiler(install_mockery, monkeypatch, capfd):
    def _add(_compilers):
        tty.msg(config_msg)

    installer = create_installer(["a"], {})
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
    task = create_build_task(installer.build_requests[0].pkg)
    task.compiler = True

@@ -775,7 +825,8 @@ def _add(_compilers):

def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
    """Test _release_lock for supposed write lock with exception."""
    installer = create_installer(["trivial-install-test-package"], {})
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)

    pkg_id = "test"
    with tmpdir.as_cwd():

@@ -792,7 +843,8 @@ def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
@pytest.mark.parametrize("installed", [True, False])
def test_push_task_skip_processed(install_mockery, installed):
    """Test to ensure skip re-queueing a processed package."""
    installer = create_installer(["a"], {})
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
    assert len(list(installer.build_tasks)) == 0

    # Mark the package as installed OR failed

@@ -809,7 +861,8 @@ def test_push_task_skip_processed(install_mockery, installed):

def test_requeue_task(install_mockery, capfd):
    """Test to ensure cover _requeue_task."""
    installer = create_installer(["a"], {})
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
    task = create_build_task(installer.build_requests[0].pkg)

    # temporarily set tty debug messages on so we can test output

@@ -839,7 +892,8 @@ def _mktask(pkg):
    def _rmtask(installer, pkg_id):
        raise RuntimeError("Raise an exception to test except path")

    installer = create_installer(["a"], {})
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
    spec = installer.build_requests[0].pkg.spec

    # Cover task removal happy path

@@ -868,7 +922,8 @@ def _chgrp(path, group, follow_symlinks=True):
    monkeypatch.setattr(prefs, "get_package_group", _get_group)
    monkeypatch.setattr(fs, "chgrp", _chgrp)

    installer = create_installer(["trivial-install-test-package"], {})
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)
    spec = installer.build_requests[0].pkg.spec

    fs.touchp(spec.prefix)

@@ -894,7 +949,8 @@ def test_cleanup_failed_err(install_mockery, tmpdir, monkeypatch, capsys):
    def _raise_except(lock):
        raise RuntimeError(msg)

    installer = create_installer(["trivial-install-test-package"], {})
    const_arg = installer_args(["trivial-install-test-package"], {})
    installer = create_installer(const_arg)

    monkeypatch.setattr(lk.Lock, "release_write", _raise_except)
    pkg_id = "test"

@@ -910,7 +966,8 @@ def _raise_except(lock):

def test_update_failed_no_dependent_task(install_mockery):
    """Test _update_failed with missing dependent build tasks."""
    installer = create_installer(["dependent-install"], {})
    const_arg = installer_args(["dependent-install"], {})
    installer = create_installer(const_arg)
    spec = installer.build_requests[0].pkg.spec

    for dep in spec.traverse(root=False):

@@ -921,7 +978,8 @@ def test_update_failed_no_dependent_task(install_mockery):

def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):
    """Test install with uninstalled dependencies."""
    installer = create_installer(["dependent-install"], {})
    const_arg = installer_args(["dependent-install"], {})
    installer = create_installer(const_arg)

    # Skip the actual installation and any status updates
    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _noop)

@@ -938,7 +996,8 @@ def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):

def test_install_failed(install_mockery, monkeypatch, capsys):
    """Test install with failed install."""
    installer = create_installer(["b"], {})
    const_arg = installer_args(["b"], {})
    installer = create_installer(const_arg)

    # Make sure the package is identified as failed
    monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)

@@ -953,7 +1012,8 @@ def test_install_failed(install_mockery, monkeypatch, capsys):

def test_install_failed_not_fast(install_mockery, monkeypatch, capsys):
    """Test install with failed install."""
    installer = create_installer(["a"], {"fail_fast": False})
    const_arg = installer_args(["a"], {"fail_fast": False})
    installer = create_installer(const_arg)

    # Make sure the package is identified as failed
    monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)

@@ -977,7 +1037,8 @@ def _interrupt(installer, task, install_status, **kwargs):
        else:
            installer.installed.add(task.pkg.name)

    installer = create_installer([spec_name], {})
    const_arg = installer_args([spec_name], {})
    installer = create_installer(const_arg)

    # Raise a KeyboardInterrupt error to trigger early termination
    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _interrupt)

@@ -1003,7 +1064,8 @@ def _install(installer, task, install_status, **kwargs):
        else:
            installer.installed.add(task.pkg.name)

    installer = create_installer([spec_name], {})
    const_arg = installer_args([spec_name], {})
    installer = create_installer(const_arg)

    # Raise a KeyboardInterrupt error to trigger early termination
    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install)

@@ -1029,7 +1091,8 @@ def _install(installer, task, install_status, **kwargs):
        else:
            installer.installed.add(task.pkg.name)

    installer = create_installer([spec_name, "a"], {})
    const_arg = installer_args([spec_name, "a"], {})
    installer = create_installer(const_arg)

    # Raise a KeyboardInterrupt error to trigger early termination
    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install)

@@ -1043,21 +1106,25 @@ def _install(installer, task, install_status, **kwargs):

def test_install_fail_fast_on_detect(install_mockery, monkeypatch, capsys):
    """Test fail_fast install when an install failure is detected."""
    b, c = spack.spec.Spec("b").concretized(), spack.spec.Spec("c").concretized()
    b_id, c_id = inst.package_id(b), inst.package_id(c)

    installer = create_installer([b, c], {"fail_fast": True})
    const_arg = installer_args(["b"], {"fail_fast": False})
    const_arg.extend(installer_args(["c"], {"fail_fast": True}))
    installer = create_installer(const_arg)
    pkg_ids = [inst.package_id(spec) for spec, _ in const_arg]

    # Make sure all packages are identified as failed
    # This will prevent b from installing, which will cause the build of c to be skipped.
    #
    # This will prevent b from installing, which will cause the build of a
    # to be skipped.
    monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)

    with pytest.raises(inst.InstallError, match="after first install failure"):
        installer.install()

    assert b_id in installer.failed, "Expected b to be marked as failed"
    assert c_id not in installer.failed, "Expected no attempt to install c"
    assert f"{b_id} failed to install" in capsys.readouterr().err
    assert pkg_ids[0] in installer.failed, "Expected b to be marked as failed"
    assert pkg_ids[1] not in installer.failed, "Expected no attempt to install c"

    out = capsys.readouterr()[1]
    assert "{0} failed to install".format(pkg_ids[0]) in out
|
||||
|
||||
|
||||
def _test_install_fail_fast_on_except_patch(installer, **kwargs):
|
||||
@@ -1070,7 +1137,8 @@ def _test_install_fail_fast_on_except_patch(installer, **kwargs):
|
||||
@pytest.mark.disable_clean_stage_check
|
||||
def test_install_fail_fast_on_except(install_mockery, monkeypatch, capsys):
|
||||
"""Test fail_fast install when an install failure results from an error."""
|
||||
installer = create_installer(["a"], {"fail_fast": True})
|
||||
const_arg = installer_args(["a"], {"fail_fast": True})
|
||||
installer = create_installer(const_arg)
|
||||
|
||||
# Raise a non-KeyboardInterrupt exception to trigger fast failure.
|
||||
#
|
||||
@@ -1093,7 +1161,8 @@ def test_install_lock_failures(install_mockery, monkeypatch, capfd):
|
||||
def _requeued(installer, task, install_status):
|
||||
tty.msg("requeued {0}".format(task.pkg.spec.name))
|
||||
|
||||
installer = create_installer(["b"], {})
|
||||
const_arg = installer_args(["b"], {})
|
||||
installer = create_installer(const_arg)
|
||||
|
||||
# Ensure never acquire a lock
|
||||
monkeypatch.setattr(inst.PackageInstaller, "_ensure_locked", _not_locked)
|
||||
@@ -1112,19 +1181,20 @@ def _requeued(installer, task, install_status):
|
||||
|
||||
def test_install_lock_installed_requeue(install_mockery, monkeypatch, capfd):
|
||||
"""Cover basic install handling for installed package."""
|
||||
b = spack.spec.Spec("b").concretized()
|
||||
const_arg = installer_args(["b"], {})
|
||||
b, _ = const_arg[0]
|
||||
installer = create_installer(const_arg)
|
||||
b_pkg_id = inst.package_id(b)
|
||||
installer = create_installer([b])
|
||||
|
||||
def _prep(installer, task):
|
||||
installer.installed.add(b_pkg_id)
|
||||
tty.msg(f"{b_pkg_id} is installed")
|
||||
tty.msg("{0} is installed".format(b_pkg_id))
|
||||
|
||||
# also do not allow the package to be locked again
|
||||
monkeypatch.setattr(inst.PackageInstaller, "_ensure_locked", _not_locked)
|
||||
|
||||
def _requeued(installer, task, install_status):
|
||||
tty.msg(f"requeued {inst.package_id(task.pkg.spec)}")
|
||||
tty.msg("requeued {0}".format(inst.package_id(task.pkg.spec)))
|
||||
|
||||
# Flag the package as installed
|
||||
monkeypatch.setattr(inst.PackageInstaller, "_prepare_for_install", _prep)
|
||||
@@ -1137,8 +1207,9 @@ def _requeued(installer, task, install_status):
|
||||
|
||||
assert b_pkg_id not in installer.installed
|
||||
|
||||
out = capfd.readouterr()[0]
|
||||
expected = ["is installed", "read locked", "requeued"]
|
||||
for exp, ln in zip(expected, capfd.readouterr().out.splitlines()):
|
||||
for exp, ln in zip(expected, out.split("\n")):
|
||||
assert exp in ln
|
||||
|
||||
|
||||
@@ -1166,7 +1237,8 @@ def _requeued(installer, task, install_status):
|
||||
# Ensure don't continually requeue the task
|
||||
monkeypatch.setattr(inst.PackageInstaller, "_requeue_task", _requeued)
|
||||
|
||||
installer = create_installer(["b"], {})
|
||||
const_arg = installer_args(["b"], {})
|
||||
installer = create_installer(const_arg)
|
||||
|
||||
with pytest.raises(inst.InstallError, match="request failed"):
|
||||
installer.install()
|
||||
@@ -1181,19 +1253,25 @@ def _requeued(installer, task, install_status):
|
||||
|
||||
def test_install_skip_patch(install_mockery, mock_fetch):
|
||||
"""Test the path skip_patch install path."""
|
||||
installer = create_installer(["b"], {"fake": False, "skip_patch": True})
|
||||
spec_name = "b"
|
||||
const_arg = installer_args([spec_name], {"fake": False, "skip_patch": True})
|
||||
installer = create_installer(const_arg)
|
||||
|
||||
installer.install()
|
||||
assert inst.package_id(installer.build_requests[0].pkg.spec) in installer.installed
|
||||
|
||||
spec, install_args = const_arg[0]
|
||||
assert inst.package_id(spec) in installer.installed
|
||||
|
||||
|
||||
def test_install_implicit(install_mockery, mock_fetch):
|
||||
"""Test the path skip_patch install path."""
|
||||
spec_name = "trivial-install-test-package"
|
||||
installer = create_installer([spec_name], {"fake": False})
|
||||
const_arg = installer_args([spec_name], {"fake": False})
|
||||
installer = create_installer(const_arg)
|
||||
pkg = installer.build_requests[0].pkg
|
||||
assert not create_build_task(pkg, {"explicit": []}).explicit
|
||||
assert create_build_task(pkg, {"explicit": [pkg.spec.dag_hash()]}).explicit
|
||||
assert not create_build_task(pkg).explicit
|
||||
assert not create_build_task(pkg, {"explicit": False}).explicit
|
||||
assert create_build_task(pkg, {"explicit": True}).explicit
|
||||
assert create_build_task(pkg).explicit
|
||||
|
||||
|
||||
def test_overwrite_install_backup_success(temporary_store, config, mock_packages, tmpdir):
|
||||
@@ -1202,7 +1280,8 @@ def test_overwrite_install_backup_success(temporary_store, config, mock_packages
|
||||
of the original prefix, and leave the original spec marked installed.
|
||||
"""
|
||||
# Get a build task. TODO: refactor this to avoid calling internal methods
|
||||
installer = create_installer(["b"])
|
||||
const_arg = installer_args(["b"])
|
||||
installer = create_installer(const_arg)
|
||||
installer._init_queue()
|
||||
task = installer._pop_task()
|
||||
|
||||
@@ -1262,7 +1341,8 @@ def remove(self, spec):
|
||||
self.called = True
|
||||
|
||||
# Get a build task. TODO: refactor this to avoid calling internal methods
|
||||
installer = create_installer(["b"])
|
||||
const_arg = installer_args(["b"])
|
||||
installer = create_installer(const_arg)
|
||||
installer._init_queue()
|
||||
task = installer._pop_task()
|
||||
|
||||
@@ -1295,20 +1375,22 @@ def test_term_status_line():
|
||||
x.clear()
|
||||
|
||||
|
||||
@pytest.mark.parametrize("explicit", [True, False])
|
||||
def test_single_external_implicit_install(install_mockery, explicit):
|
||||
@pytest.mark.parametrize(
|
||||
"explicit_args,is_explicit",
|
||||
[({"explicit": False}, False), ({"explicit": True}, True), ({}, True)],
|
||||
)
|
||||
def test_single_external_implicit_install(install_mockery, explicit_args, is_explicit):
|
||||
pkg = "trivial-install-test-package"
|
||||
s = spack.spec.Spec(pkg).concretized()
|
||||
s.external_path = "/usr"
|
||||
args = {"explicit": [s.dag_hash()] if explicit else []}
|
||||
create_installer([s], args).install()
|
||||
assert spack.store.STORE.db.get_record(pkg).explicit == explicit
|
||||
create_installer([(s, explicit_args)]).install()
|
||||
assert spack.store.STORE.db.get_record(pkg).explicit == is_explicit
|
||||
|
||||
|
||||
def test_overwrite_install_does_install_build_deps(install_mockery, mock_fetch):
|
||||
"""When overwrite installing something from sources, build deps should be installed."""
|
||||
s = spack.spec.Spec("dtrun3").concretized()
|
||||
create_installer([s]).install()
|
||||
create_installer([(s, {})]).install()
|
||||
|
||||
# Verify there is a pure build dep
|
||||
edge = s.edges_to_dependencies(name="dtbuild3").pop()
|
||||
@@ -1319,7 +1401,7 @@ def test_overwrite_install_does_install_build_deps(install_mockery, mock_fetch):
|
||||
build_dep.package.do_uninstall()
|
||||
|
||||
# Overwrite install the root dtrun3
|
||||
create_installer([s], {"overwrite": [s.dag_hash()]}).install()
|
||||
create_installer([(s, {"overwrite": [s.dag_hash()]})]).install()
|
||||
|
||||
# Verify that the build dep was also installed.
|
||||
assert build_dep.installed
|
||||
|
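Note: every hunk above reverts the same refactor. `create_installer` goes back to taking a list of `(spec, install_args)` tuples built by `installer_args`, rather than a list of specs plus one shared install-arguments dict. A minimal, self-contained sketch of the two calling conventions; the helpers below are hypothetical stand-ins for the test fixtures, not Spack code:

# Stand-ins for the helpers in lib/spack/spack/test/installer.py;
# the real code concretizes Spec objects, plain strings suffice here.
def installer_args(spec_names, install_args):
    # Old-style helper: one (spec, install_args) pair per requested spec.
    return [(name, dict(install_args)) for name in spec_names]

def create_installer(const_arg):
    # Stand-in for inst.PackageInstaller: just record the build requests.
    return {"build_requests": const_arg}

# Old style (restored by this diff): per-spec install arguments.
const_arg = installer_args(["b"], {"fail_fast": False})
const_arg.extend(installer_args(["c"], {"fail_fast": True}))
installer = create_installer(const_arg)
assert [spec for spec, _ in const_arg] == ["b", "c"]

# New style (removed by this diff) would instead share one dict:
#     installer = create_installer(["b", "c"], {"fail_fast": True})

The per-spec form is more verbose, but it lets one install request mix options per spec, which is exactly what test_install_fail_fast_on_detect exercises above.
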
@@ -278,8 +278,8 @@ def test_symlinks_false(self, stage):
     def test_allow_broken_symlinks(self, stage):
         """Test installing with a broken symlink."""
         with fs.working_dir(str(stage)):
-            symlink("nonexistant.txt", "source/broken")
-            fs.install_tree("source", "dest", symlinks=True)
+            symlink("nonexistant.txt", "source/broken", allow_broken_symlinks=True)
+            fs.install_tree("source", "dest", symlinks=True, allow_broken_symlinks=True)
             assert os.path.islink("dest/broken")
             assert not os.path.exists(readlink("dest/broken"))

@@ -20,7 +20,7 @@ def test_symlink_file(tmpdir):
         fd, real_file = tempfile.mkstemp(prefix="real", suffix=".txt", dir=test_dir)
         link_file = str(tmpdir.join("link.txt"))
         assert os.path.exists(link_file) is False
-        symlink.symlink(real_file, link_file)
+        symlink.symlink(source_path=real_file, link_path=link_file)
         assert os.path.exists(link_file)
         assert symlink.islink(link_file)
@@ -32,12 +32,11 @@ def test_symlink_dir(tmpdir):
         real_dir = os.path.join(test_dir, "real_dir")
         link_dir = os.path.join(test_dir, "link_dir")
         os.mkdir(real_dir)
-        symlink.symlink(real_dir, link_dir)
+        symlink.symlink(source_path=real_dir, link_path=link_dir)
         assert os.path.exists(link_dir)
         assert symlink.islink(link_dir)


-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
 def test_symlink_source_not_exists(tmpdir):
     """Test the symlink.symlink method for the case where a source path does not exist"""
     with tmpdir.as_cwd():
@@ -45,7 +44,7 @@ def test_symlink_source_not_exists(tmpdir):
         real_dir = os.path.join(test_dir, "real_dir")
         link_dir = os.path.join(test_dir, "link_dir")
         with pytest.raises(symlink.SymlinkError):
-            symlink._windows_symlink(real_dir, link_dir)
+            symlink.symlink(source_path=real_dir, link_path=link_dir, allow_broken_symlinks=False)


 def test_symlink_src_relative_to_link(tmpdir):
@@ -62,16 +61,18 @@ def test_symlink_src_relative_to_link(tmpdir):
         fd, real_file = tempfile.mkstemp(prefix="real", suffix=".txt", dir=subdir_2)
         link_file = os.path.join(subdir_1, "link.txt")

-        symlink.symlink(f"b/{os.path.basename(real_file)}", f"a/{os.path.basename(link_file)}")
+        symlink.symlink(
+            source_path=f"b/{os.path.basename(real_file)}",
+            link_path=f"a/{os.path.basename(link_file)}",
+        )
         assert os.path.exists(link_file)
         assert symlink.islink(link_file)
         # Check dirs
         assert not os.path.lexists(link_dir)
-        symlink.symlink("b", "a/c")
+        symlink.symlink(source_path="b", link_path="a/c")
         assert os.path.lexists(link_dir)


-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
 def test_symlink_src_not_relative_to_link(tmpdir):
     """Test the symlink.symlink functionality where the source value does not exist relative to
     the link and not relative to the cwd. NOTE that this symlink api call is EXPECTED to raise
@@ -87,18 +88,19 @@ def test_symlink_src_not_relative_to_link(tmpdir):
         link_file = str(tmpdir.join("link.txt"))
         # Expected SymlinkError because source path does not exist relative to link path
         with pytest.raises(symlink.SymlinkError):
-            symlink._windows_symlink(
-                f"d/{os.path.basename(real_file)}", f"a/{os.path.basename(link_file)}"
+            symlink.symlink(
+                source_path=f"d/{os.path.basename(real_file)}",
+                link_path=f"a/{os.path.basename(link_file)}",
+                allow_broken_symlinks=False,
             )
         assert not os.path.exists(link_file)
         # Check dirs
         assert not os.path.lexists(link_dir)
         with pytest.raises(symlink.SymlinkError):
-            symlink._windows_symlink("d", "a/c")
+            symlink.symlink(source_path="d", link_path="a/c", allow_broken_symlinks=False)
         assert not os.path.lexists(link_dir)


-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
 def test_symlink_link_already_exists(tmpdir):
     """Test the symlink.symlink method for the case where a link already exists"""
     with tmpdir.as_cwd():
@@ -106,10 +108,10 @@ def test_symlink_link_already_exists(tmpdir):
         real_dir = os.path.join(test_dir, "real_dir")
         link_dir = os.path.join(test_dir, "link_dir")
         os.mkdir(real_dir)
-        symlink._windows_symlink(real_dir, link_dir)
+        symlink.symlink(real_dir, link_dir, allow_broken_symlinks=False)
        assert os.path.exists(link_dir)
         with pytest.raises(symlink.SymlinkError):
-            symlink._windows_symlink(real_dir, link_dir)
+            symlink.symlink(source_path=real_dir, link_path=link_dir, allow_broken_symlinks=False)


 @pytest.mark.skipif(not symlink._windows_can_symlink(), reason="Test requires elevated privileges")
@@ -120,7 +122,7 @@ def test_symlink_win_file(tmpdir):
     test_dir = str(tmpdir)
     fd, real_file = tempfile.mkstemp(prefix="real", suffix=".txt", dir=test_dir)
     link_file = str(tmpdir.join("link.txt"))
-    symlink.symlink(real_file, link_file)
+    symlink.symlink(source_path=real_file, link_path=link_file)
     # Verify that all expected conditions are met
     assert os.path.exists(link_file)
     assert symlink.islink(link_file)
@@ -138,7 +140,7 @@ def test_symlink_win_dir(tmpdir):
     real_dir = os.path.join(test_dir, "real")
     link_dir = os.path.join(test_dir, "link")
     os.mkdir(real_dir)
-    symlink.symlink(real_dir, link_dir)
+    symlink.symlink(source_path=real_dir, link_path=link_dir)
     # Verify that all expected conditions are met
     assert os.path.exists(link_dir)
     assert symlink.islink(link_dir)

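The symlink hunks above consistently switch to keyword arguments (`source_path`, `link_path`) and thread `allow_broken_symlinks` through. A toy sketch of that contract on POSIX; the real implementation is `llnl.util.symlink.symlink`, and this stub only mirrors the behavior the tests assert:

import os
import tempfile

class SymlinkError(RuntimeError):
    pass

def symlink(source_path, link_path, allow_broken_symlinks=True):
    # Refuse to create a dangling link unless explicitly allowed,
    # mirroring allow_broken_symlinks=False in the tests above.
    if not allow_broken_symlinks and not os.path.exists(source_path):
        raise SymlinkError(f"source does not exist: {source_path}")
    os.symlink(source_path, link_path)

with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "lnk")
    missing = os.path.join(d, "missing")
    try:
        symlink(source_path=missing, link_path=link, allow_broken_symlinks=False)
    except SymlinkError as e:
        print("refused:", e)
    symlink(source_path=missing, link_path=link)  # allowed by default
    assert os.path.islink(link) and not os.path.exists(link)
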
77 lib/spack/spack/test/operating_system.py Normal file
@@ -0,0 +1,77 @@
+# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+# Spack Project Developers. See the top-level COPYRIGHT file for details.
+#
+# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+
+import spack.operating_systems.cray_backend as cray_backend
+
+
+def test_read_cle_release_file(tmpdir, monkeypatch):
+    """test reading the Cray cle-release file"""
+    cle_release_path = tmpdir.join("cle-release")
+    with cle_release_path.open("w") as f:
+        f.write(
+            """\
+RELEASE=6.0.UP07
+BUILD=6.0.7424
+DATE=20190611
+ARCH=noarch
+NETWORK=ari
+PATCHSET=35-201906112304
+DUMMY=foo=bar
+"""
+        )
+
+    monkeypatch.setattr(cray_backend, "_cle_release_file", str(cle_release_path))
+    attrs = cray_backend.read_cle_release_file()
+
+    assert attrs["RELEASE"] == "6.0.UP07"
+    assert attrs["BUILD"] == "6.0.7424"
+    assert attrs["DATE"] == "20190611"
+    assert attrs["ARCH"] == "noarch"
+    assert attrs["NETWORK"] == "ari"
+    assert attrs["PATCHSET"] == "35-201906112304"
+    assert attrs["DUMMY"] == "foo=bar"
+
+    assert cray_backend.CrayBackend._detect_crayos_version() == 6
+
+
+def test_read_clerelease_file(tmpdir, monkeypatch):
+    """test reading the Cray clerelease file"""
+    clerelease_path = tmpdir.join("clerelease")
+    with clerelease_path.open("w") as f:
+        f.write("5.2.UP04\n")
+
+    monkeypatch.setattr(cray_backend, "_clerelease_file", str(clerelease_path))
+    v = cray_backend.read_clerelease_file()
+
+    assert v == "5.2.UP04"
+
+    assert cray_backend.CrayBackend._detect_crayos_version() == 5
+
+
+def test_cle_release_precedence(tmpdir, monkeypatch):
+    """test that cle-release file takes precedence over clerelease file."""
+    cle_release_path = tmpdir.join("cle-release")
+    clerelease_path = tmpdir.join("clerelease")
+
+    with cle_release_path.open("w") as f:
+        f.write(
+            """\
+RELEASE=6.0.UP07
+BUILD=6.0.7424
+DATE=20190611
+ARCH=noarch
+NETWORK=ari
+PATCHSET=35-201906112304
+DUMMY=foo=bar
+"""
+        )
+
+    with clerelease_path.open("w") as f:
+        f.write("5.2.UP04\n")
+
+    monkeypatch.setattr(cray_backend, "_clerelease_file", str(clerelease_path))
+    monkeypatch.setattr(cray_backend, "_cle_release_file", str(cle_release_path))
+
+    assert cray_backend.CrayBackend._detect_crayos_version() == 6

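The new test file pins down the KEY=VALUE format of the Cray cle-release file. A minimal sketch of the parsing it implies; `read_cle_release_file` in `spack.operating_systems.cray_backend` is the real implementation, and this function is only illustrative:

def parse_cle_release(text):
    """Parse KEY=VALUE lines; split on the first '=' only, so
    DUMMY=foo=bar maps to {'DUMMY': 'foo=bar'}."""
    attrs = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        attrs[key.strip()] = value.strip()
    return attrs

attrs = parse_cle_release("RELEASE=6.0.UP07\nDUMMY=foo=bar\n")
assert attrs["RELEASE"] == "6.0.UP07"
assert attrs["DUMMY"] == "foo=bar"
# The detected CLE major version is the first RELEASE component:
assert int(attrs["RELEASE"].split(".")[0]) == 6
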
@@ -272,29 +272,6 @@ def test_breadth_first_versus_depth_first_tree(abstract_specs_chain):
     ]


-@pytest.mark.parametrize("cover", ["nodes", "edges"])
-@pytest.mark.parametrize("depth_first", [True, False])
-def test_tree_traversal_with_key(cover, depth_first, abstract_specs_chain):
-    """Compare two multisource traversals of the same DAG. In one case the DAG consists of unique
-    Spec instances, in the second case there are identical copies of nodes and edges. Traversal
-    should be equivalent when nodes are identified by dag_hash."""
-    a = abstract_specs_chain["chain-a"]
-    c = abstract_specs_chain["chain-c"]
-    kwargs = {"cover": cover, "depth_first": depth_first}
-    dag_hash = lambda s: s.dag_hash()
-
-    # Traverse DAG spanned by a unique set of Spec instances
-    first = traverse.traverse_tree([a, c], key=id, **kwargs)
-
-    # Traverse equivalent DAG with copies of Spec instances included, keyed by dag hash.
-    second = traverse.traverse_tree([a, c.copy()], key=dag_hash, **kwargs)
-
-    # Check that the same nodes are discovered at the same depth
-    node_at_depth_first = [(depth, dag_hash(edge.spec)) for (depth, edge) in first]
-    node_at_depth_second = [(depth, dag_hash(edge.spec)) for (depth, edge) in second]
-    assert node_at_depth_first == node_at_depth_second
-
-
 def test_breadth_first_versus_depth_first_printing(abstract_specs_chain):
     """Test breadth-first versus depth-first tree printing."""
     s = abstract_specs_chain["chain-a"]

@@ -563,10 +563,10 @@ def traverse_tree(
     # identical to DFS, which is much more efficient then.
     if not depth_first and cover == "edges":
         edges, parents = breadth_first_to_tree_edges(specs, deptype, key)
-        return traverse_breadth_first_tree_edges(None, edges, parents, key)
+        return traverse_breadth_first_tree_edges(None, edges, parents)
     elif not depth_first and cover == "nodes":
         edges = breadth_first_to_tree_nodes(specs, deptype, key)
-        return traverse_breadth_first_tree_nodes(None, edges, key)
+        return traverse_breadth_first_tree_nodes(None, edges)

     return traverse_edges(specs, order="pre", cover=cover, deptype=deptype, key=key, depth=True)

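Both traverse hunks concern the `key` function that identifies nodes during traversal. A self-contained sketch of why the key matters when a DAG contains equal copies of a node; `Node` and `bfs_unique` are illustrative stand-ins, while Spack keys specs by `id` or by `Spec.dag_hash()`:

from collections import deque

class Node:
    def __init__(self, name, deps=()):
        self.name, self.deps = name, list(deps)

def bfs_unique(roots, key=id):
    # Visit each node once, where "once" is defined by the key function.
    seen, order, queue = set(), [], deque(roots)
    while queue:
        n = queue.popleft()
        k = key(n)
        if k in seen:
            continue
        seen.add(k)
        order.append(n.name)
        queue.extend(n.deps)
    return order

leaf = Node("c")
a, copy_of_c = Node("a", [leaf]), Node("c")
# Keyed by id(), the copy of "c" is visited separately from the original:
assert bfs_unique([a, copy_of_c], key=id) == ["a", "c", "c"]
# Keyed by content (dag_hash in Spack), the copies collapse to one visit:
assert bfs_unique([a, copy_of_c], key=lambda n: n.name) == ["a", "c"]
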
@@ -79,7 +79,7 @@ spack:
   - openfoam
   - osu-micro-benchmarks
   - parallel
-  # - paraview
+  - paraview
   - picard
   - quantum-espresso
   - raja

@@ -85,7 +85,7 @@ spack:
   - openfoam
   - osu-micro-benchmarks
   - parallel
-  # - paraview
+  - paraview
   - picard
   - quantum-espresso
   # Build broken for gcc@7.3.1 x86_64_v4 (error: '_mm512_loadu_epi32' was not declared in this scope)

@@ -25,16 +25,16 @@ spack:
   - paraview_specs:
     - matrix:
       - - paraview +raytracing
-      - - +qt ^[virtuals=gl] glx  # GUI Support w/ GLX Rendering
-        - ~qt ^[virtuals=gl] glx  # GLX Rendering
-        - ^[virtuals=gl] osmesa  # OSMesa Rendering
+      - - +qt~osmesa  # GUI Support w/ GLX Rendering
+        - ~qt~osmesa  # GLX Rendering
+        - ~qt+osmesa  # OSMesa Rendering
   - visit_specs:
     - matrix:
-      - - visit~gui
-      - - ^[virtuals=gl] glx  # GLX Rendering
-        - ^[virtuals=gl] osmesa  # OSMesa Rendering
+      - - visit
+      - - ~gui~osmesa  # GLX Rendering
+        - ~gui+osmesa  # OSMesa Rendering
       # VisIt GUI does not work with Qt 5.14.2
-      # - +gui ^[virtuals=gl] glx  # GUI Support w/ GLX Rendering
+      # - +gui~osmesa  # GUI Support w/ GLX Rendering
   - sdk_base_spec:
     - matrix:
       - - ecp-data-vis-sdk +ascent +adios2 +cinema +darshan +faodel +hdf5 +pnetcdf

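For context, a `matrix:` entry in these stacks expands to the cross-product of its rows, so the rewritten paraview matrix still yields one spec per rendering backend. An illustrative expansion using `itertools.product` as a stand-in for Spack's own matrix logic:

from itertools import product

matrix = [["paraview +raytracing"], ["+qt~osmesa", "~qt~osmesa", "~qt+osmesa"]]
specs = [" ".join(row) for row in product(*matrix)]
assert specs == [
    "paraview +raytracing +qt~osmesa",
    "paraview +raytracing ~qt~osmesa",
    "paraview +raytracing ~qt+osmesa",
]
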
@@ -50,7 +50,7 @@ spack:
       variants: +termlib
     paraview:
       # Don't build GUI support or GLX rendering for HPC/container deployments
-      require: "@5.11 ~qt ^[virtuals=gl] osmesa"
+      require: "@5.11 ~qt+osmesa"
     python:
       version: [3.8.13]
     trilinos:

@@ -43,7 +43,7 @@ spack:
       variants: +termlib
     paraview:
       # Don't build GUI support or GLX rendering for HPC/container deployments
-      require: "@5.11 ~qt ^[virtuals=gl] osmesa"
+      require: "@5.11 ~qt+osmesa"
     python:
       version: [3.8.13]
     trilinos:

@@ -51,21 +51,21 @@ spack:

   specs:
   # CPU
-  # - adios
-  # - alquimia
-  # - aml
+  - adios
+  - alquimia
+  - aml
   - amrex
   - arborx
-  # - argobots
+  - argobots
   - ascent # ecp dav
   - axom
-  # - bolt
-  # - boost
+  - bolt
+  - boost
   - butterflypack
   - cabana
   - caliper
   - chai
-  # - charliecloud
+  - charliecloud
   - conduit
   - cp2k +mpi
   - datatransferkit
@@ -73,15 +73,15 @@ spack:
   - ecp-data-vis-sdk ~cuda ~rocm +adios2 +ascent +cinema +darshan +faodel +hdf5 ~paraview +pnetcdf +sz +unifyfs +veloc ~visit +vtkm +zfp # +visit: ?
   - exaworks
   - flecsi
-  # - flit
-  # - flux-core
+  - flit
+  - flux-core
   - fortrilinos
-  # - gasnet
+  - gasnet
   - ginkgo
-  # - globalarrays
-  # - gmp
-  # - gotcha
-  # - gptune ~mpispawn
+  - globalarrays
+  - gmp
+  - gotcha
+  - gptune ~mpispawn
   - gromacs +cp2k ^cp2k +mpi +dlaf build_system=cmake
   - h5bench
   - hdf5-vol-async
@@ -110,10 +110,10 @@ spack:
   - nco
   - netlib-scalapack
   - nrm
-  # - nvhpc
+  - nvhpc
   - omega-h
   - openfoam
-  # - openmpi
+  - openmpi
   - openpmd-api
   - papi
   - papyrus
@@ -144,35 +144,35 @@ spack:
   - sundials
   - superlu
   - superlu-dist
-  # - swig@4.0.2-fortran
+  - swig@4.0.2-fortran
   - sz3
   - tasmanian
   - tau +mpi +python +syscall
   - trilinos +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
   - turbine
-  # - umap
+  - umap
   - umpire
   - upcxx
-  # - veloc
+  - veloc
   - wannier90
   - xyce +mpi +shared +pymi +pymi_static_tpls
   # INCLUDED IN ECP DAV CPU
-  # - adios2
-  # - darshan-runtime
-  # - darshan-util
-  # - faodel
-  # - hdf5
-  # - libcatalyst
-  # - parallel-netcdf
+  - adios2
+  - darshan-runtime
+  - darshan-util
+  - faodel
+  - hdf5
+  - libcatalyst
+  - parallel-netcdf
   # - paraview
-  # - py-cinemasci
-  # - sz
-  # - unifyfs
-  # - visit # silo: https://github.com/spack/spack/issues/39538
-  # - vtk-m
-  # - zfp
-  # --
+  - py-cinemasci
+  - sz
+  - unifyfs
+  - laghos
+  # - visit # silo: https://github.com/spack/spack/issues/39538
+  - vtk-m
+  - zfp
+  # --
   # - bricks ~cuda # not respecting target=aarch64?
   # - dealii # slepc: make[1]: *** internal error: invalid --jobserver-auth string 'fifo:/tmp/GMfifo1313'.
   # - geopm # geopm: https://github.com/spack/spack/issues/38795
@@ -181,25 +181,25 @@ spack:
   # - variorum # variorum: https://github.com/spack/spack/issues/38786

   # PYTHON PACKAGES
-  # - opencv +python3
-  # - py-horovod
-  # - py-jax
-  # - py-jupyterlab
-  # - py-matplotlib
-  # - py-mpi4py
-  # - py-notebook
-  # - py-numba
-  # - py-numpy
-  # - py-openai
-  # - py-pandas
-  # - py-plotly
-  # - py-pooch
-  # - py-pytest
-  # - py-scikit-learn
-  # - py-scipy
-  # - py-seaborn
-  # - py-tensorflow
-  # - py-torch
+  - opencv +python3
+  - py-horovod
+  - py-jax
+  - py-jupyterlab
+  - py-matplotlib
+  - py-mpi4py
+  - py-notebook
+  - py-numba
+  - py-numpy
+  - py-openai
+  - py-pandas
+  - py-plotly
+  - py-pooch
+  - py-pytest
+  - py-scikit-learn
+  - py-scipy
+  - py-seaborn
+  - py-tensorflow
+  - py-torch

   # CUDA NOARCH
   - flux-core +cuda
@@ -210,6 +210,100 @@ spack:
   # - bricks +cuda # not respecting target=aarch64?
   # - legion +cuda # legion: needs NVIDIA driver

+  # CUDA 75
+  - amrex +cuda cuda_arch=75
+  - arborx +cuda cuda_arch=75 ^kokkos +wrapper
+  - cabana +cuda cuda_arch=75 ^kokkos +wrapper +cuda_lambda +cuda cuda_arch=75
+  - caliper +cuda cuda_arch=75
+  - chai +cuda cuda_arch=75 ^umpire ~shared
+  # - cp2k +mpi +cuda cuda_arch=75 # cp2k: cp2k only supports cuda_arch ('35', '37', '60', '70', '80')
+  - flecsi +cuda cuda_arch=75
+  - ginkgo +cuda cuda_arch=75
+  - gromacs +cuda cuda_arch=75
+  - heffte +cuda cuda_arch=75
+  - hpx +cuda cuda_arch=75
+  - hypre +cuda cuda_arch=75
+  - kokkos +wrapper +cuda cuda_arch=75
+  - kokkos-kernels +cuda cuda_arch=75 ^kokkos +wrapper +cuda cuda_arch=75
+  - magma +cuda cuda_arch=75
+  - mfem +cuda cuda_arch=75
+  - mgard +serial +openmp +timing +unstructured +cuda cuda_arch=75
+  - omega-h +cuda cuda_arch=75
+  - parsec +cuda cuda_arch=75
+  - petsc +cuda cuda_arch=75
+  - raja +cuda cuda_arch=75
+  - slate +cuda cuda_arch=75
+  - strumpack ~slate +cuda cuda_arch=75
+  - sundials +cuda cuda_arch=75
+  - superlu-dist +cuda cuda_arch=75
+  - tasmanian +cuda cuda_arch=75
+  - trilinos +cuda cuda_arch=75
+  - umpire ~shared +cuda cuda_arch=75
+  # INCLUDED IN ECP DAV CUDA
+  - adios2 +cuda cuda_arch=75
+  # - paraview +cuda cuda_arch=75
+  - vtk-m +cuda cuda_arch=75
+  - zfp +cuda cuda_arch=75
+  # --
+  # - ascent +cuda cuda_arch=75 # ascent: https://github.com/spack/spack/issues/38045
+  # - axom +cuda cuda_arch=75 # axom: https://github.com/spack/spack/issues/29520
+  # - cusz +cuda cuda_arch=75 # cusz: https://github.com/spack/spack/issues/38787
+  # - dealii +cuda cuda_arch=75 # slepc: make[1]: *** internal error: invalid --jobserver-auth string 'fifo:/tmp/GMfifo1313'.
+  # - ecp-data-vis-sdk +adios2 +hdf5 +vtkm +zfp +paraview +cuda cuda_arch=75 # embree: https://github.com/spack/spack/issues/39534
+  # - lammps +cuda cuda_arch=75 # lammps: needs NVIDIA driver
+  # - lbann +cuda cuda_arch=75 # lbann: https://github.com/spack/spack/issues/38788
+  # - libpressio +bitgrooming +bzip2 +fpzip +hdf5 +libdistributed +lua +openmp +python +sz +sz3 +unix +zfp +json +remote +netcdf ~cusz +mgard +cuda cuda_arch=75 # libpressio: CMake Error at CMakeLists.txt:498 (find_library): Could not find CUFile_LIBRARY using the following names: cufile ; +cusz: https://github.com/spack/spack/issues/38787
+  # - py-torch +cuda cuda_arch=75 # skipped, installed by other means
+  # - slepc +cuda cuda_arch=75 # slepc: make[1]: *** internal error: invalid --jobserver-auth string 'fifo:/tmp/GMfifo1313'.
+  # - upcxx +cuda cuda_arch=75 # upcxx: needs NVIDIA driver
+
+  # CUDA 80
+  - amrex +cuda cuda_arch=80
+  - arborx +cuda cuda_arch=80 ^kokkos +wrapper
+  - cabana +cuda cuda_arch=80 ^kokkos +wrapper +cuda_lambda +cuda cuda_arch=80
+  - caliper +cuda cuda_arch=80
+  - chai +cuda cuda_arch=80 ^umpire ~shared
+  # - cp2k +mpi +cuda cuda_arch=80 # cp2k: Error: KeyError: 'Point environment variable LIBSMM_PATH to the absolute path of the libsmm.a file'
+  - flecsi +cuda cuda_arch=80
+  - ginkgo +cuda cuda_arch=80
+  - gromacs +cuda cuda_arch=80
+  - heffte +cuda cuda_arch=80
+  - hpx +cuda cuda_arch=80
+  - hypre +cuda cuda_arch=80
+  - kokkos +wrapper +cuda cuda_arch=80
+  - kokkos-kernels +cuda cuda_arch=80 ^kokkos +wrapper +cuda cuda_arch=80
+  - magma +cuda cuda_arch=80
+  - mfem +cuda cuda_arch=80
+  - mgard +serial +openmp +timing +unstructured +cuda cuda_arch=80
+  - omega-h +cuda cuda_arch=80
+  - parsec +cuda cuda_arch=80
+  - petsc +cuda cuda_arch=80
+  - raja +cuda cuda_arch=80
+  - slate +cuda cuda_arch=80
+  - strumpack ~slate +cuda cuda_arch=80
+  - sundials +cuda cuda_arch=80
+  - superlu-dist +cuda cuda_arch=80
+  - tasmanian +cuda cuda_arch=80
+  - trilinos +cuda cuda_arch=80
+  - umpire ~shared +cuda cuda_arch=80
+  # INCLUDED IN ECP DAV CUDA
+  - adios2 +cuda cuda_arch=80
+  # - paraview +cuda cuda_arch=80
+  - vtk-m +cuda cuda_arch=80
+  - zfp +cuda cuda_arch=80
+  # --
+  # - ascent +cuda cuda_arch=80 # ascent: https://github.com/spack/spack/issues/38045
+  # - axom +cuda cuda_arch=80 # axom: https://github.com/spack/spack/issues/29520
+  # - cusz +cuda cuda_arch=80 # cusz: https://github.com/spack/spack/issues/38787
+  # - dealii +cuda cuda_arch=80 # slepc: make[1]: *** internal error: invalid --jobserver-auth string 'fifo:/tmp/GMfifo1313'.
+  # - ecp-data-vis-sdk +adios2 +hdf5 +vtkm +zfp +paraview +cuda cuda_arch=80 # embree: https://github.com/spack/spack/issues/39534
+  # - lammps +cuda cuda_arch=80 # lammps: needs NVIDIA driver
+  # - lbann +cuda cuda_arch=80 # lbann: https://github.com/spack/spack/issues/38788
+  # - libpressio +bitgrooming +bzip2 +fpzip +hdf5 +libdistributed +lua +openmp +python +sz +sz3 +unix +zfp +json +remote +netcdf ~cusz +mgard +cuda cuda_arch=80 # libpressio: CMake Error at CMakeLists.txt:498 (find_library): Could not find CUFile_LIBRARY using the following names: cufile ; +cusz: https://github.com/spack/spack/issues/38787
+  # - py-torch +cuda cuda_arch=80 # skipped, installed by other means
+  # - slepc +cuda cuda_arch=80 # slepc: make[1]: *** internal error: invalid --jobserver-auth string 'fifo:/tmp/GMfifo1313'.
+  # - upcxx +cuda cuda_arch=80 # upcxx: needs NVIDIA driver
+
+  # CUDA 90
+  - amrex +cuda cuda_arch=90
+  - arborx +cuda cuda_arch=90 ^kokkos +wrapper

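The three CUDA blocks added above differ only in the `cuda_arch` value. Purely as an illustration of that structure (the stack file itself is maintained by hand, not generated):

from itertools import product

packages = ["amrex", "kokkos +wrapper", "raja"]  # abbreviated list
for arch, pkg in product((75, 80, 90), packages):
    print(f"  - {pkg} +cuda cuda_arch={arch}")
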
@@ -168,6 +168,7 @@ spack:
   - hdf5
   - libcatalyst
   - parallel-netcdf
+  - paraview
   - py-cinemasci
   - sz
   - unifyfs

@@ -21,7 +21,7 @@ spack:
       variants: threads=openmp
     paraview:
       # Don't build GUI support or GLX rendering for HPC/container deployments
-      require: "@5.11 ~qt ^[virtuals=gl] osmesa"
+      require: "@5.11 ~qt+osmesa"

     # ROCm 5.4.3
     comgr:

@@ -52,7 +52,7 @@ spack:
       version: [11.8.0]
     paraview:
       # Don't build GUI support or GLX rendering for HPC/container deployments
-      require: "@5.11 ~qt ^[virtuals=gl] osmesa"
+      require: "@5.11 ~qt+osmesa"

   specs:
   # CPU

@@ -31,7 +31,7 @@ spack:
       variants: threads=openmp
     paraview:
       # Don't build GUI support or GLX rendering for HPC/container deployments
-      require: "@5.11 ~qt ^[virtuals=gl] osmesa"
+      require: "@5.11 ~qt+osmesa"
     trilinos:
       require: +amesos +amesos2 +anasazi +aztec +boost +epetra +epetraext
         +ifpack +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu

@@ -1,16 +0,0 @@
-# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
-# Spack Project Developers. See the top-level COPYRIGHT file for details.
-#
-# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-from spack.package import *
-
-
-class Sombrero(Package):
-    """Simple package with a dependency on an external spec."""
-
-    homepage = "http://www.example.com"
-    url = "http://www.example.com/b-1.0.tar.gz"
-
-    version("1.0", md5="0123456789abcdef0123456789abcdef")
-    depends_on("externaltool")

@@ -37,6 +37,7 @@ class _7zip(SourceforgePackage, Package):

     conflicts("platform=linux")
     conflicts("platform=darwin")
+    conflicts("platform=cray")

     # TODO: Patch on WinSDK version 10.0.20348.0 when SDK is introduced to Spack
     # This patch solves a known bug in that SDK version on the 7zip side

@@ -36,40 +36,6 @@
 }

 _versions = {
-    "24.04": {
-        "RHEL-7": (
-            "064c3ecfd71cba3d8bf639448e899388f58eb7faef4b38f3c1aace625ace8b1e",
-            "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/24-04/arm-compiler-for-linux_24.04_RHEL-7_aarch64.tar",
-        ),
-        "RHEL-8": (
-            "38f46a3549667d0fbccd947653d3a1a56b630d3bbb1251888c674c463f00dac3",
-            "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/24-04/arm-compiler-for-linux_24.04_RHEL-8_aarch64.tar",
-        ),
-        "RHEL-9": (
-            "d335db82c8310e1d79c96dc09a19e4d509c5ab17eb6027214bb79cfc75d8229e",
-            "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/24-04/arm-compiler-for-linux_24.04_RHEL-9_aarch64.tar",
-        ),
-        "SLES-15": (
-            "6f2e090efcd8da2cbeaf63272fac5917f637713f1e86d73cde2ad7268e3a05a2",
-            "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/24-04/arm-compiler-for-linux_24.04_SLES-15_aarch64.tar",
-        ),
-        "Ubuntu-20.04": (
-            "0d782e6a69a11f90bf3b392313c885a2376c5761f227bf2f68e34e9848ec8e97",
-            "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/24-04/arm-compiler-for-linux_24.04_Ubuntu-20.04_aarch64.tar",
-        ),
-        "Ubuntu-22.04": (
-            "0bab2e89f0a2359746f89a01251dca763305c5b0dee95cf47b0968dd1cb5f6f6",
-            "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/24-04/arm-compiler-for-linux_24.04_Ubuntu-22.04_aarch64.tar",
-        ),
-        "AmazonLinux-2": (
-            "cf0bebe2d7123749c919a5f4e36100ad21f08ffbad3b53e477205c08ae973a2d",
-            "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/24-04/arm-compiler-for-linux_24.04_AmazonLinux-2_aarch64.tar",
-        ),
-        "AmazonLinux-2023": (
-            "035dae8c41a1ac86c8885837978cb712306aa75dc5d26d17aca843b84eaee9f4",
-            "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/24-04/arm-compiler-for-linux_24.04_AmazonLinux-2023_aarch64.tar",
-        ),
-    },
     "23.10": {
         "RHEL-7": (
             "c3bd4df3e5f6c97369237b0067e0a421dceb9c167d73f22f3da87f5025258314",
@@ -226,22 +192,34 @@ def get_armpl_version_to_3(spec):


 def get_armpl_prefix(spec):
-    ver = get_armpl_version_to_3(spec)
-    os = get_os(spec.version.string)
     if spec.version.string.startswith("22."):
-        return join_path(spec.prefix, f"armpl-{ver}_AArch64_{os}_arm-linux-compiler_aarch64-linux")
+        return join_path(
+            spec.prefix,
+            "armpl-{}_AArch64_{}_arm-linux-compiler_aarch64-linux".format(
+                get_armpl_version_to_3(spec), get_os(spec.version.string)
+            ),
+        )
     else:
-        return join_path(spec.prefix, f"armpl-{ver}_{os}_arm-linux-compiler")
+        return join_path(
+            spec.prefix,
+            "armpl-{}_{}_arm-linux-compiler".format(
+                get_armpl_version_to_3(spec), get_os(spec.version.string)
+            ),
+        )


 def get_acfl_prefix(spec):
-    os = get_os(spec.version.string)
     if spec.version.string.startswith("22."):
         return join_path(
-            spec.prefix, f"arm-linux-compiler-{spec.version}_Generic-AArch64_{os}_aarch64-linux"
+            spec.prefix,
+            "arm-linux-compiler-{0}_Generic-AArch64_{1}_aarch64-linux".format(
+                spec.version, get_os(spec.version.string)
+            ),
         )
     else:
-        return join_path(spec.prefix, f"arm-linux-compiler-{spec.version}_{os}")
+        return join_path(
+            spec.prefix, f"arm-linux-compiler-{spec.version}_{get_os(spec.version.string)}"
+        )


 def get_gcc_prefix(spec):
@@ -249,16 +227,6 @@ def get_gcc_prefix(spec):
     return join_path(spec.prefix, next(dir for dir in dirlist if dir.startswith("gcc")))


-def get_armpl_suffix(spec):
-    suffix = ""
-    if spec.satisfies("@24:"):
-        suffix += "_ilp64" if spec.satisfies("+ilp64") else "_lp64"
-    else:
-        suffix += "_ilp64" if spec.satisfies("+ilp64") else ""
-    suffix += "_mp" if spec.satisfies("threads=openmp") else ""
-    return suffix
-
-
 class Acfl(Package, CompilerPackage):
     """Arm Compiler combines the optimized tools and libraries from Arm
     with a modern LLVM-based compiler framework.
@@ -267,7 +235,7 @@ class Acfl(Package, CompilerPackage):
     homepage = "https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Linux"
     url = "https://developer.arm.com/-/media/Files/downloads/hpc/arm-compiler-for-linux/23-10/arm-compiler-for-linux_23.10_Ubuntu-22.04_aarch64.tar"

-    maintainers("paolotricerri")
+    maintainers("annop-w")

     # Build Versions
     for ver, packages in _versions.items():
@@ -342,7 +310,10 @@ def fortran(self):

     @property
     def lib_suffix(self):
-        return get_armpl_suffix(self.spec)
+        suffix = ""
+        suffix += "_ilp64" if self.spec.satisfies("+ilp64") else ""
+        suffix += "_mp" if self.spec.satisfies("threads=openmp") else ""
+        return suffix

     @property
     def blas_libs(self):
@@ -418,9 +389,8 @@ def setup_run_environment(self, env):
     def check_install(self):
         arm_dir = get_acfl_prefix(self.spec)
         armpl_dir = get_armpl_prefix(self.spec)
-        suffix = get_armpl_suffix(self.spec)
         gcc_dir = get_gcc_prefix(self.spec)
-        armpl_example_dir = join_path(armpl_dir, f"examples{suffix}")
+        armpl_example_dir = join_path(armpl_dir, "examples")
         # run example makefile
         make(
             "-C",
@@ -429,7 +399,6 @@ def check_install(self):
             "F90=" + self.fortran,
             "CPATH=" + join_path(arm_dir, "include"),
-            "COMPILER_PATH=" + gcc_dir,
             "ARMPL_DIR=" + armpl_dir,
         )
         # clean up
         make("-C", armpl_example_dir, "clean")

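The restored get_armpl_prefix/get_acfl_prefix encode the ArmPL directory layout directly in format strings. A sketch of the naming scheme they assume; `armpl_dirname` is hypothetical, not part of the package, and the version/OS values are illustrative:

def armpl_dirname(version, os_name):
    # 22.x bundles embed the architecture triple; later releases do not.
    ver3 = ".".join(version.split(".")[:3])  # what get_armpl_version_to_3 yields
    if version.startswith("22."):
        return f"armpl-{ver3}_AArch64_{os_name}_arm-linux-compiler_aarch64-linux"
    return f"armpl-{ver3}_{os_name}_arm-linux-compiler"

assert armpl_dirname("22.0.1", "Ubuntu-20.04") == (
    "armpl-22.0.1_AArch64_Ubuntu-20.04_arm-linux-compiler_aarch64-linux"
)
assert armpl_dirname("23.10", "RHEL-8") == "armpl-23.10_RHEL-8_arm-linux-compiler"
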
@@ -42,8 +42,6 @@ class Acts(CMakePackage, CudaPackage):
     # Supported Acts versions
     version("main", branch="main")
     version("master", branch="main", deprecated=True)  # For compatibility
-    version("35.0.0", commit="352b423ec31934f825deb9897780246d60ffc44e", submodules=True)
-    version("34.1.0", commit="8e1b7a659d912cd98db9d700906ff59e708da574", submodules=True)
     version("34.0.0", commit="daafd83adf0ce50f9667f3c9d4791a459e39fd1b", submodules=True)
     version("33.1.0", commit="00591a593a648430820e980b031301d25c18f1c7", submodules=True)
     version("33.0.0", commit="f6ed9013e76120137ae456583a04b554d88d9452", submodules=True)
@@ -200,13 +198,7 @@ class Acts(CMakePackage, CudaPackage):
         "examples",
         default=False,
         description="Build the examples",
-        when="@17:34 +fatras +identification +json +tgeo",
-    )
-    variant(
-        "examples",
-        default=False,
-        description="Build the examples",
-        when="@35: +fatras +json +tgeo",
+        when="@17: +fatras +identification +json +tgeo",
     )
     variant("integration_tests", default=False, description="Build the integration tests")
     variant("unit_tests", default=False, description="Build the unit tests")
@@ -241,9 +233,7 @@ class Acts(CMakePackage, CudaPackage):
     )
     variant("fatras_geant4", default=False, description="Build Geant4 Fatras package")
     variant("geomodel", default=False, description="Build GeoModel plugin", when="@33:")
-    variant(
-        "identification", default=False, description="Build the Identification plugin", when="@:34"
-    )
+    variant("identification", default=False, description="Build the Identification plugin")
     variant("json", default=False, description="Build the Json plugin")
     variant("legacy", default=False, description="Build the Legacy package")
     variant("mlpack", default=False, description="Build MLpack plugin", when="@25:31")
@@ -262,11 +252,8 @@ class Acts(CMakePackage, CudaPackage):
         description="Enable memory profiling using gperftools",
         when="@19.3:",
     )
-    variant("sycl", default=False, description="Build the SyCL plugin", when="@1:34")
-    variant(
-        "tgeo", default=False, description="Build the TGeo plugin", when="@:34 +identification"
-    )
-    variant("tgeo", default=False, description="Build the TGeo plugin", when="@35:")
+    variant("sycl", default=False, description="Build the SyCL plugin", when="@1:")
+    variant("tgeo", default=False, description="Build the TGeo plugin", when="+identification")

     # Variants that only affect Acts examples for now
     variant(
@@ -491,8 +478,6 @@ def plugin_cmake_variant(plugin_name, spack_variant):
         cuda_arch = spec.variants["cuda_arch"].value
         if cuda_arch != "none":
             args.append(f"-DCUDA_FLAGS=-arch=sm_{cuda_arch[0]}")
-            arch_str = ";".join(self.spec.variants["cuda_arch"].value)
-            args.append(self.define("CMAKE_CUDA_ARCHITECTURES", arch_str))

         args.append(self.define_from_variant("CMAKE_CXX_STANDARD", "cxxstd"))

@@ -26,11 +26,10 @@ class Adios2(CMakePackage, CudaPackage, ROCmPackage):

     version("master", branch="master")
     version(
-        "2.10.1",
-        sha256="ce776f3a451994f4979c6bd6d946917a749290a37b7433c0254759b02695ad85",
+        "2.10.0",
+        sha256="e5984de488bda546553dd2f46f047e539333891e63b9fe73944782ba6c2d95e4",
         preferred=True,
     )
-    version("2.10.0", sha256="e5984de488bda546553dd2f46f047e539333891e63b9fe73944782ba6c2d95e4")
     version("2.9.2", sha256="78309297c82a95ee38ed3224c98b93d330128c753a43893f63bbe969320e4979")
     version("2.9.1", sha256="ddfa32c14494250ee8a48ef1c97a1bf6442c15484bbbd4669228a0f90242f4f9")
     version("2.9.0", sha256="69f98ef58c818bb5410133e1891ac192653b0ec96eb9468590140f2552b6e5d1")
@@ -149,7 +148,7 @@ class Adios2(CMakePackage, CudaPackage, ROCmPackage):
     conflicts("+rocm", when="~kokkos", msg="ADIOS2 does not support HIP without Kokkos")
     conflicts("+sycl", when="~kokkos", msg="ADIOS2 does not support SYCL without Kokkos")

-    for _platform in ["linux", "darwin"]:
+    for _platform in ["linux", "darwin", "cray"]:
         depends_on("pkgconfig", type="build", when=f"platform={_platform}")
         variant(
             "pic",

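The adios2 hunk moves `preferred=True` from 2.10.1 back to 2.10.0. A toy illustration of what that flag changes during version selection; `choose` is a stand-in, the real logic lives in Spack's concretizer:

def choose(versions, preferred=None):
    # A preferred version wins even when a newer one is available.
    if preferred in versions:
        return preferred
    return max(versions, key=lambda v: tuple(map(int, v.split("."))))

versions = ["2.9.2", "2.10.0", "2.10.1"]
assert choose(versions) == "2.10.1"                       # default: highest
assert choose(versions, preferred="2.10.0") == "2.10.0"   # preferred wins
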
@@ -143,6 +143,7 @@ class Amber(Package, CudaPackage):
     depends_on("cuda@7.5.18", when="@:16+cuda")

     # conflicts
+    conflicts("+x11", when="platform=cray", msg="x11 amber applications not available for cray")
     conflicts("+openmp", when="%clang", msg="OpenMP not available for the clang compiler")
     conflicts(
         "+openmp", when="%apple-clang", msg="OpenMP not available for the Apple clang compiler"

@@ -13,14 +13,13 @@ class Amdsmi(CMakePackage):
     applications to monitor and control AMD device."""

     homepage = "https://github.com/ROCm/amdsmi"
-    url = "https://github.com/ROCm/amdsmi/archive/refs/tags/rocm-6.1.1.tar.gz"
+    url = "https://github.com/ROCm/amdsmi/archive/refs/tags/rocm-6.0.2.tar.gz"

     tags = ["rocm"]
     maintainers("srekolam", "renjithravindrankannath")
     libraries = ["libamd_smi"]

     license("MIT")
-    version("6.1.1", sha256="10ece6b1ca8bb36ab3ae987fc512838f30a92ab788a2200410e9c1707fe0166b")
     version("6.1.0", sha256="5bd1f150a2191b1703ff2670e40f6fed730f59f155623d6e43b7f64c39ae0967")
     version("6.0.2", sha256="aeadf07750def0325a0eaa29e767530b2ec94f3d45dc3b7452fd7a2493769428")
     version("6.0.0", sha256="2626e3af9d60dec245c61af255525a0c0841a73fb7ec2836477c0ce5793de39c")

@@ -3,31 +3,28 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

-import os
-
 from spack.package import *


 class Amduprof(Package):
-    """AMD uProf ("MICRO-prof") is a software profiling analysis tool for x86
-    applications running on Windows, Linux and FreeBSD operating systems and
-    provides event information unique to the AMD "Zen"-based processors and AMD
-    Instinct(tm) MI Series accelerators. AMD uProf enables the developer to better
-    understand the limiters of application performance and evaluate
-    improvements."""
+    """AMD uProf ("MICRO-prof") is a software profiling analysis tool for
+    x86 applications running on Windows, Linux and FreeBSD operating systems
+    and provides event information unique to the AMD "Zen"-based processors
+    and AMD Instinct(tm) MI Series accelerators.
+    """

     homepage = "https://developer.amd.com/amd-uprof/"
-    url = f"file://{os.getcwd()}/AMDuProf_Linux_x64_4.2.850.tar.bz2"
-    manual_download = True
+    url = "https://download.amd.com/developer/eula/uprof/AMDuProf_Linux_x64_4.2.850.tar.bz2"

-    maintainers("amd-toolchain-support")
+    maintainers("zzzoom")

     version("4.2.850", sha256="f2d7c4eb9ec9c32845ff8f19874c1e6bcb0fa8ab2c12e73addcbf23a6d1bd623")

     depends_on("binutils@2.27:", type="run")
     # TODO: build Power Profiling driver on Linux
     # TODO: ROCm for GPU tracing and profiling
     # TODO: BCC and eBPF for OS tracing

     conflicts("platform=darwin")
     requires("target=x86_64:", msg="AMD uProf available only on x86_64")

     def install(self, spec, prefix):
         install_tree(".", prefix)

@@ -19,6 +19,7 @@ class AppleLibunwind(Package):
     # Darwin must be expressed by listing a conflict with every
     # platform that isn't Darwin/macOS
     conflicts("platform=linux")
+    conflicts("platform=cray")

     # Override the fetcher method to throw a useful error message;
     # avoids GitHub issue (#7061) in which the opengl placeholder

@@ -17,6 +17,7 @@ class AppleLibuuid(BundlePackage):

     # Only supported on 'platform=darwin'
     conflicts("platform=linux")
+    conflicts("platform=cray")
     conflicts("platform=windows")

     @property

@@ -4,9 +4,6 @@
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)


-from glob import glob
-from os.path import basename
-
 from spack.package import *
 from spack.pkg.builtin.singularityce import SingularityBase

@@ -36,8 +33,6 @@ class Apptainer(SingularityBase):
     )

     version("main", branch="main")
-    version("1.3.1", sha256="6956c689c4a8f148789c5c34b33c15ad8f3460b4cee3f48022119fd872eacee9")
-    version("1.2.5", sha256="606b67ef97683e1420401718687d258b1034fdf2edae72eeacd0828dffbfc2c2")
     version("1.1.9", sha256="c615777539154288542cf393d3fd44c04ccb3260bc6330dc324d4e4ebe902bfa")
     version("1.1.7", sha256="e6d3956a26c3965703402e17f153ba07f59bf710068806462b314d2d04e825e7")
     version("1.1.6", sha256="5f32d305279a51ce8bdbe69e733c4ac12b1efdcb77758fab8ec9463e96a8fd82")
@@ -46,15 +41,8 @@ class Apptainer(SingularityBase):
     version("1.1.3", sha256="c7bf7f4d5955e1868739627928238d02f94ca9fd0caf110b0243d65548427899")
     version("1.0.2", sha256="2d7a9d0a76d5574459d249c3415e21423980d9154ce85e8c34b0600782a7dfd3")

-    depends_on("e2fsprogs@1.47:+fuse2fs", type="run")
     depends_on("go@1.17.5:", when="@1.1.0:")
-    depends_on("go@1.19:", when="@1.2:")
-    depends_on("go@1.20:", when="@1.3:")
-    depends_on("gocryptfs@2.4:", type="run", when="@1.3:")
-    depends_on("squashfuse", type="run")
-    depends_on("squashfuse@0.5.1:", type="run", when="@1.3:")
-    depends_on("fuse-overlayfs", type="run")
-    depends_on("fuse-overlayfs@1.13:", type="run", when="@1.3:")

     singularity_org = "apptainer"
     singularity_name = "apptainer"
@@ -76,19 +64,3 @@ def flag_handler(self, name, flags):
         # Certain go modules this build pulls in cannot be built with anything
         # other than -O0. Best to just discard any injected flags.
         return (None, flags, None)
-
-    # They started vendoring the fuse bits and assume they'll be in the
-    # libexec/apptainer prefix as a result. When singularity is run with
-    # suid it doesn't search the user's $PATH for security reasons.
-    # Since we don't use the vendored deps and instead install them in
-    # their own prefixes they are not found by default.
-    # This is likely only relevant for 1.3:, but it should be fine everywhere
-    @run_after("install")
-    def fix_binary_path(self):
-        for i in [
-            s for s in ["e2fsprogs", "gocryptfs", "squashfuse", "fuse-overlayfs"] if s in self.spec
-        ]:
-            for binary in glob(join_path(self.spec[i].prefix.bin, "*")):
-                symlink(
-                    binary, join_path(self.spec.prefix.libexec.apptainer.bin, basename(binary))
-                )

@@ -8,20 +8,6 @@
 from spack.package import *

 _versions = {
-    "6.1.1": {
-        "apt": (
-            "faa5dae914fc63f0c8d0c2be28b7ec502db487004bdff0fe88dd15432efc5401",
-            "https://repo.radeon.com/rocm/apt/6.1.1/pool/main/h/hsa-amd-aqlprofile/hsa-amd-aqlprofile_1.0.0.60101.60101-90~20.04_amd64.deb",
-        ),
-        "yum": (
-            "cc247e15ceff625c94d6c7104ffea3990a4acbcd2f9114914ab7ab829fae4aeb",
-            "https://repo.radeon.com/rocm/yum/6.1.1/main/hsa-amd-aqlprofile-1.0.0.60101.60101-90.el7.x86_64.rpm",
-        ),
-        "zyp": (
-            "9af82841be1765d6334b06a463583570653b6a36d0de29cfc00c5c4b6560b956",
-            "https://repo.radeon.com/rocm/zyp/6.1.1/main/hsa-amd-aqlprofile-1.0.0.60101.60101-sles154.90.x86_64.rpm",
-        ),
-    },
     "6.1.0": {
         "apt": (
             "0ef862503245f12721384443f8347528f3d5c2c7762289c770521f3235ba36c9",

@@ -16,7 +16,6 @@ class Armadillo(CMakePackage):

     license("Apache-2.0")

-    version("12.8.3", sha256="2922589f6387796504b340da6bb954bef3d87574c298515893289edd2d890151")
     version("12.8.2", sha256="03b62f8c09e4f5d74643b478520741b8e27b55e7e4525978fcae2f5d791ac3bf")
     version("12.8.1", sha256="2781dd3a6cc5f9a49c91a4519dde2b1c24335a5bfe0cc1c9881b6363142452b4")
     version("12.4.0", sha256="9905282781ced3f99769b0e45a705ecb50192ca1622300707b3302ea167dc883")

@@ -21,7 +21,7 @@
    "amzn2023": "RHEL-7",
}

_os_map_before_24 = {
_os_map = {
    "ubuntu20.04": "Ubuntu-20.04",
    "ubuntu22.04": "Ubuntu-22.04",
    "sles15": "SLES-15",
@@ -36,27 +36,7 @@
    "amzn2023": "AmazonLinux-2023",
}

_os_pkg_map = {
    "ubuntu20.04": "deb",
    "ubuntu22.04": "deb",
    "sles15": "rpm",
    "centos7": "rpm",
    "centos8": "rpm",
    "rhel7": "rpm",
    "rhel8": "rpm",
    "rhel9": "rpm",
    "rocky8": "rpm",
    "rocky9": "rpm",
    "amzn2": "rpm",
    "amzn2023": "rpm",
}

_versions = {
    "24.04": {
        "deb": ("a323074cd08af82f4d79988cc66088b18e47dea4b93323b1b8a0f994f769f2f0"),
        "macOS": ("228bf3a2c25dbd45c2f89c78f455ee3c7dfb25e121c20d2765138b5174e688dc"),
        "rpm": ("d3917523034cf5a35e4f31f9a8bf4e53e7cc97892e89739d5757cb65ce40dc2e"),
    },
    "23.10_gcc-12.2": {
        "RHEL-7": ("e5e2c69ad281a676f2a06c835fbf31d4f9fdf46aa3f3f7c8aafff46985f64902"),
        "RHEL-8": ("cc0f3572ead93d1e31797b7a39a40cff3414878df9bd24a452bf4877dc35ca4c"),
@@ -247,32 +227,28 @@
}


def get_os_or_pkg_manager(ver):
def get_os(ver):
    platform = spack.platforms.host()
    if platform.name == "darwin":
        return "macOS"
    if ver.startswith("22."):
        return _os_map_before_23.get(platform.default_os, "")
    elif ver.startswith("23."):
        return _os_map_before_24.get(platform.default_os, "RHEL-7")
    else:
        return _os_pkg_map.get(platform.default_os, "rpm")
    return _os_map.get(platform.default_os, "RHEL-7")


def get_package_url_before_24(base_url, version):
def get_package_url(version):
    base_url = "https://developer.arm.com/-/media/Files/downloads/hpc/arm-performance-libraries/"
    armpl_version = version.split("_")[0]
    armpl_version_dashed = armpl_version.replace(".", "-")
    compiler_version = version.split("_", 1)[1]
    os = get_os_or_pkg_manager(armpl_version)
    os = get_os(armpl_version)
    if os == "macOS":
        if armpl_version.startswith("23.06"):
            return (
                f"{base_url}/{armpl_version_dashed}/"
                + f"armpl_{armpl_version}_{compiler_version}.dmg"
            )
            return f"{base_url}{armpl_version_dashed}/armpl_{armpl_version}_{compiler_version}.dmg"
        else:
            filename = f"arm-performance-libraries_{armpl_version}_macOS.dmg"
            return f"{base_url}/{armpl_version_dashed}/macos/{filename}"
            return f"{base_url}{armpl_version_dashed}/macos/{filename}"
    filename = f"arm-performance-libraries_{armpl_version}_{os}_{compiler_version}.tar"
    os_short = ""
    if armpl_version.startswith("22.0."):
@@ -281,51 +257,11 @@ def get_package_url_before_24(base_url, version):
        os_short = os.split(".")[0].lower()
        if "amazonlinux" in os_short:
            os_short = os_short.replace("amazonlinux", "al")
    return f"{base_url}/{armpl_version_dashed}/{os_short}/{filename}"


def get_package_url_from_24(base, version):
    pkg_system = get_os_or_pkg_manager(version)
    os = "macOS" if pkg_system == "macOS" else "linux"

    extension = "tgz" if pkg_system == "macOS" else "tar"

    full_name_library = f"arm-performance-libraries_{version}_{pkg_system}"

    if pkg_system != "macOS":
        full_name_library = f"{full_name_library}_gcc"
    file_name = f"{full_name_library}.{extension}"

    vn = version.replace(".", "-")
    url_parts = f"{base}/{vn}/{os}/{file_name}"
    return url_parts


def get_package_url(version):
    base_url = "https://developer.arm.com/-/media/Files/downloads/hpc/arm-performance-libraries"
    if version[:2] >= "24":
        return get_package_url_from_24(base_url, version)
    else:
        return get_package_url_before_24(base_url, version)
    return f"{base_url}{armpl_version_dashed}/{os_short}/{filename}"

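For orientation, here is a minimal, self-contained sketch of the URL dispatch above. The base URL and filename patterns follow the diff; the hard-coded Ubuntu 22.04 host and the sample version strings are illustrative assumptions, not part of the package:

    # Hedged sketch of the pre/post-24 URL dispatch; not the package's actual code.
    def pick_url(version: str) -> str:
        base = "https://developer.arm.com/-/media/Files/downloads/hpc/arm-performance-libraries"
        armpl = version.split("_")[0]
        dashed = armpl.replace(".", "-")
        if armpl[:2] >= "24":
            # 24.x and later ship one Linux tarball per package manager, gcc-only
            return f"{base}/{dashed}/linux/arm-performance-libraries_{armpl}_deb_gcc.tar"
        # pre-24 releases ship one tarball per OS/compiler pair (Ubuntu 22.04 assumed here)
        compiler = version.split("_", 1)[1]
        return f"{base}/{dashed}/ubuntu-22/arm-performance-libraries_{armpl}_Ubuntu-22.04_{compiler}.tar"

    print(pick_url("24.04"))             # matches the 24.04 url in the hunk below
    print(pick_url("23.04.1_gcc-12.2"))  # matches the 23.04.1 url in the hunk below
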

def get_armpl_prefix(spec):
    armpl_dir = [
        d
        for d in os.listdir(spec.prefix)
        if os.path.isdir(os.path.join(spec.prefix, d)) and d.startswith("armpl_")
    ][0]
    return os.path.join(spec.prefix, armpl_dir)


def get_armpl_suffix(spec):
    suffix = ""
    if spec.satisfies("@24:"):
        suffix += "_ilp64" if spec.satisfies("+ilp64") else "_lp64"
    else:
        suffix += "_ilp64" if spec.satisfies("+ilp64") else ""
    suffix += "_mp" if spec.satisfies("threads=openmp") else ""
    return suffix
    return os.path.join(spec.prefix, "armpl_" + spec.version.string)


class ArmplGcc(Package):
@@ -333,12 +269,12 @@ class ArmplGcc(Package):
    high-performance computing applications on Arm processors."""

    homepage = "https://developer.arm.com/tools-and-software/server-and-hpc/downloads/arm-performance-libraries"
    url = "https://developer.arm.com/-/media/Files/downloads/hpc/arm-performance-libraries/24-04/linux/arm-performance-libraries_24.04_deb_gcc.tar"
    url = "https://developer.arm.com/-/media/Files/downloads/hpc/arm-performance-libraries/23-04-1/ubuntu-22/arm-performance-libraries_23.04.1_Ubuntu-22.04_gcc-12.2.tar"

    maintainers("paolotricerri")
    maintainers("annop-w")

    for ver, packages in _versions.items():
        key = get_os_or_pkg_manager(ver)
        key = get_os(ver)
        sha256sum = packages.get(key)
        url = get_package_url(ver)
        if sha256sum:
@@ -405,17 +341,10 @@ def install(self, spec, prefix):
            hdiutil = which("hdiutil")
            # Mount image
            mountpoint = os.path.join(self.stage.path, "mount")
            if spec.satisfies("@:23"):
                dmg_file = self.stage.archive_file
            else:
                # The archive file only extracts to one .dmg file
                dmg_file = os.path.join(
                    self.stage.source_path, os.listdir(self.stage.source_path)[0]
                )
            hdiutil("attach", "-mountpoint", mountpoint, dmg_file)
            hdiutil("attach", "-mountpoint", mountpoint, self.stage.archive_file)
            try:
                # Run installer
                exe_name = [f for f in os.listdir(mountpoint) if f.endswith(".sh")][0]
                exe_name = f"armpl_{spec.version.string}_install.sh"
                installer = Executable(os.path.join(mountpoint, exe_name))
                installer("-y", f"--install_dir={prefix}")
            finally:
@@ -430,21 +359,15 @@ def install(self, spec, prefix):
        with when("@23:"):
            armpl_version = spec.version.string.split("_")[0]

        if spec.satisfies("@:23"):
            exe = Executable(
                f"./arm-performance-libraries_{armpl_version}_"
                + f"{get_os_or_pkg_manager(armpl_version)}.sh"
            )
        else:
            package_type = (
                "deb" if spack.platforms.host().default_os.startswith("ubuntu") else "rpm"
            )
            exe = Executable(f"./arm-performance-libraries_{armpl_version}_{package_type}.sh")
        exe = Executable(f"./arm-performance-libraries_{armpl_version}_{get_os(armpl_version)}.sh")
        exe("--accept", "--force", "--install-to", prefix)

    @property
    def lib_suffix(self):
        return get_armpl_suffix(self.spec)
        suffix = ""
        suffix += "_ilp64" if self.spec.satisfies("+ilp64") else ""
        suffix += "_mp" if self.spec.satisfies("threads=openmp") else ""
        return suffix

    @property
    def blas_libs(self):
@@ -480,10 +403,7 @@ def libs(self):
    def headers(self):
        armpl_dir = get_armpl_prefix(self.spec)

        if self.spec.satisfies("@24:"):
            suffix = "include"
        else:
            suffix = "include" + self.lib_suffix
        suffix = "include" + self.lib_suffix

        incdir = join_path(armpl_dir, suffix)

@@ -501,9 +421,7 @@ def setup_run_environment(self, env):
    @run_after("install")
    def check_install(self):
        armpl_dir = get_armpl_prefix(self.spec)
        suffix = get_armpl_suffix(self.spec)
        armpl_example_dir = join_path(armpl_dir, f"examples{suffix}")

        armpl_example_dir = join_path(armpl_dir, "examples")
        # run example makefile
        if self.spec.platform == "darwin":
            # Fortran examples on MacOS require flang-new which is
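Read together, the two suffix branches above reduce to one small rule; a hedged standalone rephrasing of get_armpl_suffix, with plain booleans standing in for the spec checks:

    # Hedged rephrasing of get_armpl_suffix; booleans stand in for spec.satisfies().
    def armpl_suffix(at_least_24: bool, ilp64: bool, openmp: bool) -> str:
        suffix = "_ilp64" if ilp64 else ("_lp64" if at_least_24 else "")
        suffix += "_mp" if openmp else ""
        return suffix

    assert armpl_suffix(True, False, True) == "_lp64_mp"   # 24.x always encodes the integer width
    assert armpl_suffix(False, False, True) == "_mp"       # pre-24 leaves lp64 unmarked
    assert armpl_suffix(True, True, False) == "_ilp64"
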
@@ -17,8 +17,6 @@ class Arrow(CMakePackage, CudaPackage):

    license("Apache-2.0")

    version("16.1.0", sha256="9762d9ecc13d09de2a03f9c625a74db0d645cb012de1e9a10dfed0b4ddc09524")
    version("15.0.2", sha256="4735b349845bff1fe95ed11abbfed204eb092cabc37523aa13a80cb830fe5b5e")
    version("14.0.2", sha256="07cdb4da6795487c800526b2865c150ab7d80b8512a31793e6a7147c8ccd270f")
    version("14.0.1", sha256="a48e54a09d58168bc04d86b13e7dab04f0aaba18a6f7e4dadf3e9c7bb835c8f1")
    version("14.0.0", sha256="39e3388bbaba23faa7a5e8a82ebba7fe4c38ace2c394d6a3f26559715b30f401")
@@ -44,7 +42,6 @@ class Arrow(CMakePackage, CudaPackage):
    depends_on("boost@1.60: +filesystem +system")
    depends_on("cmake@3.2.0:", type="build")
    depends_on("flatbuffers")
    conflicts("%gcc@14", when="@:15.0.1")  # https://github.com/apache/arrow/issues/40009
    depends_on("llvm@:11 +clang", when="+gandiva @:3", type="build")
    depends_on("llvm@:12 +clang", when="+gandiva @:4", type="build")
    depends_on("llvm@:13 +clang", when="+gandiva @:7", type="build")

@@ -19,9 +19,6 @@ class Asio(AutotoolsPackage):
    license("BSL-1.0")

    # As odd minor versions of asio are not considered stable, they won't be added anymore
    version("1.30.2", sha256="755bd7f85a4b269c67ae0ea254907c078d408cce8e1a352ad2ed664d233780e8")
    version("1.30.1", sha256="94b121cc2016680f2314ef58eadf169c2d34fff97fba01df325a192d502d3a58")
    version("1.30.0", sha256="df6674bd790842b3a7422e9cc4c5d3212ac268cebdb5d38f3e783e4918313c7b")
    version("1.28.2", sha256="5705a0e403017eba276625107160498518838064a6dd7fd8b00b2e30c0ffbdee")
    version("1.28.1", sha256="5ff6111ec8cbe73a168d997c547f562713aa7bd004c5c02326f0e9d579a5f2ce")
    version("1.28.0", sha256="226438b0798099ad2a202563a83571ce06dd13b570d8fded4840dbc1f97fa328")

@@ -100,6 +100,9 @@ def cmake_args(self):
            args.append(self.define_from_variant("ENABLE_IBM_BBAPI", "bbapi"))
            args.append(self.define_from_variant("ENABLE_CRAY_DW", "dw"))
            args.append(self.define_from_variant("BUILD_SHARED_LIBS", "shared"))
        else:
            if spec.satisfies("platform=cray"):
                args.append(self.define("AXL_LINK_STATIC", True))

        if spec.satisfies("@0.6.0:"):
            args.append(self.define_from_variant("ENABLE_PTHREADS", "pthreads"))

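For readers unfamiliar with the helpers used above: define and define_from_variant turn Python values and spec variants into -D flags on the CMake command line. A simplified sketch of the boolean case only (the real Spack helpers also handle strings, lists, and multi-valued variants):

    # Hedged, simplified model of the boolean case of CMakePackage.define.
    def define(cmake_var: str, value: bool) -> str:
        return f"-D{cmake_var}:BOOL={'ON' if value else 'OFF'}"

    print(define("AXL_LINK_STATIC", True))     # -> -DAXL_LINK_STATIC:BOOL=ON
    print(define("BUILD_SHARED_LIBS", False))  # -> -DBUILD_SHARED_LIBS:BOOL=OFF
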
@@ -18,7 +18,6 @@ class Benchmark(CMakePackage):
    # first properly installed CMake config packages in
    # 1.2.0 release: https://github.com/google/benchmark/issues/363
    version("main", branch="main")
    version("1.8.4", sha256="3e7059b6b11fb1bbe28e33e02519398ca94c1818874ebed18e504dc6f709be45")
    version("1.8.3", sha256="6bc180a57d23d4d9515519f92b0c83d61b05b5bab188961f36ac7b06b0d9e9ce")
    version("1.8.2", sha256="2aab2980d0376137f969d92848fbb68216abb07633034534fc8c65cc4e7a0e93")
    version("1.8.1", sha256="e9ff65cecfed4f60c893a1e8a1ba94221fad3b27075f2f80f47eb424b0f8c9bd")

@@ -238,7 +238,7 @@ def test_binaries(self):
class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
    def configure_args(self):
        known_targets = {"x86_64": "x86_64", "aarch64": "aarch64", "ppc64le": "powerpc"}
        known_platforms = {"linux": "linux-gnu", "darwin": "apple-darwin"}
        known_platforms = {"linux": "linux-gnu", "cray": "linux-gnu", "darwin": "apple-darwin"}

        family = str(self.spec.target.family)
        platform = self.spec.platform

33
var/spack/repos/builtin/packages/boost/bootstrap-path.patch
Normal file
@@ -0,0 +1,33 @@
Remove the spack wrapper directories from PATH for the bootstrap step.
This was breaking the build for Cray (and other cross-compile) because
bjam was built for the BE and died on SIGILL on the FE. See issue
#9613.

This only affects building bjam. The boost libraries are still built
the normal spack way with the spack wrappers.


diff -Naurb boost_1_66_0.orig/bootstrap.sh boost_1_66_0/bootstrap.sh
--- boost_1_66_0.orig/bootstrap.sh	2017-12-13 17:56:35.000000000 -0600
+++ boost_1_66_0/bootstrap.sh	2019-01-09 13:51:56.407553214 -0600
@@ -7,6 +7,20 @@

 # boostinspect:notab - Tabs are required for the Makefile.

+NEWPATH=
+OLDIFS="$IFS"
+IFS=:
+
+for dir in $PATH ; do
+    case "x$dir" in
+        *lib*spack*env* ) ;;
+        * ) NEWPATH="${NEWPATH}:${dir}" ;;
+    esac
+done
+
+IFS="$OLDIFS"
+PATH="$NEWPATH"
+
 BJAM=""
 TOOLSET=""
 BJAM_CONFIG=""
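The patch expresses the filter in portable shell; the same idea in Python, for comparison only (the lib*spack*env pattern is taken from the patch, everything else is illustrative):

    # Hedged Python equivalent of the patch's PATH filter, for illustration only.
    import fnmatch
    import os

    path = os.environ.get("PATH", "")
    kept = [d for d in path.split(":") if not fnmatch.fnmatch(d, "*lib*spack*env*")]
    os.environ["PATH"] = ":".join(kept)
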
@@ -342,6 +342,9 @@ def libs(self):
    # Patch: https://github.com/boostorg/process/commit/6a4d2ff72114ef47c7afaf92e1042aca3dfa41b0.patch
    patch("1.72_boost_process.patch", level=2, when="@1.72.0")

    # Fix the bootstrap/bjam build for Cray
    patch("bootstrap-path.patch", when="@1.39.0: platform=cray")

    # Patch fix for warnings from commits 2d37749, af1dc84, c705bab, and
    # 0134441 on https://github.com/boostorg/system.
    patch("system-non-virtual-dtor-include.patch", when="@1.69.0", level=2)
@@ -525,6 +528,10 @@ def determine_bootstrap_options(self, spec, with_libs, options):
            # wrappers. Since Boost doesn't use the MPI C++ bindings,
            # that can be used as a compiler option instead.
            mpi_line = "using mpi : %s" % spec["mpi"].mpicxx

            if "platform=cray" in spec:
                mpi_line += " : <define>MPICH_SKIP_MPICXX"

            f.write(mpi_line + " ;\n")

        if "+python" in spec:

@@ -17,7 +17,6 @@ class Brpc(CMakePackage):

    license("BSL-1.0")

    version("1.9.0", sha256="85856da0216773e1296834116f69f9e80007b7ff421db3be5c9d1890ecfaea74")
    version("0.9.7", sha256="722cd342baf3b05189ca78ecf6c56ea6ffec22e62fc2938335e4e5bab545a49c")
    version("0.9.6", sha256="b872ca844999e0ba768acd823b409761f126590fb34cb0183da915a595161446")
    version("0.9.5", sha256="11ca8942242a4c542c11345b7463a4aea33a11ca33e91d9a2f64f126df8c70e9")

@@ -47,6 +47,7 @@ class Bzip2(Package, SourcewarePackage):
    depends_on("diffutils", type="build")

    depends_on("gmake", type="build", when="platform=linux")
    depends_on("gmake", type="build", when="platform=cray")
    depends_on("gmake", type="build", when="platform=darwin")

    @classmethod

@@ -0,0 +1,43 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack.package import *


class CandleBenchmarks(Package):
    """ECP-CANDLE Benchmarks"""

    homepage = "https://github.com/ECP-CANDLE/Benchmarks"
    url = "https://github.com/ECP-CANDLE/Benchmarks/archive/v0.1.tar.gz"

    tags = ["proxy-app", "ecp-proxy-app"]

    license("MIT")

    version("0.5.1", sha256="3d8c4f5a8304ee238e93e88e871a8b4d47d6b377159c048ac6d3ed01b6ffc245")
    version("0.1", sha256="767f74f43ee3a5d4e0f26750f2a96b8433e25a9cd4f2d29938ac8acf263ab58d")
    version("0.0", sha256="faa0d24355071de0e375d72ed1a39dcf30006602210cf8cf09db568b5d0b679f")

    variant("mpi", default=True, description="Build with MPI support")

    extends("python")
    depends_on("python@2.7:")
    depends_on("py-theano +cuda", type=("build", "run"))
    depends_on("py-keras", type=("build", "run"))
    depends_on("py-matplotlib +image@:2.2.3", type=("build", "run"))
    depends_on("py-tqdm", type=("build", "run"))
    depends_on("py-scikit-learn", type=("build", "run"))
    depends_on("opencv@3.2.0: +highgui +imgproc +jpeg +png +tiff ~dnn ~eigen ~gtk")
    depends_on("py-mdanalysis", type=("build", "run"))
    depends_on("py-mpi4py", when="+mpi", type=("build", "run"))
    depends_on("py-h5py~mpi", when="~mpi", type=("build", "run"))
    depends_on("py-h5py+mpi", when="+mpi", type=("build", "run"))
    depends_on("py-requests", type=("build", "run"))

    # see #3244, but use external for now
    # depends_on('tensorflow')

    def install(self, spec, prefix):
        install_tree(self.stage.source_path, prefix.bin)
@@ -31,6 +31,6 @@ def url_for_version(self, version):

    def install(self, spec, prefix):
        # The Makefile isn't portable; use our own instead
        makeargs = ["-f", "Makefile.spack", f"PREFIX={prefix}"]
        makeargs = ["-f", "Makefile.spack", "PREFIX=%s" % prefix]
        make(*makeargs)
        make("install", *makeargs)

@@ -12,7 +12,6 @@ class Centrifuge(MakefilePackage):
    homepage = "https://ccb.jhu.edu/software/centrifuge/index.shtml"
    url = "https://github.com/DaehwanKimLab/centrifuge/archive/refs/tags/v1.0.4.tar.gz"

    version("1.0.4.1", sha256="638cc6701688bfdf81173d65fa95332139e11b215b2d25c030f8ae873c34e5cc")
    version("1.0.4", sha256="929daed0f84739f7636cc1ea2757527e83373f107107ffeb5937a403ba5201bc")

    def build(self, spec, prefix):

@@ -15,6 +15,5 @@ class Cjson(CMakePackage):

    license("MIT")

    version("1.7.18", sha256="cc6d93cc3b659037c34193ecc7be5a874a18c2ac67b24efe82db6a759b486b5d")
    version("1.7.17", sha256="51f3b07aece8d1786e74b951fd92556506586cb36670741b6bfb79bf5d484216")
    version("1.7.15", sha256="c55519316d940757ef93a779f1db1ca809dbf979c551861f339d35aaea1c907c")

@@ -58,6 +58,12 @@ class ClingoBootstrap(Clingo):
        when="platform=linux",
        msg="GCC or clang are required to bootstrap clingo on Linux",
    )
    requires(
        "%gcc",
        "%clang",
        when="platform=cray",
        msg="GCC or clang are required to bootstrap clingo on Cray",
    )
    conflicts("%gcc@:5", msg="C++14 support is required to bootstrap clingo")

    # On Darwin we bootstrap with Apple Clang

@@ -53,6 +53,7 @@ class Clingo(CMakePackage):
    depends_on("bison@2.5:", type="build", when="platform=linux")
    depends_on("bison@2.5:", type="build", when="platform=darwin")
    depends_on("bison@2.5:", type="build", when="platform=freebsd")
    depends_on("bison@2.5:", type="build", when="platform=cray")

    with when("platform=windows"):
        depends_on("re2c@0.13:", type="build")
@@ -66,6 +67,7 @@ class Clingo(CMakePackage):
    depends_on("py-cffi", type=("build", "run"), when="@5.5.0: platform=linux")
    depends_on("py-cffi", type=("build", "run"), when="@5.5.0: platform=darwin")
    depends_on("py-cffi", type=("build", "run"), when="@5.5.0: platform=freebsd")
    depends_on("py-cffi", type=("build", "run"), when="@5.5.0: platform=cray")

    patch("python38.patch", when="@5.3:5.4.0")
    patch("size-t.patch", when="%msvc")

@@ -30,7 +30,6 @@ class Cmake(Package):

    version("master", branch="master")
    version("3.29.2", sha256="36db4b6926aab741ba6e4b2ea2d99c9193222132308b4dc824d4123cb730352e")
    version("3.28.4", sha256="eb9c787e078848dc493f4f83f8a4bbec857cd1f38ab6425ce8d2776a9f6aa6fb")
    version("3.27.9", sha256="609a9b98572a6a5ea477f912cffb973109ed4d0a6a6b3f9e2353d2cdc048708e")
    version("3.26.6", sha256="070b9a2422e666d2c1437e2dab239a236e8a63622d0a8d0ffe9e389613d2b76a")
    version("3.25.3", sha256="cc995701d590ca6debc4245e9989939099ca52827dd46b5d3592f093afe1901c")
@@ -67,18 +66,6 @@ class Cmake(Package):
    version(
        "3.29.0", sha256="a0669630aae7baa4a8228048bf30b622f9e9fd8ee8cedb941754e9e38686c778"
    )
    version(
        "3.28.3", sha256="72b7570e5c8593de6ac4ab433b73eab18c5fb328880460c86ce32608141ad5c1"
    )
    version(
        "3.28.2", sha256="1466f872dc1c226f373cf8fba4230ed216a8f108bd54b477b5ccdfd9ea2d124a"
    )
    version(
        "3.28.1", sha256="15e94f83e647f7d620a140a7a5da76349fc47a1bfed66d0f5cdee8e7344079ad"
    )
    version(
        "3.28.0", sha256="e1dcf9c817ae306e73a45c2ba6d280c65cf4ec00dd958eb144adaf117fb58e71"
    )
    version(
        "3.27.8", sha256="fece24563f697870fbb982ea8bf17482c9d5f855d8c9bf0b82463d76c9e8d0cc"
    )
@@ -422,6 +409,7 @@ class Cmake(Package):
    depends_on("ninja", when="platform=windows")
    depends_on("gmake", when="platform=linux")
    depends_on("gmake", when="platform=darwin")
    depends_on("gmake", when="platform=cray")
    depends_on("gmake", when="platform=freebsd")

    # We default ownlibs to true because it greatly speeds up the CMake
@@ -468,7 +456,7 @@ class Cmake(Package):
    with when("~ownlibs"):
        depends_on("expat")
        # expat/zlib are used in CMake/CTest, so why not require them in libarchive.
        for plat in ["darwin", "linux"]:
        for plat in ["darwin", "cray", "linux"]:
            with when("platform=%s" % plat):
                depends_on("libarchive@3.1.0: xar=expat compression=zlib")
                depends_on("libarchive@3.3.3:", when="@3.15.0:")

@@ -15,7 +15,6 @@ class Cmark(CMakePackage):

    license("BSD-2-Clause")

    version("0.31.0", sha256="bbcb8f8c03b5af33fcfcf11a74e9499f20a9043200b8552f78a6e8ba76e04d11")
    version("0.29.0", sha256="2558ace3cbeff85610de3bda32858f722b359acdadf0c4691851865bb84924a6")
    version("0.28.3", sha256="acc98685d3c1b515ff787ac7c994188dadaf28a2d700c10c1221da4199bae1fc")
    version("0.28.2", sha256="fe4b04fcccb2dc72641096de02a8eefb53059e85f9dd904f0386dc86326cc414")

@@ -30,7 +30,6 @@ def url_for_version(self, version):
    license("NCSA")

    version("master", branch="amd-stg-open")
    version("6.1.1", sha256="f1a67efb49f76a9b262e9735d3f75ad21e3bd6a05338c9b15c01e6c625c4460d")
    version("6.1.0", sha256="6bd9912441de6caf6b26d1323e1c899ecd14ff2431874a2f5883d3bc5212db34")
    version("6.0.2", sha256="737b110d9402509db200ee413fb139a78369cf517453395b96bda52d0aa362b9")
    version("6.0.0", sha256="04353d27a512642a5e5339532a39d0aabe44e0964985de37b150a2550385800a")
@@ -84,7 +83,6 @@ def url_for_version(self, version):
        "6.0.0",
        "6.0.2",
        "6.1.0",
        "6.1.1",
        "master",
    ]:
        # llvm libs are linked statically, so this *could* be a build dep
@@ -94,18 +92,7 @@ def url_for_version(self, version):
        # that a conditional dependency
        depends_on(f"rocm-device-libs@{ver}", when=f"@{ver} ^llvm-amdgpu ~rocm-device-libs")

    for ver in [
        "5.5.0",
        "5.5.1",
        "5.6.0",
        "5.6.1",
        "5.7.0",
        "5.7.1",
        "6.0.0",
        "6.0.2",
        "6.1.0",
        "6.1.1",
    ]:
    for ver in ["5.5.0", "5.5.1", "5.6.0", "5.6.1", "5.7.0", "5.7.1", "6.0.0", "6.0.2", "6.1.0"]:
        depends_on(f"rocm-core@{ver}", when=f"@{ver}")

    @property

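These version loops are ordinary Python executed while the class body is being defined, so each iteration registers one directive. A hedged sketch of what the rocm-core loop expands to (printed instead of registered, purely for illustration):

    # Each iteration pins rocm-core to the matching package version.
    for ver in ["5.5.0", "5.5.1", "5.6.0", "5.6.1", "5.7.0", "5.7.1", "6.0.0", "6.0.2", "6.1.0"]:
        print(f'depends_on("rocm-core@{ver}", when="@{ver}")')
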
@@ -23,7 +23,6 @@ class CommonsLang3(Package):

    license("Apache-2.0")

    version("3.14.0", sha256="317c3e3fcd5fcca3781a7996ff1e0c50c13244ee961e94e5f6f6d84b84733b16")
    version("3.12.0", sha256="33012465dfcb7f790aca333e09ebf105e2a5fb95c2c638b3df790d3efa908e28")
    version("3.7", sha256="94dc8289ce90b77b507d9257784d9a43b402786de40c164f6e3990e221a2a4d2")

@@ -31,4 +30,4 @@ class CommonsLang3(Package):
    depends_on("java@7:", type="run")

    def install(self, spec, prefix):
        install(f"commons-lang3-{self.version}.jar", prefix)
        install("commons-lang3-{0}.jar".format(self.version), prefix)

@@ -13,13 +13,12 @@ class ComposableKernel(CMakePackage):

    homepage = "https://github.com/ROCm/composable_kernel"
    git = "https://github.com/ROCm/composable_kernel.git"
    url = "https://github.com/ROCm/composable_kernel/archive/refs/tags/rocm-6.1.1.tar.gz"
    url = "https://github.com/ROCm/composable_kernel/archive/refs/tags/rocm-6.1.0.tar.gz"
    maintainers("srekolam", "afzpatel")

    license("MIT")

    version("master", branch="develop")
    version("6.1.1", sha256="f55643c6eee0878e8f2d14a382c33c8b84af0bdf8f31b37b6092b377f7a9c6b5")
    version("6.1.0", sha256="355a4514b96b56aa9edf78198a3e22067e7397857cfe29d9a64d9c5557b9f83d")
    version("6.0.2", sha256="f648a99388045948b7d5fbf8eb8da6a1803c79008b54d406830b7f9119e1dcf6")
    version("6.0.0", sha256="a8f736f2f2a8afa4cddd06301205be27774d85f545429049b4a2bbbe6fcd67df")
@@ -52,7 +51,6 @@ class ComposableKernel(CMakePackage):

    for ver in [
        "master",
        "6.1.1",
        "6.1.0",
        "6.0.2",
        "6.0.0",
Some files were not shown because too many files have changed in this diff.