* `f.tell` on a `TextIOWrapper` does not return a byte offset, but an
opaque integer that can only be used for `f.seek` on the same
object. Spack assumed it was a byte offset (see the sketch below).
* Do not open files in a locale-dependent way; assume UTF-8 (and allow
users to override that)
* Use tempfile to generate a backup/temporary file in a safe way
* Comparison between None and str is valid and intentional.
Follow-up to #47956
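A minimal sketch of the intended pattern (function and variable names are illustrative, not the actual Spack helpers): treat the value returned by `tell()` as an opaque cookie, open text files as UTF-8 regardless of locale, and write the replacement through `tempfile` before moving it into place.

```python
import os
import tempfile


def rewrite_text_file(path, transform, encoding="utf-8"):
    """Rewrite ``path`` line by line (illustrative sketch, not Spack's API)."""
    # Open explicitly as UTF-8 instead of relying on the locale's encoding.
    with open(path, encoding=encoding, errors="surrogateescape") as f:
        cookie = f.tell()   # opaque cookie from TextIOWrapper, NOT a byte offset
        f.readline()        # peek at the first line...
        f.seek(cookie)      # ...then rewind: the only valid use of the cookie
        lines = [transform(line) for line in f]

    # Write to a temporary file in the same directory, then atomically
    # replace the original, so a crash never leaves a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w", encoding=encoding, errors="surrogateescape") as out:
            out.writelines(lines)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```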
* Rename `token.py` -> `tokenize.py`
* Rename `parser.py` -> `spec_parser.py`
* Move common code related to iterating over tokens into `tokenize.py`
* Add "unexpected character token" (i.e. `.`) to `SpecTokens` by default instead of having a separate tokenizer / regex.
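A rough sketch of the idea (the enum members below are illustrative, not the real `SpecTokens`): the catch-all "unexpected character" pattern becomes just another alternative in the single combined regex, so one pass over the input both tokenizes and surfaces bad characters.

```python
import enum
import re


class Token(enum.Enum):
    # Illustrative subset; the real SpecTokens enum is larger.
    VERSION = r"@[\w.\-:,]+"
    COMPILER = r"%[\w.\-]+"
    WORD = r"[a-zA-Z_][\w.\-]*"
    WS = r"\s+"
    # Catch-all: any single character no other pattern matched.
    UNEXPECTED = r"."


# One combined regex with a named group per token kind.
TOKEN_RE = re.compile("|".join(f"(?P<{t.name}>{t.value})" for t in Token))


def tokenize(text):
    """Yield (token, lexeme) pairs; 'unexpected' characters are ordinary tokens."""
    for match in TOKEN_RE.finditer(text):
        kind = Token[match.lastgroup]
        if kind is Token.WS:
            continue
        yield kind, match.group()


# e.g. list(tokenize("foo@1.2 %gcc .")) ends with (Token.UNEXPECTED, ".")
```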
* Python: deprecate 3.8
* Remove preference for EOL Python versions
* Explicitly deprecate things requiring EOL Python
* More deprecations
* deprecate old versions of slepc, py-petsc4py, py-slepc4py in sync with old versions of petsc
---------
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
The use of `^` in `depends_on` directives has never been allowed, since
the dawn of Spack.
Up to now we relied on an audit to catch this kind of issue, mainly
because that allowed us to collect all occurrences and report them to
packagers at once.
Due to implementation details, this audit doesn't work if a dependency
without a `^` is followed by the same dependency with a `^`.
This PR makes this pattern an error, which will be reported eagerly, and
removes the corresponding audit. It also fixes a package using the wrong
idiom.
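To illustrate the rejected pattern (package names are made up): a `^` inside a `depends_on` spec tries to constrain a dependency of the dependency, which Spack has never supported. A hedged sketch of one way to express the intent, assuming the constraint really matters to the declaring package:

```python
from spack.package import *


class Foo(Package):
    """Hypothetical package illustrating the rejected idiom."""

    # Wrong: a `^` inside a depends_on spec has never been allowed and is
    # now an eagerly reported error instead of an audit finding.
    # depends_on("bar ^baz+shared")

    # Instead, declare the constraint on the dependency directly:
    depends_on("bar")
    depends_on("baz+shared")
```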
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
So far, the ESMF package recipe in Spack has assumed that the clang and
apple-clang compilers use gfortran as the Fortran compiler. With the
latest improvements to the LLVM compilers, we also need to support
clang paired with flang.
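A hedged sketch of the kind of check involved (not the actual ESMF recipe code): instead of hard-coding gfortran whenever the C compiler is clang or apple-clang, inspect the Fortran compiler that Spack actually selected.

```python
import os


def uses_flang(compiler):
    """Return True if the selected Fortran compiler looks like LLVM flang.

    Illustrative sketch using Spack's classic compiler model, where
    ``compiler.fc`` holds the path to the Fortran compiler executable.
    """
    fc = os.path.basename(compiler.fc or "")
    return "flang" in fc


# In the ESMF recipe, the build settings chosen for clang/apple-clang would
# then depend on uses_flang() instead of assuming gfortran.
```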
Reorganize the pipeline generation aspect of the ci module,
mostly to separate the representation, generation, and
pruning of pipeline graphs from platform-specific output
formatting.
Introduce a pipeline generation registry to support generating
pipelines for other platforms, though gitlab is still the only
supported format currently.
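A bare-bones sketch of what such a registry can look like (function and variable names are made up, not the new ci module API):

```python
# Map an output format name to the function that renders a pipeline graph.
_generators = {}


def generator(name):
    """Decorator registering a pipeline generator for an output format."""
    def register(fn):
        _generators[name] = fn
        return fn
    return register


@generator("gitlab")
def emit_gitlab(pipeline, options):
    """Render the pruned pipeline graph as a .gitlab-ci.yml document."""
    ...


def write_pipeline(fmt, pipeline, options):
    try:
        emit = _generators[fmt]
    except KeyError:
        raise ValueError(f"no pipeline generator registered for {fmt!r}")
    return emit(pipeline, options)
```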
Fix a long-standing bug in pipeline pruning where only direct
dependencies were added to any node's dependency list.
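The fix amounts to collecting the transitive closure rather than only the first level of edges; a generic sketch (not the actual ci code):

```python
def all_dependencies(node, direct_deps):
    """Return every reachable dependency of ``node``, not just direct ones.

    ``direct_deps`` maps a node to its direct dependencies; the old pruning
    code effectively stopped at that first level.
    """
    seen = set()
    stack = list(direct_deps.get(node, ()))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(direct_deps.get(dep, ()))
    return seen
```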
* Set the "build_jobs" on concretization/generate for CI
build_jobs also controls the concretization pool size. Set this
in the config section for CI generate. The generated config is then
overridden in the CI build jobs via the SPACK_BUILD_JOBS environment
variable. This implicitly drops the default build CPU request on all
"default"-grouped build jobs from (at most) 16 to 8 (a small sketch
follows this list).
* Add default allocations for build jobs
* Add common jobs and concretize args to ci generate and rebuild
* CI: Specify parallel concretize and build jobs via argument
* Increase power and cray concretization limits
Lowering the limits for these stacks causes timeouts.
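A small sketch of the resolution order described above (names are illustrative, not the real CI scripts):

```python
import multiprocessing
import os


def concretization_pool_size(config_build_jobs, default=8):
    """Sketch: the same job count caps both build parallelism and the
    concretization process pool; CI can override it per job through the
    SPACK_BUILD_JOBS environment variable."""
    jobs = int(os.environ.get("SPACK_BUILD_JOBS", config_build_jobs or default))
    return min(jobs, multiprocessing.cpu_count())
```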
* py-nbclassic: add v1.1
* py-nbclassic: reduce explicit dependencies for v1.1.0
Having all the 'excess' packages listed did not break anything, since
they were needed by `py-jupyter-server` (pulled in via `py-notebook-shim`)
anyway, but the change makes it clearer why things are being pulled in.
* geant4: add v11.3.0
* geant4: rm deprecated 11.3.0.beta
* geant4: add 11.3.0 and associated data library versions
- Data library versions taken from
  https://gitlab.cern.ch/geant4/geant4/-/blob/v11.3.0/cmake/Modules/G4DatasetDefinitions.cmake?ref_type=tags
- Variants etc. otherwise unchanged.
- 11.3.0-beta version removed; release version marked as preferred.
* g4channeling: f-strings
---------
Co-authored-by: Ben Morgan <ben.morgan@warwick.ac.uk>
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>