Compare commits

...

447 Commits

Author SHA1 Message Date
Gregory Becker
0e39c8b1b7 bugfix for relative dev path 2021-12-07 15:06:32 -08:00
Harmen Stoppels
3d1b9e4dbc "spack buildcache install": don't catch exception (#27674)
Remove a try/except block for an error with no handling. If the affected
code doesn't execute successfully, then the associated variable
is undefined and another, more obscure, error occurs shortly after.
2021-12-07 13:17:17 -08:00
Massimiliano Culpo
f81d84dfc6 Release procedure: add a step to update docs (#27734) 2021-12-07 11:30:14 +00:00
Asher Mancinelli
7254d0fe94 HiOp: add versions, variants for rocm (#27824) 2021-12-07 11:38:10 +01:00
Glenn Johnson
3160ab66db 3dtk: set boost constraint (#27806) 2021-12-07 11:28:41 +01:00
Axel Huebl
941e6505d0 WarpX: 21.12 (#27800)
Update `warpx` & `py-warpx` to the latest release, `21.12`.
2021-12-06 17:16:46 -08:00
iarspider
687bc7d40e py-rich: add version 10.9.0 (#27826) 2021-12-06 15:32:01 -07:00
Manuela Kuhn
949923c03f r-blob: add 1.2.2 (#27797) 2021-12-06 22:00:59 +00:00
Manuela Kuhn
1d8575cd83 r-colorspace: add 2.0-2 (#27815) 2021-12-06 15:43:26 -06:00
Manuela Kuhn
3949cd567d r-jquerylib: add new package (#27716) 2021-12-06 15:42:20 -06:00
Manuela Kuhn
4ece86c64a r-brio: add 1.1.3 (#27801) 2021-12-06 15:25:43 -06:00
Manuela Kuhn
1a2d522eaa r-conquer: add 1.2.1 (#27802) 2021-12-06 15:24:56 -06:00
Manuela Kuhn
c18e91d771 r-crayon: 1.4.2 (#27803) 2021-12-06 15:23:32 -06:00
Manuela Kuhn
451a4f2cfa r-brms: add 2.16.3 (#27809) 2021-12-06 15:22:34 -06:00
Manuela Kuhn
88175417bb r-cli: add 3.1.0 (#27810) 2021-12-06 15:20:26 -06:00
Manuela Kuhn
4de7c6a901 r-car: add 3.0-12 (#27811) 2021-12-06 15:14:21 -06:00
Manuela Kuhn
0d5eb2657a r-colourpicker: add 1.1.1 (#27816) 2021-12-06 14:09:58 -06:00
Manuela Kuhn
9b3ac00c2d r-backports: add 1.4.0 (#27710) 2021-12-06 14:08:31 -06:00
Manuela Kuhn
b4f47cbdec r-bh: add 1.75.0-0 (#27711) 2021-12-06 14:07:01 -06:00
Manuela Kuhn
ccda43673a r-rcpparmadillo: add 0.10.7.3.0 (#27712) 2021-12-06 14:04:55 -06:00
Jen Herting
3b8a34240e [py-pyspellchecker] New package (#27788)
Co-authored-by: Sid Pendelberry <sid@rit.edu>
2021-12-06 10:57:50 -06:00
Jen Herting
d0fffa9212 apktool: new package (#27782) 2021-12-06 17:20:31 +01:00
Hector Barrios
0106e6ec9c mumps: fix problem when using Intel OneAPI mkl static lib (#27725) 2021-12-06 17:14:33 +01:00
Glenn Johnson
37a4c0ff59 r-prettydoc: fix dependency (#27791)
The markdown dependency is `r-rmarkdown` rather than `r-markdown`.
2021-12-06 17:13:44 +01:00
Glenn Johnson
30007f7897 r-affy: set r version constraint (#27792) 2021-12-06 17:00:03 +01:00
Glenn Johnson
caba5c4692 Remove the autotools resources (#27793)
This essentially reverts #18917 as it seems to no longer be necessary
due to recent autotools changes in core spack.
2021-12-06 16:59:25 +01:00
Wouter Deconinck
bc3a3d9249 opencascade: add v7.6.0 (#27799) 2021-12-06 16:44:19 +01:00
Glenn Johnson
599b8c3533 suite-sparse: Add CXSparse and SPQR to targets (#27804) 2021-12-06 16:40:07 +01:00
Michael Kuhn
e6432d2380 glib: add 2.70.2 (#27812) 2021-12-06 16:34:26 +01:00
Glenn Johnson
39ebf9c5a7 Set preferred version of vtk to 9.0.3 (#27817)
There is a library symbol issue with libioss and Python in version
9.1.0, so set the preferred version to 9.0.3.
2021-12-06 16:28:35 +01:00
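
For context, a minimal sketch of how a preferred version is typically pinned in a Spack recipe; the class body and checksums here are placeholders, not the actual vtk change:

```python
from spack import *


class Vtk(CMakePackage):
    """Illustrative excerpt only; checksums are placeholders."""

    version('9.1.0', sha256='<checksum>')
    # preferred=True steers the concretizer toward 9.0.3 unless the user
    # explicitly requests another version.
    version('9.0.3', sha256='<checksum>', preferred=True)
```
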
Wouter Deconinck
a332c26780 acts: use variants['cuda_arch'] only when +cuda (#27813)
Since #27185, the cuda_arch variant values are conditional on +cuda. This means that for -cuda specs, the installation fails with:
```
==> acts: Executing phase: 'cmake'
==> Error: KeyError: 'cuda_arch'

/home/wdconinc/git/spack/var/spack/repos/builtin/packages/acts/package.py:222, in cmake_args:
        219        log_failure_threshold = spec.variants['log_failure_threshold'].value
        220        args.append("-DACTS_LOG_FAILURE_THRESHOLD={0}".format(log_failure_threshold))
        221
  >>    222        cuda_arch = spec.variants['cuda_arch'].value
        223        if cuda_arch != 'none':
        224            args.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch[0]))
        225
```
2021-12-06 16:25:27 +01:00
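
The fix follows the usual pattern for conditional variants: only read `cuda_arch` when `+cuda` is in the spec. A minimal sketch of such a guarded `cmake_args`, based on the traceback above rather than the exact acts code:

```python
def cmake_args(self):
    spec = self.spec
    args = []
    # 'cuda_arch' only exists on specs with '+cuda' (see #27185), so guard
    # the lookup instead of reading the variant unconditionally.
    if '+cuda' in spec:
        cuda_arch = spec.variants['cuda_arch'].value
        if cuda_arch != 'none':
            args.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch[0]))
    return args
```
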
Glenn Johnson
a8a226b748 bohrium: missing dependencies (#27807) 2021-12-06 16:23:25 +01:00
Wouter Deconinck
9b1b38d2de librsvg,libcroco: add doc variant to avoid unsolvable constraints due to gtkplus (#27790)
* [libcroco] add variant doc, depends_on gtk-doc when +doc
* [librsvg] add variant doc, depends_on gtk-doc when +doc
2021-12-06 14:08:22 +01:00
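
Both packages use the same recipe pattern; a minimal sketch (the default and the build-type dependency are assumptions, not the exact recipe):

```python
# Only pull in gtk-doc when documentation is actually requested, so a
# plain install avoids the otherwise unsolvable constraints.
variant('doc', default=False, description='Build documentation with gtk-doc')
depends_on('gtk-doc', type='build', when='+doc')
```
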
Wouter Deconinck
2bb075c850 rivet: hepmc=3: Fix prefix of --with-hepmc3 (#27814) 2021-12-06 10:42:54 +01:00
Glenn Johnson
235edd0742 new package: py-tensorflow-datasets (#27746)
* new package: py-tensorflow-datasets

- includes new dependency package: py-tensorflow-metadata

* Update var/spack/repos/builtin/packages/py-tensorflow-datasets/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-tensorflow-metadata/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-12-05 12:06:43 -06:00
Glenn Johnson
252cd48b9c py-astropy: set version constraint on cfitsio (#27805) 2021-12-05 12:06:09 -06:00
Ben Bergen
c2def5c2f4 cgdb: add depends_on('gdb', type='run') (#27753)
* Added gdb Dependency

When using spack to install cgdb, a spack-built gdb is necessary to
avoid dynamic link errors.

- Added maintainer: tuxfan
- Set preferred to 'master' (best version for spack currently)

* Update: The gdb dependency added by this PR is for runtime

Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2021-12-05 08:02:23 -07:00
Manuela Kuhn
25a9888102 r-v8: add 3.6.0 (#27714)
* r-v8: add 3.6.0

* Add conflict and deprecate version 3.4.0
2021-12-05 00:56:47 -06:00
Manuela Kuhn
1eec99ce1d py-imagesize: add 1.3.0 (#27787) 2021-12-03 21:05:05 -07:00
Manuela Kuhn
c0dab7617f py-jeepney: add 0.7.1 (#27786) 2021-12-03 20:16:56 -07:00
Chung ShinYee
041973d1d5 julia: update versions and add znver3 to Julia's CPU target (#27770)
* julia: new versions.

* julia: Support AMD Milan CPU znver3.
2021-12-03 18:31:47 -07:00
kwryankrattiger
203ccdd976 LLVM: Revert build_llvm_dylib to llvm_dylib (#27761) 2021-12-03 23:47:05 +01:00
Tamara Dahlgren
6f9dc1c926 berkeley-db: Add minimal external detection (#27752) 2021-12-03 23:42:45 +01:00
Manuela Kuhn
324e046f06 r-tictoc: add 1.0.1 (#27715) 2021-12-03 16:14:29 -06:00
Asher Mancinelli
21d1e03e37 ExaGO: add new versions (#27768) 2021-12-03 13:59:16 -08:00
Mark W. Krentel
34c1d196a4 hpctoolkit: add support for smoke tests (#27783)
This adds support in spack for both build/install tests (spack install
--run-tests) and post-install smoke tests (spack test run).

Hpctoolkit itself only recently added tests, so for now, this only
applies to branch master.
2021-12-03 12:52:34 -08:00
Jen Herting
bd2b5cc7aa [libzmq] added version 4.3.4 (#27780) 2021-12-03 13:44:04 -07:00
Simo Tuomisto
cfb8d6f8ac New package: mujoco (#27656) 2021-12-03 11:40:09 -08:00
Weiqun Zhang
0a656de60b amrex: add new release 21.12 (#27781) 2021-12-03 11:00:14 -08:00
Hans Pabst
23bd0f3178 LIBXSMM 1.17 (#27775) 2021-12-03 10:48:40 -08:00
Hans Pabst
0e7220b2f2 Fixed typo (NAMD). (#27774) 2021-12-03 10:40:34 -08:00
Bernhard Kaindl
53eb24af9b singularity, singularityce: Add 3.8.5 and ce-3.9.1 (#27778) 2021-12-03 11:23:08 -07:00
Richarda Butler
86b17193de Parallel-netcdf: Update Spack test dir (#27232)
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2021-12-03 10:03:14 -08:00
Filippo Spiga
96a50db344 CUDA: add v11.5.1 (#27689) 2021-12-03 06:13:51 -07:00
Ethan Stam
4c922ec572 ParaView: add dependencies needed in paraview@master (#27745)
* The proj and nlohmann-json packages are needed in paraview@master
* ParaView: bump master subdir version to 5.10
2021-12-02 17:18:23 -08:00
Terry Cojean
b80b085575 Ginkgo package: add oneAPI support (#26868)
To use this, you can "spack install intel-oneapi-compilers" and then
"spack compiler add" the new compiler. You then need to install with
"spack install ginkgo+oneapi%dpcpp".
2021-12-02 17:13:25 -08:00
Michael Kuhn
fbd67feead nss: add 3.73 (#27767) 2021-12-03 00:55:57 +01:00
Peter Scheibel
edb971a10e Support packages which need to explicitly refer to dpcpp by name (#27168)
* Hack to support packages which need to explicitly refer to dpcpp by name
* cc script needs to know about dpcpp
2021-12-02 15:49:20 -08:00
Michael Kuhn
693a15ea00 nspr: add 4.32 (#27766) 2021-12-03 00:36:50 +01:00
Bernhard Kaindl
4d1a6cd362 libseccomp package: add version 2.5.3 (#27756)
- Use .tar.gz archive
- Update 2.3.3 to use .tar.gz archive (and update checksum)
- The autoreconf dependency is no longer required
- The new version depends on gperf
2021-12-02 14:52:23 -08:00
Bernhard Kaindl
9e1e7fa1d7 lvm2 package: add version 2.03.14 (#27760) 2021-12-02 14:37:59 -08:00
Ben Darwin
07ab4623e6 New package: f3d (add version 1.1.1) (#27755) 2021-12-02 14:35:55 -08:00
Jen Herting
66c5199362 New package: py-fire (#27731)
* dependency for stylegan2

* fixed copyright

* [py-fire]

- fixed some formatting
- url -> pypi

* [py-fire] removed extra https://

* [py-fire]

- removed python version restriction
- enum34 -> py-enum34

* [py-fire] removed .999

Co-authored-by: Sid Pendelberry <sid@rit.edu>
2021-12-02 14:58:25 -06:00
Jen Herting
ca32b08825 New package: py-vector-quantize-pytorch (#27759)
* [py-vector-quantize-pytorch] New package

* [py-vector-quantize-pytorch] flake8

* [py-vector-quantize-pytorch] removed unnecessary dependency

Co-authored-by: Sid Pendelberry <sid@rit.edu>
2021-12-02 14:53:49 -06:00
Ye Luo
1cdb764422 QMCPACK: fix use of MPI wrappers (#27744) 2021-12-01 18:39:06 -08:00
Tiziano Müller
d0beab8012 elpa package: add version 2021.05.002_bugfix (#27748)
* elpa: filter _bugfix in version for include dir
* libxc: add version 5.1.7
2021-12-01 18:29:23 -08:00
Bernhard Kaindl
04519d261d New package: goshimmer (#27339) 2021-12-01 17:05:30 -08:00
Axel Huebl
e590ec6f3f Add: openPMD-viewer (#27516)
* Add: openPMD-viewer

Add the `py-openpmd-viewer` package.

* openPMD-viewer: Improve Variant Control

* openPMD-viewer: Update to Source from Pip/PyPA

https://spack.readthedocs.io/en/latest/build_systems/pythonpackage.html#pypi-vs-github

* Fix style: comment

* openPMD-viewer: fix requirements

add reviewer feedback
2021-12-01 16:19:54 -07:00
Tiziano Müller
08b00f8804 cp2k: fix build issues without cuda, and with elpa on openSUSE (#27738) 2021-12-01 17:00:01 -06:00
Vanessasaurus
326acea29d py-tern: new package (#27599)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-12-01 14:41:05 -07:00
Jen Herting
97126cac4b New package: py-yq (#27729)
* pypi build of yq

* [py-yq]

- removed duplicate py-yaml dependency
- fixed py-yaml version number

* [py-yq] fix py-xmltodict version range

* [py-yq] self assign maintainer

* [py-yq] setuptools is runtime dependency

Co-authored-by: Sid Pendelberry <sid@rit.edu>
2021-12-01 13:25:24 -08:00
Jen Herting
2ddc522864 New package: py-retry (#27730)
* package dependency

* [py-retry]

- added homepage
- removed unnecessary depends_on

* [py-retry] added dependencies back

* [py-retry] removed .999

Co-authored-by: Sid Pendelberry <sid@rit.edu>
2021-12-01 14:14:55 -07:00
Joseph Wang
d7b21fc40b madgraph5amc package: use Spack compiler wrapper (#27523)
This fix allows madgraph5amc to build with Clang.
2021-12-01 12:18:07 -08:00
kwryankrattiger
041f91b176 ECP-DAV package: Add sensei to SDK (#27537)
- Add sensei to the SDK with appropriate propagations
- Rework variants of the SENSEI package to avoid providing broken builds
- Turn off miniapps by default; these are examples and not critical to using sensei
2021-12-01 12:10:22 -08:00
Thomas Green
ab85e79a91 py-pybind11: add CMake dependency (#27573) 2021-12-01 11:57:27 -08:00
Manuela Kuhn
faa04a03c0 seacas: fix x11 dependency (#27737) 2021-12-01 10:11:14 -08:00
Mark W. Krentel
820d49b84e libmonitor: add version 2021.11.08 (#27740) 2021-12-01 10:09:49 -08:00
Marc Joos
81bdf1a386 wi4mpi: add new package (#27530) 2021-12-01 09:29:08 -07:00
Jen Herting
5a9bc4bfb2 New package: py-gin-config (#27732)
* [py-gin-config] created template

* [py-gin-config]

- depends on setup tools
- added homepage
- added description
- removed fixmes
2021-12-01 09:52:59 -06:00
Severin Strobl
67c8a63a0a adol-c: add variant stdczero (#27721) 2021-12-01 16:24:15 +01:00
Luis Yanes
4859649229 portcullis: add v1.2.3 (#27661)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2021-12-01 05:34:52 -07:00
Vicente Bolea
6e90a853f4 ParaView: Add 5.10.0-RC2 release (#27696) 2021-12-01 11:53:54 +01:00
Howard Pritchard
aacbc64a23 OPENMPI: add 4.0.7 release (#27697)
add the 4.0.7 release to the OMPI spack package

Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2021-12-01 11:53:16 +01:00
ravil-mobile
658f42ef0f dpcpp: added a new package (#25130)
dpcpp stands for Data Parallel C++; it is Intel's implementation of SYCL.

Co-authored-by: ravil <ravil.dorozhinskii@tum.de>
2021-12-01 11:43:38 +01:00
Ye Luo
787cc535e9 qmcpack: more restriction on openblas variants (#27723) 2021-12-01 11:30:58 +01:00
Vasileios Karakasis
452a693aab Add more ReFrame versions (#27728) 2021-11-30 15:56:03 -07:00
Ye Luo
23206fa4b5 quantum-espresso: add CMake and libxc variants (#27724) 2021-11-30 23:26:39 +01:00
Glenn Johnson
bcaa87574d vtk package: add version 9.1.0; add new dependencies (#27467)
This PR updates the vtk package to use new variable names and adds some
dependencies.

- add version 9.1.0
- add version 1.4.2 to gl2ps package. This is needed to use gl2ps as a
  dependency.
- new package: utf8cpp, used as a dependency for version 9.
- add dependencies when possible
    - pugixml
    - libogg
    - libtheora
    - utf8cpp
    - gl2ps
    - proj
- turn off configuring MPI if ~mpi
- always use the package-provided FindHDF5.cmake for versions up to 8.
  Version @9: already does this so does not need a patch.
- use new CMake variables for version 9
- remove unused CMake variables depending on version
2021-11-30 15:14:05 -07:00
Piotr Luszczek
d57bcc0b85 cpio: reuse patch for %gcc10: for %clang as well (#27589)
Also replace the patch download with a simple filter_file() to fix the duplicate definition.

Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2021-11-30 23:09:00 +01:00
Sebastian Schmitt
c96a6f990e Bump Brian2 (#27700)
* Bump Brian2

* Remove doc variant; the documentation is not built anyway
2021-11-30 11:32:04 -07:00
Marco De La Pierre
88cf4ce865 updated recipe for quantum-espresso 6.8 (#27607) 2021-11-30 10:05:53 -08:00
Manuela Kuhn
5e193618cb gtk-doc: fix source url and add 1.33.2 (#27719) 2021-11-30 09:29:06 -07:00
Davide Mancusi
0e83c0e792 gtkplus: fix generation of configure_args (#27718) 2021-11-30 17:26:39 +01:00
Massimiliano Culpo
645a7dc14c spack audit: fix API calls (#27713)
This broke in #24858
2021-11-30 14:59:55 +01:00
Adam J. Stewart
5044df88ab Add citation information to GitHub (#27518) 2021-11-30 01:37:50 -07:00
Manuela Kuhn
ab2bc933b8 py-idna: add 3.3 (#27708) 2021-11-29 21:28:57 -07:00
Valentin Volkl
b13778c2c6 py-iminuit: add new versions 2.8.4 (#27680)
* py-iminuit: add new versions 2.8.4

* py-iminuit: update python dependency

* py-iminuit: fix typo

* flake8
2021-11-29 20:19:52 -07:00
Manuela Kuhn
54df6d0a56 py-setuptools: add 59.4.0 (#27692) 2021-11-29 19:52:50 -07:00
Manuela Kuhn
44746dfbd6 r-tidyverse: add 1.3.1 (#27291) 2021-11-29 18:24:44 -06:00
Harmen Stoppels
a86d574208 dsfmt package: fix shared libs (#27628)
Also:

* Remove 2.2.3 (doesn't build shared libs)
* Add versions 2.2.4 and 2.2.5
2021-11-29 15:57:32 -08:00
Satish Balay
cb71308d22 petsc: archive configure.log make.log from the build (#27677)
Also enable verbose build
2021-11-29 16:10:50 -07:00
Ben Darwin
02b4a65086 py-intensity-normalization: add package (at version 2.1.1) (#27593)
* py-scikit-fuzzy: new package (at version 0.4.2)

* py-intensity-normalization: new package (at version 2.1.1)
2021-11-29 15:33:16 -06:00
Manuela Kuhn
838294b10d py-gevent: add 21.8.0 (#27699) 2021-11-29 15:29:09 -06:00
Manuela Kuhn
22eebff261 py-pyparsing: add 3.0.6 (#27693)
* py-pyparsing: add 3.0.6

* Add suggestions
2021-11-29 14:02:08 -07:00
Manuela Kuhn
a98e8ad4b5 py-packaging: add 21.3 (#27691)
* py-packaging: add 21.3

* Update var/spack/repos/builtin/packages/py-packaging/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-29 13:56:09 -07:00
Bernhard Kaindl
2d34082b0c llvm: Fix building llvm@4:9 using %clang@6: and %gcc@10: (#27233)
Add z3 variant, fix @:9%gcc@9: with glibc2.31, fix no_cyclades range
2021-11-29 12:48:18 -08:00
Manuela Kuhn
1f1da6806c py-tifffile: add 2021.11.2 (#27695) 2021-11-29 12:52:55 -07:00
Manuela Kuhn
647a26bc9e py-httpretty: add 1.1.4 (#27687) 2021-11-29 11:42:58 -06:00
Manuela Kuhn
561b0ca029 py-zope-interface: add 5.4.0 (#27596) 2021-11-29 11:42:00 -06:00
Manuela Kuhn
3c5295ee8a py-gast: add 0.5.3 (#27686) 2021-11-29 11:40:59 -06:00
Manuela Kuhn
914cf5ff87 py-importlib-metadata: add 4.8.2 (#27690) 2021-11-29 11:40:17 -06:00
Thomas Madlener
ecb588740a Speed up install of environments with dev packages (#27167)
* only check file modification times once per dev package
2021-11-29 09:34:23 -08:00
Manuela Kuhn
4858a26400 py-humanize: add 3.12.0 (#27688) 2021-11-29 10:58:09 -06:00
Eric Brugger
995a29d2f3 VisIt: remove upper range restriction on llvm. (#27648) 2021-11-29 17:38:31 +01:00
Adam J. Stewart
fcd5f960f8 py-matplotlib: add v3.5.0 (#27486)
* py-matplotlib: add v3.5.0

* Remove deprecated versions, update setup.cfg filename

* Add additional dependencies, save config file

* Add patch to fix matplotlibrc
2021-11-29 17:19:01 +01:00
Howard Pritchard
54d88f1a1c OpenMPI: add version 4.1.2 (#27645)
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2021-11-29 16:45:20 +01:00
Adam J. Stewart
fee8ac9721 py-scipy: add v1.7.3 (#27649) 2021-11-29 16:44:17 +01:00
Manuela Kuhn
1326e9d538 py-greenlet: add v1.1.2 (#27594) 2021-11-29 16:43:38 +01:00
Bernhard Kaindl
bd0960cda8 gpgme: Add 0.16.0 and address test suite issues (#27604)
Part of the gpgme test suite runs by default even during the normal
make and make install phases, creating a public keyring in ~/.gnupg.
Prevent this, avoid build failures in containers caused by another
problem of the test suite, and fix a test case of the new 0.16.0 release.
2021-11-29 16:43:12 +01:00
Ethan Stam
cfd2ea0cff ParaView: Build edition strings should be uppercase (#27534) 2021-11-29 16:38:25 +01:00
Adam J. Stewart
e5e9dc2d92 py-pytorch-lightning: add v1.5.3 (#27650) 2021-11-29 16:31:58 +01:00
Derek Ryan Strong
f8d03b1060 pixz: add v1.0.7 (#27651) 2021-11-29 16:29:10 +01:00
Harmen Stoppels
037ece674b distro: don't use deprecated linux_distribution (#27658) 2021-11-29 16:26:19 +01:00
Nicolas Cornu
87cf38a427 Random123: add v1.14.0 from github (#27660) 2021-11-29 16:13:50 +01:00
Massimiliano Culpo
0d10408a25 bootstrap: restrict patchelf to v0.13.x (#27685) 2021-11-29 15:41:25 +01:00
Bernhard Kaindl
48d98b4c9e podman: new package (with dependencies) (#27603)
When the uidmap tools are installed on a system, this allows running
containers as an unprivileged user (rootless and daemonless), similar
to singularity, but with a familiar CLI: "alias docker=podman"

This is helpful for running e.g. spack builds in containers to reproduce
build failures from CI without requiring an installation of docker.
The required dependencies of podman are added as well.
2021-11-29 15:39:19 +01:00
Valentin Volkl
64c4fbd220 edm4hep: update hepmc dependency (#27678) 2021-11-29 14:55:07 +01:00
Bernhard Kaindl
af0ba4e3ef gtkplus: fix build of GTK2 versions with gcc-11, skip X tests (#26970)
Do not attempt to patch a nonexistent file for old versions
when building with gcc-11. Skip the build-time tests, as they all access
the X DISPLAY and open many windows on the screen.
2021-11-29 14:46:51 +01:00
Harmen Stoppels
5dce4d79bd patchelf: add v0.13.1, v0.14, v0.14.1 (#27681) 2021-11-29 13:46:15 +01:00
Paul Ferrell
c0edb17b93 Handle byte sequences which are not encoded as UTF8 while logging. (#21447)
Fix builds which produce lines with non-UTF8 output while logging.
The alternative is to read in binary mode and then decode while
ignoring errors.
2021-11-29 13:27:02 +01:00
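
A minimal sketch of the decode-while-ignoring-errors approach described above (plain Python, not Spack's actual logging code; the file name is hypothetical):

```python
# Read the build output as raw bytes, then decode leniently so a stray
# non-UTF-8 byte sequence cannot abort logging.
with open('build.log', 'rb') as log:
    raw = log.read()
text = raw.decode('utf-8', errors='ignore')  # drop undecodable bytes
print(text)
```
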
Vanessasaurus
bdde70c9d3 "branch" spelling mistake in libbeato (#27682)
I was browsing package metadata, as one does on a Sunday, and stumbled across a new kind of version attribute - brancch! I suspect this is supposed to be "branch."
2021-11-29 13:17:40 +01:00
iarspider
8ff81d4362 Fix py-astroid recipe (#27670) 2021-11-28 11:07:53 -06:00
Mark Grondona
1b71ceb384 add flux-core v0.31.0 and flux-sched v0.20.0 (#27336)
Update flux-core and flux-sched package.py to include latest releases.

For flux-sched:

 - Add patch to disable false-positive-happy valgrind test
 - pin yaml-cpp to 0.6.3 due to issue described at:
   https://github.com/flux-framework/flux-sched/issues/886
2021-11-28 02:18:00 +01:00
Joseph Wang
0ab5828d0d gdk-pixbuf: Fix 2.42.2 and add 2.42.6 (#27254)
Starting with meson 0.60, unknown args produce errors and
the -Dx11 arg is only present in @:2.40
https://gitlab.gnome.org/GNOME/gtk-osx/-/issues/44

Add tiff variant: disabled by default since it fails part of the tests.
Only libtiff@:3.9 passes, but these old versions have severe security issues.

Deprecate @:2.41 as they are affected by the high-severity CVE-2021-20240:
https://nvd.nist.gov/vuln/detail/CVE-2021-20240
2021-11-28 01:30:14 +01:00
iarspider
d5b243de14 py-scipy: update pybind11 versions (#27679) 2021-11-27 16:03:43 -06:00
iarspider
1d8d975721 Apply requested fix from #27643 (#27672) 2021-11-26 16:45:38 -06:00
iarspider
96ce0e393d New version: py-send2trash 1.8.0 (#27583) 2021-11-26 14:35:03 -07:00
Jannis Schönleber
cf3762369c dyninst: use spack libiberty path (#25779) 2021-11-26 15:58:53 -05:00
iarspider
8138ebaa32 New version: py-tomlkit 0.7.2 (#27633) 2021-11-26 13:11:20 -07:00
iarspider
eb42514124 New version: py-threadpoolctl 3.0.0 (#27632) 2021-11-26 13:10:58 -07:00
Harmen Stoppels
4d6891162d Use bash in setup_git.sh (#27676) 2021-11-26 18:03:05 +00:00
iarspider
978f091822 Fix py-aiohttp recipe (#27671) 2021-11-26 08:17:01 -07:00
Massimiliano Culpo
a96f2f603b Bootstrap patchelf like GnuPG (#27532)
Remove a custom bootstrapping procedure to
use spack.bootstrap instead

Modifications:
* Reference count the bootstrap context manager
* Avoid SpackCommand to make the bootstrapping
  procedure more transparent
* Put back requirement on patchelf being in PATH for unit tests
* Add an e2e test to check bootstrapping patchelf
2021-11-26 15:32:13 +01:00
iarspider
2cdc758860 Fix version constraint in py-ipykernel (#27665)
* Fix version constraints in py-ipykernel and py-ipython

Before the fix:
```
$ spack spec  py-ipykernel@6.4.1  ^py-jupyter-client@7.0.6
==> Error: py-ipykernel@6.4.1 ^py-jupyter-client@7.0.6 is unsatisfiable, conflicts are:
  no version satisfies the given constraints
```

After the fix:
```
```
(thank god the old concretizer is still there - it provides sane error messages!)

* Fix py-ipython recipe

* Revert "Fix py-ipython recipe"

This reverts commit d65071665f.
2021-11-25 17:58:43 -07:00
Seth R. Johnson
9904159356 New package: py-sphinx-argparse (#27663) 2021-11-25 13:54:34 -06:00
Maxim Belkin
6e095a9741 module_file_support: update format for configuration (#27598) 2021-11-25 08:41:32 +01:00
iarspider
90b4f55001 New version: py-theano 1.0.5 (#27631) 2021-11-24 18:16:48 -07:00
iarspider
60e683f5d3 New version: py-requests 2.26.0 (redo of #27579) (#27647) 2021-11-24 17:38:01 -07:00
iarspider
a9a6e00d14 New version: py-rsa 4.7.2 (#27581) 2021-11-24 16:52:44 -07:00
iarspider
6fa0feb7de New version: py-rich 10.14.0 (redo of #27580) (#27646) 2021-11-24 16:04:53 -07:00
iarspider
0db93a5dea py-virtualenv package: add version 20.10.0; new dependency py-distlib (#27637)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-24 15:38:03 -07:00
Massimiliano Culpo
270ba10962 spack flake8: remove deprecated command (#27290)
The "spack flake8" command wwas deprecated in favor
of "spack style". The deprecation wwarning is in the
0.17.X series, so removing it for v0.18.x
2021-11-24 14:20:11 -08:00
iarspider
f8025d4f88 New version: py-requests-oauthlib 1.3.0 (#27578) 2021-11-24 16:20:02 -06:00
liuyangzhuan
83f97e8eba Adding packages for gptune and its dependencies (#26936)
* added package gptune with all its dependencies: adding py-autotune, pygmo, py-pyaml, py-autotune, py-gpy, py-lhsmdu, py-hpbandster, pagmo2, py-opentuner; modifying superlu-dist, py-scikit-optimize

* adding gptune package

* minor fix for macos spack test

* update patch for py-scikit-optimize; update test files for gptune

* fixing gptune package style error

* fixing unit tests

* a few changes reviewed in the PR

* improved gptune package.py with a few newly added/improved dependencies

* fixed a few style errors

* minor fix on package name py-pyro4

* fixing more style errors

* Update var/spack/repos/builtin/packages/py-scikit-optimize/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* resolved a few issues in the PR

* fixing file permissions

* a few minor changes

* style correction

* minor correction to jq package file

* Update var/spack/repos/builtin/packages/py-pyro4/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* fixing a few issues in the PR

* adding py-selectors34 required by py-pyro4

* improved the superlu-dist package

* improved the superlu-dist package

* moree changes to gptune and py-selectors34 based on the PR

* Update var/spack/repos/builtin/packages/py-selectors34/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-24 16:06:43 -06:00
Brian Spilner
d6b0c838dd cdo package: add version 2.0.1 (#27608) 2021-11-24 13:20:28 -08:00
Paul
94bcd21ccd Go package: add versions 1.17.3 and 1.16.10 (#27614) 2021-11-24 13:10:23 -08:00
Satish Balay
d8ac578ce1 petsc: add variant kokkos (with +cuda) (#27624) 2021-11-24 13:09:14 -08:00
Tim Haines
f9bae91dea Dyninst package: Add versions 12.0.0 and 12.0.1 (#27623) 2021-11-24 12:23:30 -08:00
iarspider
6147b47719 py-terminado package: add version 0.12.1 (#27629) 2021-11-24 12:03:04 -08:00
iarspider
191de76ccb py-testpath package: add version 0.5.0 (#27630) 2021-11-24 12:02:18 -08:00
iarspider
b500d40a25 py-uproot package: add version 4.1.8 (#27635) 2021-11-24 12:01:25 -08:00
iarspider
43c663c5cf py-virtualenv-clone package: add version 0.5.7 (#27636) 2021-11-24 11:58:41 -08:00
iarspider
3fdab96728 py-virtualenvwrapper package: add version 4.8.4 (#27638) 2021-11-24 11:45:36 -08:00
iarspider
fcd51d81d9 py-wcwidth package: add version 0.2.5 (#27639) 2021-11-24 11:44:51 -08:00
iarspider
83a2014ede py-websocket-client package: add version 1.2.1 (#27640) 2021-11-24 11:43:42 -08:00
iarspider
1c38810e11 py-wheel package: add version 0.37.0 (#27641) 2021-11-24 11:42:02 -08:00
Maciej Wójcik
10ca5f1cd2 GROMACS-SWAXS package: add versions including 2021.4-0.1 (#27642) 2021-11-24 11:41:16 -08:00
iarspider
e7eaebebd4 py-yarl package: add version 1.7.2 (#27643) 2021-11-24 11:40:06 -08:00
Robert Cohn
76ad803f25 intel-oneapi-mkl: add cluster libs and option for static linking (#26256) 2021-11-24 11:04:05 -08:00
Harmen Stoppels
dbf67d912c Make patchelf test use the realpath (#27618)
I think this test should be removed, but as long as it stays, it should at
least follow the symlink, because it fails for me if I let spack build
patchelf and have a symlink in a view.
2021-11-24 11:17:23 +01:00
Massimiliano Culpo
5c3dfacdc5 Update distro to v1.6.0 (#27263) 2021-11-24 10:10:11 +00:00
albestro
635984f077 umpire: fix gcc@10.3.0 conflict (#27620)
Make sure that `gcc@10.3.0-[number]` also matches the conflict.
2021-11-24 02:49:51 -07:00
Massimiliano Culpo
70d5d234db Update Jinja2 to v2.11.3 and MarkupSafe to v1.1.1 (#27264) 2021-11-24 10:21:35 +01:00
Massimiliano Culpo
12da0a9a69 Update six to v1.16.0 (#27265) 2021-11-24 10:20:04 +01:00
Harmen Stoppels
97ebcb99b4 libblastrampoline: use tarballs and add maintainer (#27619)
* Use tarballs for libblastrampoline

* Add myself as maintainer
2021-11-24 09:45:57 +01:00
Vanessasaurus
a5bd6acd90 Adding oras go package (#27605)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-11-24 07:06:29 +01:00
Sreenivasa Murthy Kolam
d6d78b8457 rocm-tensile: Add new GPUs to tensile-architecture list (#26913) 2021-11-23 18:35:16 -08:00
Thomas Dickerson
c263b64d2e Racket package: disable parallel build; add variants (#27506)
- Prevent `-j` flags to `make`, which has been known to cause problems
  with Racket builds.
- Add variants for various common build flags, including support
  for both versions of the Racket VM environment.

In addition:

- Prefer the minimal release to improve install times. Bells and
  whistles carry their own runtime dependencies and should be
  installed via `raco`. An enterprising user may even create a
  `RacoPackage` class to make spack aware of `raco` installed packages.
- Match the official version numbering scheme.
2021-11-23 16:45:18 -08:00
Harmen Stoppels
dfc95cdf1c llvm: use ninja by default (#27521)
* llvm: use ninja by default

* Use ninja for omp when it's not a runtime
2021-11-23 16:28:17 -08:00
John Wohlbier
0f1c04ed7e Add encryption library packages: helib, palisade-development, and seal (#27495)
Add packages for three common homomorphic encryption libraries:
HElib, Palisade, and SEAL. Also add a package for the number-theoretic library NTL.
2021-11-23 15:11:37 -08:00
Harmen Stoppels
cced832cac Fix leaky tests (#27616)
* fix: cc.py should use a function not session scope
* fix: don't let build env vars leak to other tests
* fix: don't leak build env in dev_build test
2021-11-23 14:10:48 -08:00
Manuela Kuhn
2d20e557df py-flask: add 2.0.2 (#27615) 2021-11-23 11:11:12 -07:00
Vanessasaurus
14607b352c adding new package go cosign (#27606)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-11-23 09:56:45 -08:00
Massimiliano Culpo
fa7189b480 Remove support for Python 2.6 (#27256)
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code

Updating the vendored dependencies will be done in separate PRs
2021-11-23 09:06:17 -08:00
Manuela Kuhn
812663de62 py-cython: add 3.0.0a9 (#27609)
* py-cython: add 3.0.0a9

* Update var/spack/repos/builtin/packages/py-cython/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-23 16:33:29 +00:00
Larry Knox
f96e3371a0 Update to latest commit hash for vol-log-based. (#27612)
Update minimum hdf5 version to 1.13.0.
2021-11-23 10:09:19 -05:00
Larry Knox
8476a2e4a1 Add hdf5-vol-external-passthrough package. (#27289)
* Add hdf5-vol-external-passthrough package.

* Add version v1.0 for vol-external-passthrough release.

* Remove incorrect comment.

* Add source url and checksum.  Update HDF5 minimum version.
2021-11-23 10:06:48 -05:00
Nathan Hanford
2104f1273a bugfix: Allow legacy tests to be read after hash break (#26078)
* added a test case

Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
2021-11-22 21:49:41 -07:00
iarspider
feb66cba01 New version: py-stevedore 3.5.0 (#27586) 2021-11-22 18:31:31 -06:00
Manuela Kuhn
b961491ad2 py-zope-event: add 4.5.0 (#27595) 2021-11-22 16:05:13 -07:00
iarspider
411b3ecc37 New version: py-singledispatch 3.7.0 (#27584)
* New version: py-singledispatch 3.7.0

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-22 15:44:07 -07:00
Robert Blake
3e63bc08ee ncurses: fix external package detection on OS X (#27490) 2021-11-22 14:37:58 -08:00
Piotr Luszczek
e90d5ad6cf Intel packages: add support for LLVM OpenMP (#26517) 2021-11-22 13:41:43 -08:00
Harmen Stoppels
abec10fcd5 suite-sparse package: add support for blas symbol suffix (#27510) 2021-11-22 13:18:39 -08:00
Massimiliano Culpo
6f8fd5b7a7 raja: fix recipe for old versions (#27591)
Raja didn't depend on camp prior to v0.12.0
2021-11-22 22:07:54 +01:00
Massimiliano Culpo
5eba5dc271 Make CUDA and ROCm architecture conditional (#27185)
* Make CUDA and ROCm architecture conditional

fixes #14337

The variants that specify which architecture to use
for CUDA and ROCm are now conditional on +cuda and
+rocm, respectively.

* cp2k: make all CUDA related variants conditional on +cuda
2021-11-22 07:54:19 -05:00
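
A minimal sketch of what a conditional architecture variant looks like in a recipe (the values and default below are placeholders, not the actual directive):

```python
# The variant only exists on specs that also have '+cuda'; on '~cuda'
# specs it is absent rather than set to a default value.
variant(
    'cuda_arch',
    description='CUDA architecture',
    values=('none', '70', '80'),
    default='none',
    when='+cuda',
)
```
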
Wouter Deconinck
5f10562ad1 [geant4] new version geant4@10.7.3 (#27568)
* [geant4] new version geant4@10.7.3

Release notes: https://geant4-data.web.cern.ch/ReleaseNotes/Patch4.10.7-3.txt, no build system changes.

* [geant4] depends_on geant4-data@10.7.3

* [geant4-data] new version @10.7.3
2021-11-22 12:37:11 +00:00
Hadrien G
596f2e7c4d acts: add v15, embrace conditional variants (#27529) 2021-11-22 10:49:54 +01:00
Mahendra Paipuri
1a95b979d8 Disable rsmi for hwloc when rocm is not enabled (#27547)
Co-authored-by: mahendrapaipuri <mahendra.paipuri@inria.fr>
2021-11-22 10:47:56 +01:00
Harmen Stoppels
0024e5cc9b Make _enable_or_disable(...) return an empty array for conditional variants whose condition is not met (#27504) 2021-11-22 10:47:09 +01:00
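
The practical effect, as a sketch of a hypothetical `configure_args` (the variant names are made up):

```python
def configure_args(self):
    args = []
    # If 'foo' is a conditional variant whose condition is not met on this
    # spec, enable_or_disable() now contributes no flags at all.
    args.extend(self.enable_or_disable('foo'))
    args.extend(self.with_or_without('bar'))
    return args
```
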
Adam J. Stewart
cda9d6d981 py-rapidfuzz: add new package (#27559) 2021-11-22 10:38:33 +01:00
Manuela Kuhn
d3b35c4063 py-itsdangerous: add 2.0.1 (#27556) 2021-11-22 10:38:10 +01:00
Harmen Stoppels
3bfd78e6b8 ccache: speed up builds, don't build tests (#27567) 2021-11-22 10:28:05 +01:00
psakievich
c167b7d532 nalu-wind: remove cxxstd from trilinos dependency to enable testing for cpp17 (#27563) 2021-11-22 10:18:30 +01:00
Wouter Deconinck
6d30d87d7e mesa: add v21.2.4 and v21.2.5 (#27571)
Last bugfix versions before the just-released @21.3.0, which will be part of a separate PR.
2021-11-22 10:14:00 +01:00
Thomas Green
e7b97c9473 harminv: update for blas/lapack (#27572) 2021-11-22 09:51:46 +01:00
Sebastian Ehlert
8d1e66a627 fpm: add v0.5.0 (#27575) 2021-11-22 09:48:10 +01:00
Adam J. Stewart
7c14c86ffd py-laspy: add new package (#27576) 2021-11-22 09:45:30 +01:00
Tracy-Pantleo
49b0952371 portage, tangram, wonton: update dependencies (#26886)
* tangram/wonton spack

* portage spack update

* wonton spack updated

* tangram spack updated

* white space removed

* Update package.py

* Update package.py

* Update package.py

* Update package.py

portage spack upstream

* Update package.py

for loop test

* Update package.py

style errors

* Update package.py

* tangram

* Update package.py

* Update package.py

* Update package.py

* Update package.py

* Update package.py

edits

* Update package.py

* Update package.py

remove @master

* Update package.py

remove @master

* Update var/spack/repos/builtin/packages/portage/package.py

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>

* Update var/spack/repos/builtin/packages/wonton/package.py

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>

* Update package.py

* Update package.py

* Update package.py

* Update package.py

Co-authored-by: Tracy Nicole Pantleo <tpantleo@varan.lanl.gov>
Co-authored-by: Tracy Nicole Pantleo - 361537 <tpantleo@darwin-fe1.lanl.gov>
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
2021-11-19 22:09:56 -05:00
iarspider
e47fa05eb8 New version: py-pyzmq 22.3.0 (#27543)
* New version: py-pyzmq 22.3.0

* Update var/spack/repos/builtin/packages/py-pyzmq/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-19 13:53:02 -07:00
Adam J. Stewart
2c5fc14bf9 py-progressbar2: add v3.55.0 (#27558) 2021-11-19 14:35:44 -06:00
Adam J. Stewart
fd0f0e564d py-ruamel-yaml-clib: link to python (#27557) 2021-11-19 14:35:31 -06:00
Manuela Kuhn
aa934d00e3 py-jinja2: add 3.0.3 (#27560) 2021-11-19 14:35:18 -06:00
Raghu Raja
f26d25e24b libfabric: Add v1.14.0 release (#27562)
Signed-off-by: Raghu Raja <raghu@enfabrica.net>
2021-11-19 13:35:00 -07:00
Jen Herting
99f954f1c1 [py-pickle5] New package (#27533)
Co-authored-by: Sid Pendelberry <sid@rit.edu>
2021-11-19 14:31:22 -06:00
Joseph Snyder
d759612523 Add connection specification to mirror creation (#24244)
* Add connection specification to mirror creation

This allows each mirror to contain information about the credentials
used to access it.

Update command and tests based on comments

Switch to only "long form" flags for the s3 connection information.
Use the "any" function instead of checking for an empty list when looking
for s3 connection information.

Split test to use the access token separately from the access id and key.
Use long flag form in test.

Add endpoint_url to available S3 options.

Extend the special parameters for an S3 mirror to accept the
endpoint_url parameter.

Add a test.

* Add connection information per URL not per mirror

Expand the mirror-based connection information to be per-URL.
This will allow a user to specify different S3 connection information
for both the fetch and the push URLs.

Add a parameter for "profile", another way of storing the id/secret pair.

* Switch from "access_profile" to "profile"
2021-11-19 15:28:34 -05:00
Manuela Kuhn
a5c50220db py-werkzeug: add 2.0.2 (#27555) 2021-11-19 12:17:06 -07:00
Manuela Kuhn
cdc5b32240 py-cycler: add 0.11.0 and get sources from pypi (#27550)
* py-cycler: add 0.11.0

* Update var/spack/repos/builtin/packages/py-cycler/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-19 11:20:14 -07:00
Manuela Kuhn
efe635a3f8 py-debugpy: add 1.5.1 (#27551) 2021-11-19 11:05:12 -07:00
iarspider
0288ef8733 New version: py-pyaml 6.0 (#27542) 2021-11-19 11:35:50 -06:00
iarspider
e08ebff139 New version: py-qtconsole 5.2.0 (#27554)
* New version: py-qtconsole 5.2.0

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-19 10:10:59 -07:00
Jen Herting
af1b61569f New package: py-lws (#27536)
* [py-lws] added package

* added cython support

* updates for upstream

* updated py-lws with cython only option

* changes to py-lws for flake8

* [py-lws]

- update copyright
- url -> pypi

Co-authored-by: Sid Pendelberry <sid@rit.edu>
2021-11-19 11:03:52 -06:00
Manuela Kuhn
7784ff55d6 py-filelock: add 3.4.0 (#27553) 2021-11-19 09:35:07 -07:00
iarspider
7be632b558 New version: openldap 2.6.0; (#27520)
* New version: openldap 2.6.0;
fix recipe for groff (requires pkg-config to find uchardet);
fix recipe for openldap (requires groff to build documentation)

* Restrict openldap versions of py-python-ldap and percona-server

* Update var/spack/repos/builtin/packages/groff/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-19 10:31:40 -06:00
iarspider
047062eedf New version: py-python-daemon 2.3.0 (#27519)
* New version: py-python-daemon 2.3.0

* Update package.py
2021-11-19 10:30:47 -06:00
iarspider
f472716e96 New version: py-pythran 0.10.0 (#27528)
* New version: py-pythran 0.10.0

* Update package.py
2021-11-19 10:21:21 -06:00
Manuela Kuhn
347da5f7b3 py-babel: add 2.9.1 (#27544) 2021-11-19 10:20:58 -06:00
Manuela Kuhn
4a18957093 py-coverage: add 6.1.2 (#27545) 2021-11-19 10:20:07 -06:00
Harmen Stoppels
3fcc6c669f Spack requires Python 2.7: from 0.18.0: (#27505)
* Spack requires Python 2.7: from 0.18.0:

* Better bounds
2021-11-19 10:18:21 -06:00
Manuela Kuhn
33b380937a py-cryptography: add 3.4.8 and 35.0.0 (#27548) 2021-11-19 10:16:21 -06:00
Manuela Kuhn
eb515b87d7 py-click: add 8.0.3 (#27549) 2021-11-19 10:13:24 -06:00
Harmen Stoppels
c5aee4d9b4 define_from_variant: return an empty string for non-existing variants (#27503)
This permits using conditional variants without a lot of boilerplate.
2021-11-19 14:10:00 +01:00
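
In practice a `cmake_args` can now list conditional variants unconditionally; a minimal sketch (the CMake variable names are hypothetical):

```python
def cmake_args(self):
    # define_from_variant() now yields an empty string when the named
    # variant does not exist on this spec, so no '+cuda' guard is needed
    # around the second entry.
    return [
        self.define_from_variant('ENABLE_CUDA', 'cuda'),
        self.define_from_variant('CMAKE_CUDA_ARCHITECTURES', 'cuda_arch'),
    ]
```
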
iarspider
3db918b1cd New version: py-pytools 2021.2.9 (#27531) 2021-11-19 02:23:03 -07:00
Michael Kuhn
eefff81289 otf: specify zlib prefix explicitly (#27491)
This causes `otfconfig` to also list zlib's library directory.
Otherwise, `-lz` cannot be found. Also remove `--with-zlibsymbols`,
which doesn't seem to be supported anymore.
2021-11-19 09:37:27 +01:00
Michael Davis
3375db12a5 Adding --reuse to dev-build command. (#27487) 2021-11-19 09:25:45 +01:00
Diego Alvarez
4d059ead8c nextflow: add v21.10.1 (#27538) 2021-11-19 09:21:14 +01:00
Eric Brugger
4cf71b2c7f VisIt: update for building with 3.2.1. (#27036)
Include several patches to vtk 8.1 for building on a
system with no system-installed X11 libraries or include files.

Specify specific versions of dependent packages that are known to work with 3.2.1.

Tested on spock.olcf.ornl.gov. The GUI came up and rendered images,
and an image was successfully saved using off-screen rendering with
data from curv2d.silo.
2021-11-19 09:05:13 +01:00
Massimiliano Culpo
57d3b02800 Remove spurious debug print (#27541) 2021-11-19 09:02:22 +01:00
haralmha
4d1f09c752 thepeg: fix version condition bug (#27501) 2021-11-19 08:56:00 +01:00
iarspider
c183111a61 py-rapidjson: add version 1.5 (#27527) 2021-11-18 17:52:06 -08:00
Manuela Kuhn
0b6ce17dc4 py-ply: get sources from pypi (#27526)
* py-ply: use pypi

* Add py-setuptools dependency
2021-11-18 16:52:51 -07:00
Massimiliano Culpo
e3cd91af53 Refactor bootstrap of "spack style" dependencies (#27485)
Remove the "get_executable" function from the
spack.bootstrap module. Now "flake8", "isort",
"mypy" and "black" will use the same
bootstrapping method as GnuPG.
2021-11-18 15:23:09 +01:00
iarspider
d0c4aeb0c0 Fix cyrus-sasl recipe: requires groff to build documentation (#27522) 2021-11-18 15:10:37 +01:00
Massimiliano Culpo
f981682bdc Allow recent pytest versions to be used with Spack (#25371)
Currently Spack vendors `pytest` at a version which is three major 
versions behind the latest (3.2.5 vs. 6.2.4). We do that since v3.2.5 
is the latest version supporting Python 2.6. Remaining so far
behind the currently supported versions, though, might introduce
some incompatibilities and is surely technical debt.

This PR modifies Spack to:
- Use the vendored `pytest@3.2.5` only as a fallback solution, 
  if the Python interpreter used for Spack doesn't provide a newer one
- Be able to parse `pytest --collect-only` in all the different output 
  formats from v3.2.5 to v6.2.4 and use it consistently for `spack unit-test --list-*`
- Updating the unit tests in Github Actions to use a more recent `pytest` version
2021-11-18 15:08:59 +01:00
Harmen Stoppels
8f7640dbef ci: run style unit tests only if we target develop (#27472)
Some tests assume the base branch is develop, but this branch may not
have been checked out.
2021-11-18 13:00:39 +01:00
Alec Scott
7ee736a158 Add Clingo v5.5.1 (#27512) 2021-11-18 06:27:01 +01:00
Glenn Johnson
b260efd9c4 new package: py-scikit-learn-extra (#27439)
* new package: py-scikit-learn-extra

* Remove extra blank line

* Add missing py-cython dependency
2021-11-17 23:13:19 -06:00
Manuela Kuhn
b4fcb4d43c py-envisage: add 6.0.1 (#27517) 2021-11-17 23:12:28 -06:00
haralmha
301a506aa2 py-pygraphviz: Add version 1.7 (#26712)
* Add version 1.7

* Fixup! Add version 1.7
2021-11-17 23:08:56 -06:00
Harmen Stoppels
40a6ac62d3 llvm: introduce [build/link]_llvm_dylib (#27450)
Apart from building a single dylib for LLVM, users should also be able
to link tools against it.
2021-11-17 20:10:59 -07:00
Alec Scott
d24d13559b Add Htslib v1.14, Samtools v1.14, and Bcftools v1.14 (#27515) 2021-11-17 14:30:18 -08:00
Axel Huebl
367c3d8a83 openPMD-validator: 1.1.0.2 (#27514)
Add the latest versions.
2021-11-17 14:26:23 -08:00
Alec Scott
684d9cef23 Add LMOD v8.5.27 (#27513) 2021-11-17 14:21:57 -08:00
Ben Bergen
c459adbf6a Updated Repository Information (#27496) 2021-11-17 10:02:41 -08:00
Jose E. Roman
d9a39d6128 New patch release SLEPc 3.16.1 (#27509) 2021-11-17 09:57:53 -08:00
Manuela Kuhn
26c5001ec4 py-ipykernel: add 5.5.6 (#27498) 2021-11-17 11:06:08 -06:00
Harmen Stoppels
c05746eac2 llvm: use LIBCXX_ENABLE_STATIC_ABI_LIBRARY=ON (#27448)
The previous workaround of using CMAKE_INSTALL_RPATH=ON was used to
avoid CMake trying to write an RPATH into the linker script libcxx.so,
which is nonsensical. See commit f86ed1e.

However, CMAKE_INSTALL_RPATH=ON seems to disable the build RPATH, which
breaks LLVM during the build when it has to locate its build-time shared
libraries (e.g. libLLVM.so). That required yet another workaround, where
some shared libraries were installed "by hand", so that they were picked
up from the install libdir. See commit 8a81229.

This was a dirty workaround, and also makes it impossible to use ninja,
since we explicitly invoked make.

This commit removes the two old workarounds and sets
LIBCXX_ENABLE_STATIC_ABI_LIBRARY=ON, so that libc++abi.a is linked into
libc++.so, which makes it enough to link with -lc++ or invoke clang++
with -stdlib=libc++, so that our install succeeds and linking LLVM's c++
standard lib is still easy.
2021-11-17 09:14:05 -07:00
Harmen Stoppels
d900abe7e1 openblas: add a symbol suffix variant (#27500)
Some packages use a 64_ or _64 symbol suffix for the ILP64 (64-bit
integer) interface of BLAS. In particular, if we want to support shim
libraries like libblastrampoline that provide both the 32-bit and 64-bit
integer versions of BLAS, it must be possible to distinguish between the
two.
2021-11-17 08:52:56 -07:00
Bernhard Kaindl
2765861705 qt+webkit: Build needs Py2, but mesa/Meson needs Py3 (#27466)
mesa inherits MesonPackage (since October 2020), which depends on python@3.
The conflicts('mesa') enables a regular build of `qt@5.7:5.15+webkit`
without having to specify the exact version, by causing the concretizer
to select mesa18, which does not depend on python@3.

Co-authored-by: Bernhard Kaindl <bernhard.kaindl@ait.ac.at>
2021-11-17 07:42:49 -05:00
Harmen Stoppels
ac8993ac38 New version of Spack and new conflicts (#27384) 2021-11-17 11:25:33 +01:00
Harmen Stoppels
cc62689504 Fix overly generic exceptions in log parser (#27413)
This type of error is skipped:

make[1]: *** [Makefile:222: /tmp/user/spack-stage/.../spack-src/usr/lib/julia/libopenblas64_.so.so] Error 1

but it's useful to have it, especially when a package sets a variable
incorrectly in makefiles
2021-11-17 11:24:14 +01:00
Vasileios Karakasis
9260c38408 Add ReFrame 3.9.1 (#27493) 2021-11-17 00:44:59 +00:00
iarspider
b4ac385d70 New version: py-pytest-cov 3.0.0; add 'toml' variant to py-coverage (#27478)
* New version: py-pytest-cov 3.0.0; add 'toml' variant to py-coverage

* Update package.py
2021-11-16 17:16:59 -07:00
iarspider
85a98fc758 New version: py-pylint 2.8.2; new package py-platformdirs (#27473)
* New version: py-pylint 2.8.2; new package py-platformdirs

* Update var/spack/repos/builtin/packages/py-platformdirs/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-pylint/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-16 17:10:52 -07:00
Harmen Stoppels
0221639e8a llvm: maintainers (#27482) 2021-11-16 15:41:02 -07:00
Brian Spilner
7ad9fc7fdb cdo: new release cdo-2.0.0 (#27457) 2021-11-16 22:08:43 +01:00
Robert Cohn
67cba372e8 Intel mpi: allow use of external libfabric (#27292)
Intel mpi comes with an installation of libfabric (which it needs as a
dependency). It can use other implementations of libfabric at runtime
though, so if you install a package that depends on `mpi` and
`libfabric`, you can specify `intel-mpi+external-libfabric` and ensure
that the Spack-built instance is used (both by `intel-mpi` and the
root).

Apply analogous change to intel-oneapi-mpi.
2021-11-16 12:55:24 -08:00
iarspider
b194b957ce New versions: py-pyasn1 0.4.8, py-pyasn1-modules 0.2.8 (#27453)
* New versions: py-pyasn1 0.4.8, py-pyasn1-modules 0.2.8

* Update package.py
2021-11-16 13:52:56 -07:00
Thomas Kluyver
df5892ac89 Add py-h5py 3.6.0 (#27476) 2021-11-16 13:28:08 -06:00
iarspider
a432d971b1 New version: py-pyrsistent 0.18.0 (#27477)
* New version: py-pyrsistent 0.18.0

* Update package.py

* Update var/spack/repos/builtin/packages/py-pyrsistent/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-16 12:20:04 -07:00
Manuela Kuhn
85cf8af026 py-pyqt-builder: add new package (#27484) 2021-11-16 12:11:30 -07:00
iarspider
a867320021 New version: py-pytest 6.2.5 (#27481)
* New version: py-pytest 6.2.5

* Update var/spack/repos/builtin/packages/py-pytest/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-16 12:11:08 -07:00
Harmen Stoppels
a70d917da6 Bump curl (#27471) 2021-11-16 10:30:28 -08:00
iarspider
b9314fd887 New version: py-pu 1.11.0 (#27452) 2021-11-16 11:49:42 -06:00
iarspider
de916bd27b New version: py-pycuda 2021.1 (#27458) 2021-11-16 11:46:23 -06:00
iarspider
9f6dd08e1c New version: py-pycurl 7.44.1 (#27470) 2021-11-16 11:40:43 -06:00
iarspider
db307f9946 New version: py-pymongo 3.12.1 (#27474) 2021-11-16 11:39:28 -06:00
Harmen Stoppels
7c3b146789 llvm: flang implies mlir (#27449) 2021-11-16 10:08:07 -07:00
Harmen Stoppels
54fc26bf34 Add Python 3.9.9 (#27475) 2021-11-16 10:17:23 -06:00
Harmen Stoppels
a780f96a25 Add a symbol suffix option to LLVM (#27445) 2021-11-16 06:06:07 -08:00
Arjun Guha
bb54e9a4be racket: fix URL extrapolation (#27459) 2021-11-16 08:38:47 +01:00
Bernhard Kaindl
ada598668b mesa: Use the llvm-config of spec['llvm'] for '+llvm' (#27235)
Fix builds on hosts where /usr/bin/llvm-config-* is found and provides an
incompatible version: Ensure that the llvm-config of spec['llvm'] is used.
2021-11-16 08:07:48 +01:00
Robert Cohn
8c7fdadeba Add a maintainer for Intel packages (#27465) 2021-11-16 08:03:12 +01:00
Ben Bergen
5a8271d0c6 lanl-cmake-modules: new package (#27447)
Several projects have not yet added modern CMake support. This package
adds find_package support for some of these projects.
2021-11-16 07:31:24 +01:00
Hans Fangohr
35bf91012f oommf: new package (#26933)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2021-11-15 15:04:08 +01:00
Erik Schnetter
beddcd6a6d pgplot: Correct build error with shared libraries (#27154)
When building pgplot as a shared library, dependencies need to be listed explicitly.
2021-11-15 13:51:43 +01:00
Thomas Madlener
6ce4a5ce8c lcio: add v2.17 (#27238) 2021-11-15 13:49:43 +01:00
Edgar Leon
4a71704ae4 mpibind: add python bindings (#27127) 2021-11-15 13:48:14 +01:00
Kyle Gerheiser
ce9ae3c70d Add module variables for NCEPLIBS (#27162)
Use setup_run_environment to search for libraries and set environment variables for module generation.

Libraries are installed into CMAKE_INSTALL_LIBDIR, which can be lib or lib64 depending on the machine, making it impossible to hardcode the path through modules.yaml.
2021-11-15 13:21:53 +01:00
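
A minimal sketch of that approach (not the actual NCEPLIBS code; the probed directories and the environment variable are assumptions):

```python
import os


def setup_run_environment(self, env):
    # CMAKE_INSTALL_LIBDIR may resolve to lib or lib64 depending on the
    # machine, so probe the install prefix instead of hardcoding a path.
    for libdir in ('lib', 'lib64'):
        path = os.path.join(self.prefix, libdir)
        if os.path.isdir(path):
            env.prepend_path('LD_LIBRARY_PATH', path)
```
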
Harmen Stoppels
55e853c160 mbedtls: fix shared libs needed section and add new version (#27285) 2021-11-15 13:10:51 +01:00
Sinan
90a92c4efc libspatialite: add v5.0.1 (#27181)
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
2021-11-15 11:48:08 +01:00
kwryankrattiger
fbfc27d707 ECP-DAV-SDK: remove catalyst (#27204)
Temporarily remove +catalyst, assumes catalyst is on when +paraview
2021-11-15 11:46:14 +01:00
Michael Kuhn
aac505d4ff libfuse: fix build with new glibc (#27376)
glibc 2.34+ breaks libfuse@2 (see https://bugs.gentoo.org/803923). While
we are at it, backport a few patches from upstream.
2021-11-15 11:29:44 +01:00
laestrada
e5a0986433 gchp: add v13.2.1 (#27318)
Co-authored-by: laestrada <lestrada00@gmail.com>
2021-11-15 11:28:25 +01:00
Glenn Johnson
2ea1037369 llvm: remove cyclades code from llvm-10:11 (#27385) 2021-11-15 11:27:00 +01:00
Glenn Johnson
faeea04712 star-ccm-plus: add version and environment setup (#27388)
- add version 16.06.008_01
- set up run environment
2021-11-15 11:26:05 +01:00
Glenn Johnson
cdcde36f51 perl-forks: handle non-threaded perl (#27392)
If the perl that perl-forks is built against is non-threaded, the build
system will drop into interactive mode to ask about simulating ithreads.
This causes the build to hang. Set FORKS_SIMULATE_USEITHREADS to avoid
going into interactive mode.
2021-11-15 11:24:36 +01:00
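
A minimal sketch of how such an answer is usually pre-seeded in a Spack recipe (the value set here is an assumption):

```python
def setup_build_environment(self, env):
    # Pre-answer the "simulate ithreads?" question so the build never
    # drops into interactive mode on non-threaded perls.
    env.set('FORKS_SIMULATE_USEITHREADS', '1')
```
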
Glenn Johnson
85a33bd2f1 perl-dbd-mysql: depend on mysql-client (#27393) 2021-11-15 11:24:09 +01:00
Glenn Johnson
19493e4932 ncl: add dependencies (#27394)
- zstd
- makedepend
2021-11-15 11:23:50 +01:00
Glenn Johnson
557556bbfd maker: add perl-dbd-mysql dependency (#27395) 2021-11-15 11:23:30 +01:00
Glenn Johnson
d01129e9c8 mariadb-c-client: add krb5 dependency (#27396) 2021-11-15 11:23:08 +01:00
Diego Alvarez
972ae2393a nextflow: add v21.10.0 (#27400) 2021-11-15 11:22:39 +01:00
Harmen Stoppels
21308eb5cc Turn some verbose messages into debug messages (#27408) 2021-11-15 11:21:37 +01:00
Valentin Volkl
4c0f5f85c0 edm4hep: add v0.4 (#27412) 2021-11-15 11:17:17 +01:00
Harmen Stoppels
9573eb5315 dsfmt: new package (#27415) 2021-11-15 11:15:42 +01:00
Glenn Johnson
d1df044b42 asciidoc: add autotools dependencies (#27418) 2021-11-15 11:13:00 +01:00
Glenn Johnson
9171d56146 czmq: use Spack's docbook (#27419) 2021-11-15 11:11:29 +01:00
Glenn Johnson
8dc8282651 dbus: use Spack's docbook (#27420) 2021-11-15 11:11:11 +01:00
Glenn Johnson
6e646e42a5 gpu-burn: set spack compiler (#27421) 2021-11-15 11:10:57 +01:00
Glenn Johnson
91b8a87fc7 interproscan: add missing dependency (#27423) 2021-11-15 11:10:27 +01:00
Glenn Johnson
5aaee953c5 libbeagle: add opencl variant (#27424) 2021-11-15 11:10:02 +01:00
Glenn Johnson
9d9fc522c8 libdrm: use Spack docbook (#27425) 2021-11-15 11:09:04 +01:00
Glenn Johnson
5e8368d0c9 libint: fix build (#27426)
- use project autogen.sh script
- set boost path
- make sure to build the compiler before the library
2021-11-15 11:08:48 +01:00
Glenn Johnson
c91b2b42a8 libzmq: use Spack docbook (#27427) 2021-11-15 11:07:30 +01:00
Desmond Orton
4cc42789db py-mxfold2: new package at v0.1.1 (#27383) 2021-11-15 11:07:02 +01:00
Glenn Johnson
f281b59394 hdf-eos5: use szip virtual (#27422) 2021-11-15 11:05:44 +01:00
Oliver Perks
4bd6a11c25 MEEP: update version and libctl (#27433) 2021-11-15 11:03:49 +01:00
Chris White
7b4c08db21 umpire: disable building docs (#27434) 2021-11-15 11:01:20 +01:00
Brian Van Essen
100b212538 Added support for new version of Cray libsci. (#27435) 2021-11-15 11:00:43 +01:00
Arjun Guha
9877c21c50 racket: add new package at v8.3.0 (#27446) 2021-11-15 10:56:26 +01:00
Adam J. Stewart
54cc7d0f30 py-pytorch-lightning: add v1.5.0 (#27266) 2021-11-15 10:49:35 +01:00
Adam J. Stewart
ee8f46e826 py-build: add new package (#27270) 2021-11-15 10:48:43 +01:00
Manuela Kuhn
d9a687af45 py-sip: add 5.5.0 and 6.4.0 (#27357) 2021-11-14 22:18:08 -06:00
iarspider
3706fef9af New version: py-pexpect 4.8.0 (#27401) 2021-11-13 17:31:56 -07:00
iarspider
76fed9c04d New version: py-plac; fix tests (#27405) 2021-11-13 17:28:27 -07:00
Glenn Johnson
7a00cef055 qt: turn off features if not supported by the kernel (#27438)
If building Qt on a system with a recent glibc but an older kernel, it
is possible that Qt configures features based on glibc that are not
supported in the kernel. This PR tests the kernel version and ensures
certain features are disabled if the kernel does not support them.
2021-11-13 15:39:50 -05:00
Manuela Kuhn
9fb9399487 py-pycortex: add new package (#27432) 2021-11-13 12:24:51 -06:00
Manuela Kuhn
94eb175e07 py-lxml: add 4.6.4 (#27440) 2021-11-13 12:19:46 -06:00
Manuela Kuhn
8e9ef0f6f6 py-neurokit2: add 0.1.5 (#27441) 2021-11-13 12:17:14 -06:00
Manuela Kuhn
02355d32bd py-deepdiff: add new package (#27442) 2021-11-13 12:14:46 -06:00
Manuela Kuhn
e3389e9017 py-pytest-runner: add 5.3.1 (#27443) 2021-11-13 12:12:59 -06:00
Manuela Kuhn
984715e859 py-imageio: add 2.10.3 (#27431) 2021-11-13 05:22:27 -07:00
Seth R. Johnson
31fec1f730 py-h5sh: replace github with pypi (#27409) 2021-11-12 19:01:25 -06:00
Satish Balay
446e4ff54b xsdk: add version @0.7.0 (#27250)
- add xsdk_depends_on() that propagates cuda, rocm options
- enable +cuda for hypre, mfem, amrex
- enable +rocm for mfem, sundials, magma, amrex, tasmanian, ginkgo, heffte
- deprecate @0.5.0
- remove 'debug' variant as it doesn't really exist
- heffte: remove binding to openmpi+cuda
- dtk: use 3.1-rc2 on macos (3.1-rc3 elsewhere)
- petsc: disable trilinos

Co-authored-by: Cody Balos <balos1@llnl.gov>
2021-11-12 17:20:35 -06:00
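A hedged sketch of the propagation helper mentioned in the first bullet above, assuming it lives in a Spack package.py where the `depends_on` directive is in scope; the real helper's signature may differ.

def xsdk_depends_on(spec, propagate=()):
    # Mirror selected variants of the xsdk meta-package on the dependency, so
    # `xsdk +cuda` implies `<dep> +cuda` and `xsdk ~cuda` implies `<dep> ~cuda`.
    for variant in propagate:             # e.g. ("cuda", "rocm")
        depends_on("{0} +{1}".format(spec, variant), when="+" + variant)
        depends_on("{0} ~{1}".format(spec, variant), when="~" + variant)
    if not propagate:
        depends_on(spec)

# Example use inside the class body (hypothetical spec string):
# xsdk_depends_on("hypre+mpi", propagate=("cuda",))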
Cody Balos
203a5a0f71 sundials: fix for rocm support (#27295) 2021-11-12 16:13:32 -06:00
Manuela Kuhn
98fbfa2739 py-pybids: add 0.14.0 (#27430) 2021-11-12 15:04:51 -07:00
Cody Balos
c4b52098f3 superlu-dist: fix case for cce fortran compiler (#27296)
* superlu-dist: fix build with cce fortran compiler
2021-11-12 15:27:15 -06:00
iarspider
3f4d0943d7 New version: py-pkgconfig 1.5.5 (#27402)
* New version: py-pkgconfig 1.5.5

* Actually add version 1.5.5; fix dependency
2021-11-12 12:59:00 -07:00
iarspider
67f4e2674f New version: py-prettytable 2.4.0; update homepage (#27407)
* New version: py-prettytable 2.4.0; update homepage

* Update var/spack/repos/builtin/packages/py-prettytable/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-12 12:52:48 -07:00
iarspider
441a76b646 New version: py-prometheus-client 0.12.0 ... (#27410)
* New version: py-prometheus-client 0.12.0; new dependency (py-twisted) version 21.7.0 + its dependencies

* Apply suggestions from code review (1/?)

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Changes from review (2/?)

* Changes from review (3/?)

* Changes from review (4/?)

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-12 11:31:47 -07:00
Harmen Stoppels
4b16c6fb0c bump python 3.9.8 (#27381) 2021-11-12 11:17:16 -06:00
Glenn Johnson
8d06abb8ed py-numpy: add support for intel-oneapi-mkl (#27390) 2021-11-12 11:16:09 -06:00
Glenn Johnson
77203c940c py-grpcio: add re2 dependency and set paths (#27391) 2021-11-12 11:14:45 -06:00
iarspider
f9c4cb5f8a New version: py-pathlib2 2.3.6 (#27398) 2021-11-12 11:12:03 -06:00
iarspider
1d166a2151 New version: py-pbr 5.7.0 (#27399) 2021-11-12 11:11:04 -06:00
Jen Herting
79e520957b New package: py-pyscipopt (#27363)
* build of pyscipopt w/ scipoptsuite 7.0.1

* removed type from scipoptsuite

* removed url

* updated to 3.4.0

* [py-pyscipopt] removed redundant python dependency and fixed dependency types

Co-authored-by: Sid Pendelberry <sid@rit.edu>
2021-11-12 10:07:59 -07:00
iarspider
9c386b181a New version: py-pkginfo 1.7.1 (#27403) 2021-11-12 11:04:36 -06:00
iarspider
c9e96018be New version: py-pluggy 1.0.0 (#27406) 2021-11-12 11:03:25 -06:00
Seth R. Johnson
a04cc4470e Add PyPI docs and warning in auto-generated package (#27404)
* docs: Add cross-references for pypi setup

* create: add warning for missing pypi
2021-11-12 10:58:44 -06:00
Andrew W Elble
407f68a67b new package: py-qiskit-aer and dependencies (#27198)
* new package: py-qiskit-aer and dependencies

+updates for dependencies

* flake8 fix

* forgot homepages

* incorporate feedback

* py-cmake and py-ninja to py-tweedledum, prevent py-cmake from
completely rebuilding cmake

* py-ninja: avoid rebuilding ninja

* other feedback fixes

* additional fixes
2021-11-12 10:45:07 -06:00
Massimiliano Culpo
c5dd3265ed Update the Public key of the tutorial (#27370) 2021-11-12 11:46:02 +01:00
Harmen Stoppels
9637ed05f5 Fix overloaded argparse keys (#27379)
Commands should not reuse option names defined in main.
2021-11-11 23:34:18 -08:00
Desmond Orton
11f6aac1e3 New Package: py-setuptools-cpp (#27382) 2021-11-11 23:30:18 -08:00
Adam J. Stewart
dbad9fd58a py-torchgeo: add version 0.1.0 (#27274) 2021-11-11 23:25:20 -08:00
Harmen Stoppels
62b1c3411c suite-sparse: general fixes (#27283)
- disable graphblas by default (very slow to compile)
- fix patch upperbound for cuda 11
- remove find_system_libs; not sure why it was added in the first place,
but it makes spack rather unusable as it introduces an rpath to /lib/...
2021-11-11 23:21:53 -08:00
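A minimal sketch of the first bullet above (making the slow component opt-in), assuming the usual `variant` directive in a package.py; the description text is illustrative.

variant("graphblas", default=False,
        description="Build GraphBLAS (very slow to compile)")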
Ben Cowan
f5e107e046 gcc package: Mac OS patch not needed for @12: 2021-11-11 23:15:38 -08:00
Robert Cohn
c91074514a intel-oneapi-mkl: define MKLROOT for dependents (#27302)
Fixes #27260
2021-11-11 23:12:48 -08:00
Adam J. Stewart
c8fdce28e6 PyTorch: fix compilation of +distributed~tensorpipe (#27337) 2021-11-11 23:09:10 -08:00
Tiziano Müller
38592b9e4f boost: ensure Spack wrappers are used for %intel (#27348)
Fixes #26117
2021-11-11 23:02:08 -08:00
Manuela Kuhn
f777cc079d py-formulaic: add new package (#27373) 2021-11-11 22:22:51 -07:00
Adam J. Stewart
17025f5354 GDAL: add v3.4.0 (#27288) 2021-11-11 19:31:51 -07:00
Charles Doutriaux
7858a2f05c Adds Sina Python Package to Spack (#27219) 2021-11-11 17:20:23 -08:00
eugeneswalker
a35d3b895b e4s ci: enable tau +mpi +python variants (#27273) 2021-11-11 17:14:31 -08:00
Chris White
1d8c2498ee update exercise build option for 0.14.0 (#27338) 2021-11-11 17:06:08 -08:00
Mark W. Krentel
f548243e9e elfutils: add version 0.186 (#27345) 2021-11-11 17:04:53 -08:00
Timothy Brown
eb9a520adb Adding a new ESMF version and checksum. (#27367) 2021-11-11 17:02:08 -08:00
Adam J. Stewart
89a59b98e7 py-scipy: add v1.7.2 (#27252) 2021-11-12 00:40:29 +01:00
Adam J. Stewart
e3645d666e py-numpy: add v1.21.4 (#27251) 2021-11-12 00:11:26 +01:00
Manuela Kuhn
dcef7b066c py-pillow: add 8.4.0 (#27371) 2021-11-11 16:04:45 -07:00
Seth R. Johnson
c9ed4bce66 py-wurlitzer: new package (#27328)
* py-wurlitzer: new package

* py-wurlitzer: use PyPI repository

* py-wurlitzer: use pypi attribute
2021-11-11 14:34:50 -07:00
Jen Herting
44b5c1e41a New package: py-parmed (#27362)
* [py-parmed] created template

* [py-parmed]

- added homepage
- added description
- removed fixmes
- added dependencies
2021-11-11 13:58:29 -06:00
iarspider
0d06bf6027 New version: py-nbconvert 6.2.0 (#27350)
* New version: py-nbconvert 6.2.0

* Update package.py
2021-11-11 11:58:56 -07:00
iarspider
a2eadbbcdf New version: py-pandocfilters 1.5.0 (#27359) 2021-11-11 11:07:53 -07:00
iarspider
347ad11b68 New version: py-parso 0.8.2 (#27360)
* New version: py-parso 0.8.2

* Update package.py
2021-11-11 10:58:45 -07:00
eugeneswalker
f2920cbf62 tau: add v2.31 (#27342) 2021-11-11 09:53:07 -08:00
Manuela Kuhn
7513a644b9 py-h5py: add v3.5.0 (#27344) 2021-11-11 18:51:52 +01:00
Harmen Stoppels
f097a24253 fuse-overlayfs: add maintainer, add v1.7, v1.7.1 (#27349) 2021-11-11 18:50:32 +01:00
iarspider
53168d4e57 New version: py-notebook 6.4.5 (#27353) 2021-11-11 09:55:44 -07:00
Manuela Kuhn
72f41a6591 py-interface-meta: add new package (#27340) 2021-11-11 10:48:46 -06:00
iarspider
00c84aa54f New version: py-oauthlib 3.1.1 (#27356)
* New version: py-oauthlib 3.1.1

* Update var/spack/repos/builtin/packages/py-oauthlib/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-11 09:43:50 -07:00
haralmha
e32dd27eb7 Generalize env var PYTHON to avoid version conflicts (#27334)
* Generalize env var PYTHON to avoid version conflicts

* Use available python executable

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-11 10:37:54 -06:00
iarspider
9ce0abf34f New version: py-nbconvert 0.5.5 (#27347) 2021-11-11 10:37:10 -06:00
iarspider
569a9a9dcc New version: py-nest-asyncio 1.5.1 (#27351) 2021-11-11 10:35:33 -06:00
iarspider
945594fa98 New version: py-networkx 2.6.3 (#27352) 2021-11-11 10:29:45 -06:00
Daryl W. Grunau
770bbba676 Trilinos: fix DEP_INCLUDE_DIRS definition (#27341) 2021-11-11 05:30:44 -05:00
Dan Bonachea
8d021a2915 upcxx: Update the UPC++ package to 2021.9.0 (#26996)
* upcxx: Update the UPC++ package to 2021.9.0

* Add the new release, and a missing older one.

* Remove the spack package cruft for supporting the obsolete build system that
  was present in older versions that are no longer supported.

* General cleanups.

Support for library versions older than 2020.3.0 is officially retired,
for two reasons:

1. Releases prior to 2020.3.0 had a required dependency on Python 2,
   which is [officially EOL](https://www.python.org/doc/sunset-python-2/)
   as of Jan 1 2020, and is no longer considered secure.
2. (Most importantly) The UPC++ development team is unable/unwilling to
   support releases more than two years old.  UPC++ provides robust
   backwards-compatibility to earlier releases of UPC++ v1.0, with very
   rare well-documented/well-motivated exceptions.  Users are strongly
   encouraged to update to a current version of UPC++.

NOTE: Most of the lines changed in this commit are simply re-indentation,
and thus might be best reviewed in a diff that ignores whitespace.

* upcxx: Detect Cray XC more explicitly

This change is necessary to prevent false matches occurring on new Cray Shasta
systems, which do not use the aries network but were incorrectly being treated
as a Cray XC + aries platform.

UPC++ has not yet deployed official native support for Cray Shasta, but this
change is sufficient to allow building the portable backends there.
2021-11-10 13:48:02 -08:00
Manuela Kuhn
8411cc934d py-setupmeta: add new package (#27327)
* py-setupmeta: add new package

* Update var/spack/repos/builtin/packages/py-setupmeta/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-10 13:43:46 -07:00
iarspider
5e0208a710 New version: py-mpld3 0.5.5 (#27329)
* New version: py-mpld3 0.5.5

* Update package.py

* Update package.py
2021-11-10 11:07:51 -07:00
Manuela Kuhn
335bcdc206 py-traits: add 6.3.1 (#27321) 2021-11-10 11:49:13 -06:00
Manuela Kuhn
89d98a780d py-traitlets: add 5.1.1 (#27322) 2021-11-10 11:48:14 -06:00
iarspider
b5e0ac35c8 New versions: py-more-itertools 8.9.0, 8.11.0 (#27325) 2021-11-10 11:45:27 -06:00
Manuela Kuhn
a31f2be91a py-wrapt: add 1.13.3 (#27326) 2021-11-10 11:42:28 -06:00
iarspider
4633f19971 New verison of py-mpmath: 1.2.1 (#27331) 2021-11-10 11:33:20 -06:00
iarspider
0e5cc0d79b New versions of py-multidict: 5.1.0 and 5.2.0 (#27332) 2021-11-10 10:31:46 -07:00
Veselin Dobrev
ceabb96c89 Add a patch for mfem v4.3 to support cusparse >= 11.4 (#27267) 2021-11-09 20:29:06 -06:00
iarspider
723f2f465b New version: py-markdown 3.3.4 (#27310)
* New version: py-markdown 3.3.4

* Changes from review

* Update package.py
2021-11-09 17:19:41 -07:00
Karen C. Tsai
376e5ffd6e fix typo (#27319) 2021-11-09 17:16:41 -07:00
Massimiliano Culpo
b16bfe4f5f spack tutorial: fix output to screen (#27316)
Spack was checking out v0.17, but the output reported v0.16
2021-11-09 15:10:22 -08:00
Maxim Belkin
0ab5d42bd5 Fix typos (ouptut) (#27317) 2021-11-09 16:11:34 -06:00
Harmen Stoppels
0322431c5c libssh2: add new versions and crypto=mbedtls|openssl (#27284) 2021-11-09 22:24:23 +01:00
Karen C. Tsai
81a0f7222c create py-sphinx-multiversion spackage (#27314) 2021-11-09 14:09:13 -07:00
iarspider
d8f4339c18 New version: py-mako 1.1.5 (#27308) 2021-11-09 13:39:34 -06:00
Manuela Kuhn
41d4473df4 py-pooch: add 1.5.2 (#27305) 2021-11-09 13:37:16 -06:00
Olivier Cessenat
31c932eec9 octave: override qtchooser, add bz2 variant (#26802)
* octave: override qtchooser, add bz2 variant

* fix texinfo not found from "spack install --test=root -v"

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2021-11-09 19:36:42 +01:00
Kyle Gerheiser
86c8c3306b Add variant for the --enable-two-level-namespace option in MPICH (#27230) 2021-11-09 15:48:01 +00:00
Jordan Galby
6e9c0a8155 Fix log-format reporter ignoring install errors (#25961)
When running `spack install --log-format junit|cdash ...`, install
errors were ignored. This made spack continue building dependents of
a failed install, ignore `--fail-fast`, and exit 0 at the end.
2021-11-09 15:47:32 +00:00
Larry Knox
8a836213f7 Add hdf5-vol-async package. (#26874)
* Add hdf5-vol-async package.
Add HDF5 1.13.0-rc6 version for building vol-async.

* Style test required another blank line.

* Change hdf5 dependency to develop-1.13+mpi+threadsafe.

* Update args for hdf5-vol-async.
2021-11-09 10:31:34 -05:00
Harmen Stoppels
6e6847d9c7 openlibm: new package (#27286) 2021-11-09 04:52:12 -08:00
Harmen Stoppels
321604b4e5 utf8proc: add new versions (#27287) 2021-11-09 04:50:34 -08:00
iarspider
a7363af47e New version: py-jsonpickle 2.0.0 (#27145) 2021-11-09 05:46:51 -07:00
eugeneswalker
560abc9c14 flecsi 2.1.0: requires gcc>=9 (#27195) 2021-11-09 12:32:41 +00:00
Martin Pokorny
65b0588e80 Fix usage of CMakePackage.define_from_variant() (#27307)
Added `self` target to method calls
2021-11-09 04:52:46 -07:00
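An illustrative before/after for the fix described above; the CMake option and variant names are made up.

# Before (NameError at build time -- the helper is a CMakePackage method):
#     return [define_from_variant("ENABLE_FOO", "foo")]
def cmake_args(self):
    # After: call the helper on `self`.
    return [self.define_from_variant("ENABLE_FOO", "foo")]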
Alec Scott
a193e6a204 Add v1.57.0 to Rclone (#27205) 2021-11-09 04:34:48 -07:00
Andrey Prokopenko
7763c7a1cc Deprecate pre-release ArborX version (#27248) 2021-11-09 12:04:54 +01:00
Mikhail Titov
e272e5039d rct: update packages (RP, RU) with new versions (#27306) 2021-11-09 12:03:35 +01:00
Jean-Paul Pelteret
8f88f34973 SymEngine: Update package to 0.8.1 (#27304) 2021-11-09 12:03:07 +01:00
Jean-Paul Pelteret
b4a2fd1fcd deal.II: Update for 9.3.2 release (#27303) 2021-11-09 12:02:51 +01:00
Massimiliano Culpo
752aa7adde Allow triggering the "Bootstrap" workflow manually (#27298) 2021-11-09 12:02:24 +01:00
Robert Underwood
2153ac5863 openmpi fix external find for 0.17 (#27255) 2021-11-09 11:58:14 +01:00
TZ
808375429d openmpi: does not support "--without-pmix" (#27258)
Open MPI currently fails to build with scheduler=slurm if +pmix is
not given: ``config_args += self.with_or_without('pmix', ...)``
results in --without-pmix, which causes a fatal error because
Open MPI's configure points out "Note that Open MPI does not
support --without-pmix."

The PR only adds "--with-pmix=PATH" if +pmix is part of the spec.
Otherwise, nothing is added and Open MPI can fall back to its
internal PMIX sources.

(The other alternative would be to depend on +pmix for
scheduler=slurm, as is done for +pmi.)
2021-11-09 11:55:56 +01:00
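A minimal sketch of the resulting behaviour under the usual AutotoolsPackage conventions, not the PR's exact lines.

def configure_args(self):
    args = []
    # Only reference PMIx when the variant is enabled; otherwise add nothing,
    # so configure falls back to Open MPI's internal PMIX sources.
    if "+pmix" in self.spec:
        args.append("--with-pmix={0}".format(self.spec["pmix"].prefix))
    return args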
Dylan Simon
2b990b400e make --enable-locks actually enable locks (#24675) 2021-11-09 10:52:08 +00:00
Desmond Orton
34be8d0670 Rebased develop to fix pipeline issues (#27209) 2021-11-09 11:47:59 +01:00
Desmond Orton
eae2027db7 Build Fix (#27210) 2021-11-09 11:46:51 +01:00
Hanwen
65aadf0332 aws-parallelcluster: add v2.11.3 (#27244) 2021-11-09 11:42:09 +01:00
Glenn Johnson
44bf54edee xrootd: add new version and variant (#27245)
- add version 5.3.2
- add krb variant
- add patch to not look for systemd
2021-11-09 11:41:39 +01:00
estewart08
daa8a2fe87 [AMD][rocm-openmp-extras] Update for ROCm 4.3. (#25482)
- Added new checksums for 4.3.
- Now using llvm-amdgpu ~openmp in order to use the rocm-device-libs
  build as external project in llvm-amdgpu package. We still need
  to pull device-libs in using resource for the build as some headers
  are not installed.
- Updated symlink creation to remove an existing link if present,
  to avoid issues on partial reinstalls when debugging.
- Adjusted the flang_warning to be part of the CMake options instead of
  a filter_file, for better compatibility.
- The dependency on hsa-rocr-dev created some problems as type was changed
  to the default build/link. This issue was because ROCr uses libelf and
  the build of openmp expects elfutils. When link is specified libelf
  was being found in the path first, causing errors. This was
  introduced with the llvm-amdgpu external project build of device-libs.
- On a bare-bones installation of sles15 it was noted that libquadmath0 was
  needed as a dependency. On 18.04, gcc-multilib was also needed.

* Workaround for libelf headers being used instead of elfutils.
2021-11-09 10:36:13 +00:00
Cameron Rutherford
b8d6351f80 Remove requiring hiop+shared in ExaGO. (#27141) 2021-11-09 11:35:10 +01:00
Cory Bloor
7cb3e9bf43 hipblas: Add spack test support (#27074) 2021-11-09 11:32:36 +01:00
luker
8ce37d9153 HPCG Fix for the Cray complier (CCE) (#25197)
CCE's c++ compiler (Clang based) does not accept the '-ftree-vectorizer-verbose=0'
flag. That flag is removed and a suitable substitution is made.
2021-11-09 10:26:02 +00:00
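A hedged sketch of this kind of substitution, assuming a Spack package context where `filter_file` is in scope; the makefile name and the replacement flag are assumptions, not the PR's literal change.

def patch(self):
    if self.spec.satisfies("%cce"):
        # Clang-based CCE rejects the GCC-only verbosity flag; swap in a
        # Clang-style vectorization remark instead (illustrative choice).
        filter_file("-ftree-vectorizer-verbose=0",
                    "-Rpass-analysis=loop-vectorize",
                    "setup/Make.MPI_GCC_OMP")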
Cory Bloor
a9c6881bbc rocsolver: Add spack test support (#26919) 2021-11-09 11:25:03 +01:00
Ben Darwin
7c118ee22b ants: fix build by setting BUILD_TESTING=OFF (#26768)
Due to Kitware API changes, default ANTs builds were failing, presumably for all versions (https://github.com/ANTsX/ANTs/issues/1236).
This commit defaults BUILD_TESTING to OFF, preventing calls against
these APIs and fixing all versions.
Note that the ANTs test suite was not clean anyway (e.g. ANTs/#842).
2021-11-09 11:19:32 +01:00
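A minimal sketch of the fix under standard CMakePackage conventions.

def cmake_args(self):
    # Keep the test suite (and its calls to the retired Kitware data APIs)
    # out of the build.
    return [self.define("BUILD_TESTING", False)]   # -> -DBUILD_TESTING:BOOL=OFF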
Valentin Volkl
a3dd0e7861 build_environment: clean *_ROOT variables (#26474)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2021-11-09 11:16:42 +01:00
Joseph Wang
abdd206f65 add libtirpc for rpc/types.h (#26628)
rpc/types.h has been removed in recent glibc
2021-11-09 11:14:28 +01:00
Sreenivasa Murthy Kolam
1d09474975 fix rvs build issue by using yaml-cpp spack recipe (#26000) 2021-11-09 11:09:15 +01:00
Tsuyoshi Yamaura
3484434fe2 New package: SCALE (#25574) 2021-11-09 00:19:47 -08:00
iarspider
f164bae4a3 Python tests: skip importing weirdly-named modules (#27151)
* Python tests: allow importing weirdly-named modules

e.g. with dashes in name

* SIP tests: allow importing weirdly-named modules

* Skip modules with invalid names

* Changes from review

* Update from review

* Update from review

* Cleanup
2021-11-08 19:22:33 +00:00
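A small self-contained illustration of the skip criterion described above; the real check in Spack's Python/SIP build systems may differ.

import keyword

def is_importable(name):
    # A dotted module path can be import-tested only if every component is a
    # valid, non-keyword Python identifier (so "my-module" is skipped).
    return all(part.isidentifier() and not keyword.iskeyword(part)
               for part in name.split("."))

print(is_importable("ruamel.yaml"))   # True  -> run the import test
print(is_importable("my-module"))     # False -> skip it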
Brian Van Essen
db504db9aa Added new hash values for recent versions. (#27294) 2021-11-08 19:02:30 +00:00
iarspider
bff894900e New version: py-lz4 3.1.3; use external lz4 instead of bundled one (#27282)
* New version: py-lz4 3.1.3; use external lz4 instead of bundled one

* Update var/spack/repos/builtin/packages/py-lz4/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Changes from review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-08 18:14:46 +00:00
iarspider
71f8460357 New versions of py-keyring: 23.1.0, 23.2.0, 23.2.1 (#27276) 2021-11-08 10:59:40 -06:00
iarspider
f9d0007546 New version: py-lazy-object-proxy 1.6.0 (#27277) 2021-11-08 10:58:26 -06:00
iarspider
14232b40d9 New version: py-lockfile 0.12.2 (#27279) 2021-11-08 10:56:34 -06:00
Seth R. Johnson
18d506c7fb trilinos: disable optional packages by default (#26815) 2021-11-08 08:29:35 -05:00
Maciej Wójcik
c384ac88a0 gromacs-swaxs: new package (#26902) 2021-11-08 13:51:23 +01:00
Maciej Wójcik
8f59707f04 gromacs-chain-coordinate: inherit from gromacs package (#27246) 2021-11-08 13:50:42 +01:00
Massimiliano Culpo
62f0c70a8c OpenSUSE: try to apply workaround to avoid CI error (#27275) 2021-11-08 11:56:19 +00:00
Ben Bergen
1fc753af66 Updated Versions (#27268)
- Added 7.1 release
- Added master git branch
2021-11-07 19:41:49 -07:00
Brice Videau
f02dec2bbb Update py-ytopt package and add necessary dependencies (#26879)
* Add spack package py-ytopt-team-ytopt and required dependencies.

* Removed old ytop package.

* Added author as maintainer.

* Fix style.

* Update var/spack/repos/builtin/packages/py-config-space/package.py

Update python dependency to 3.7

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-config-space/package.py

Remove run dependency from py-cython.

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-config-space/package.py

Added run dependency type for py-pyparsing.

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Updated description of py-dh-scikit-optimize.

* Source py-dh-scikit-optimize from PyPI.

* Added latest py-dh-scikit-optimize version.

* Made plots option False by default for py-dh-scikit-optimize.

* Removed 0.9.4 as it needs additional dependencies.

* Added version dependencies.

* Added missing py-joblib dependencies.

* Added run dependency type.

* Added python 2.7+ as supported for py-pyaml.

* Change py-config-space to py-configspace.

* Added dependency on python 3.6+.

* Fix py-configspace package naming.

* Changed py-autotune to py-ytopt-autotune.

* Update var/spack/repos/builtin/packages/py-pyaml/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Added debug variant with py-ray dependency.

* Added missing py-mpi4py missing dependency.

* Removed erroneous variant.

* Added debug variant to py-ray.

* Fix indentation.

* Removed debug variant of py-ray.

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2021-11-06 10:52:30 -05:00
745 changed files with 14199 additions and 6593 deletions


@@ -1,6 +1,8 @@
name: Bootstrapping
on:
# This Workflow can be triggered manually
workflow_dispatch:
pull_request:
branches:
- develop
@@ -76,13 +78,42 @@ jobs:
spack -d solve zlib
tree ~/.spack/bootstrap/store/
ubuntu-clingo-binaries-and-patchelf:
runs-on: ubuntu-latest
container: "ubuntu:latest"
steps:
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update -y && apt-get upgrade -y
apt-get install -y \
bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
make patch unzip xz-utils python3 python3-dev tree
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
- name: Setup repo and non-root user
run: |
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
useradd -m spack-test
chown -R spack-test .
- name: Bootstrap clingo
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack -d solve zlib
tree ~/.spack/bootstrap/store/
opensuse-clingo-sources:
runs-on: ubuntu-latest
container: "opensuse/leap:latest"
steps:
- name: Install dependencies
run: |
zypper update -y
# Harden CI by applying the workaround described here: https://www.suse.com/support/kb/doc/?id=000019505
zypper update -y || zypper update -y
zypper install -y \
bzip2 curl file gcc-c++ gcc gcc-fortran tar git gpg2 gzip \
make patch unzip which xz python3 python3-devel tree \


@@ -1,9 +1,8 @@
#!/usr/bin/env sh
#!/bin/bash -e
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
# With fetch-depth: 0 we have a remote develop
# but not a local branch. Don't do this on develop
if [ "$(git branch --show-current)" != "develop" ]
then
git branch develop origin/develop
# create a local pr base branch
if [[ -n $GITHUB_BASE_REF ]]; then
git fetch origin "${GITHUB_BASE_REF}:${GITHUB_BASE_REF}"
fi


@@ -24,9 +24,9 @@ jobs:
pip install --upgrade pip
pip install --upgrade vermin
- name: vermin (Spack's Core)
run: vermin --backport argparse --violations --backport typing -t=2.6- -t=3.5- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
run: vermin --backport argparse --violations --backport typing -t=2.7- -t=3.5- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
- name: vermin (Repositories)
run: vermin --backport argparse --violations --backport typing -t=2.6- -t=3.5- -vvv var/spack/repos
run: vermin --backport argparse --violations --backport typing -t=2.7- -t=3.5- -vvv var/spack/repos
# Run style checks on the files that have been changed
style:
runs-on: ubuntu-latest
@@ -114,7 +114,7 @@ jobs:
patchelf cmake bison libbison-dev kcov
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage[toml]
pip install --upgrade pip six setuptools pytest codecov coverage[toml]
# ensure style checks are not skipped in unit tests for python >= 3.6
# note that true/false (i.e., 1/0) are opposite in conditions in python and bash
if python -c 'import sys; sys.exit(not sys.version_info >= (3, 6))'; then
@@ -173,7 +173,7 @@ jobs:
sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage[toml]
pip install --upgrade pip six setuptools pytest codecov coverage[toml]
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -193,37 +193,6 @@ jobs:
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
with:
flags: shelltests,linux
# Test for Python2.6 run on Centos 6
centos6:
needs: [ validate, style, changes ]
runs-on: ubuntu-latest
container: spack/github-actions:centos6
steps:
- name: Run unit tests (full test-suite)
# The CentOS 6 container doesn't run with coverage, but
# under the same conditions it runs the full test suite
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
env:
HOME: /home/spack-test
SPACK_TEST_SOLVER: original
run: |
whoami && echo $HOME && cd $HOME
git clone "${{ github.server_url }}/${{ github.repository }}.git" && cd spack
git fetch origin "${{ github.ref }}:test-branch"
git checkout test-branch
bin/spack unit-test -x
- name: Run unit tests (only package tests)
if: ${{ needs.changes.outputs.with_coverage == 'false' }}
env:
HOME: /home/spack-test
ONLY_PACKAGES: true
SPACK_TEST_SOLVER: original
run: |
whoami && echo $HOME && cd $HOME
git clone "${{ github.server_url }}/${{ github.repository }}.git" && cd spack
git fetch origin "${{ github.ref }}:test-branch"
git checkout test-branch
bin/spack unit-test -x -m "not maybeslow" -k "package_sanity"
# Test RHEL8 UBI with platform Python. This job is run
# only on PRs modifying core Spack
@@ -272,7 +241,7 @@ jobs:
patchelf kcov
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage[toml] clingo
pip install --upgrade pip six setuptools pytest codecov coverage[toml] clingo
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -315,7 +284,7 @@ jobs:
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools
pip install --upgrade codecov coverage[toml]
pip install --upgrade pytest codecov coverage[toml]
- name: Setup Homebrew packages
run: |
brew install dash fish gcc gnupg2 kcov

CITATION.cff Normal file

@@ -0,0 +1,58 @@
# If you are referencing Spack in a publication, please cite the SC'15 paper
# described here.
#
# Here's the raw citation:
#
# Todd Gamblin, Matthew P. LeGendre, Michael R. Collette, Gregory L. Lee,
# Adam Moody, Bronis R. de Supinski, and W. Scott Futral.
# The Spack Package Manager: Bringing Order to HPC Software Chaos.
# In Supercomputing 2015 (SC15), Austin, Texas, November 15-20 2015. LLNL-CONF-669890.
#
# Or, in BibTeX:
#
# @inproceedings{Gamblin_The_Spack_Package_2015,
# address = {Austin, Texas, USA},
# author = {Gamblin, Todd and LeGendre, Matthew and
# Collette, Michael R. and Lee, Gregory L. and
# Moody, Adam and de Supinski, Bronis R. and Futral, Scott},
# doi = {10.1145/2807591.2807623},
# month = {November 15-20},
# note = {LLNL-CONF-669890},
# series = {Supercomputing 2015 (SC15)},
# title = {{The Spack Package Manager: Bringing Order to HPC Software Chaos}},
# url = {https://github.com/spack/spack},
# year = {2015}
# }
#
# And here's the CITATION.cff format:
#
cff-version: 1.2.0
message: "If you are referencing Spack in a publication, please cite the paper below."
preferred-citation:
type: conference-paper
doi: "10.1145/2807591.2807623"
url: "https://github.com/spack/spack"
authors:
- family-names: "Gamblin"
given-names: "Todd"
- family-names: "LeGendre"
given-names: "Matthew"
- family-names: "Collette"
given-names: "Michael R."
- family-names: "Lee"
given-names: "Gregory L."
- family-names: "Moody"
given-names: "Adam"
- family-names: "de Supinski"
given-names: "Bronis R."
- family-names: "Futral"
given-names: "Scott"
title: "The Spack Package Manager: Bringing Order to HPC Software Chaos"
conference:
name: "Supercomputing 2015 (SC15)"
city: "Austin"
region: "Texas"
country: "USA"
month: November 15-20
year: 2015
notes: LLNL-CONF-669890


@@ -125,6 +125,9 @@ If you are referencing Spack in a publication, please cite the following paper:
[**The Spack Package Manager: Bringing Order to HPC Software Chaos**](https://www.computer.org/csdl/proceedings/sc/2015/3723/00/2807623.pdf).
In *Supercomputing 2015 (SC15)*, Austin, Texas, November 15-20 2015. LLNL-CONF-669890.
On GitHub, you can copy this citation in APA or BibTeX format via the "Cite this repository"
button. Or, see the comments in `CITATION.cff` for the raw BibTeX.
License
----------------


@@ -33,11 +33,11 @@ import sys
min_python3 = (3, 5)
if sys.version_info[:2] < (2, 6) or (
if sys.version_info[:2] < (2, 7) or (
sys.version_info[:2] >= (3, 0) and sys.version_info[:2] < min_python3
):
v_info = sys.version_info[:3]
msg = "Spack requires Python 2.6, 2.7 or %d.%d or higher " % min_python3
msg = "Spack requires Python 2.7 or %d.%d or higher " % min_python3
msg += "You are running spack with Python %d.%d.%d." % v_info
sys.exit(msg)
@@ -54,8 +54,6 @@ spack_external_libs = os.path.join(spack_lib_path, "external")
if sys.version_info[:2] <= (2, 7):
sys.path.insert(0, os.path.join(spack_external_libs, "py2"))
if sys.version_info[:2] == (2, 6):
sys.path.insert(0, os.path.join(spack_external_libs, "py26"))
sys.path.insert(0, spack_external_libs)


@@ -420,6 +420,24 @@ Or when one variant controls multiple flags:
config_args += self.with_or_without('memchecker', variant='debug_tools')
config_args += self.with_or_without('profiler', variant='debug_tools')
""""""""""""""""""""
Conditional variants
""""""""""""""""""""
When a variant is conditional and its condition is not met on the concrete spec, the
``with_or_without`` and ``enable_or_disable`` methods will simply return an empty list.
For example:
.. code-block:: python
variant('profiler', when='@2.0:')
config_args += self.with_or_without('profiler')
will neither add ``--with-profiler`` nor ``--without-profiler`` when the version is
below ``2.0``.
""""""""""""""""""""
Activation overrides
""""""""""""""""""""


@@ -145,6 +145,20 @@ and without the :meth:`~spack.build_systems.cmake.CMakePackage.define` and
return args
Spack supports CMake defines from conditional variants too. Whenever the condition on
the variant is not met, ``define_from_variant()`` will simply return an empty string,
and CMake ignores the empty command-line argument. For example, the following
.. code-block:: python
variant('example', default=True, when='@2.0:')
def cmake_args(self):
return [self.define_from_variant('EXAMPLE', 'example')]
will generate ``'cmake' '-DEXAMPLE=ON' ...`` when `@2.0: +example` is met, but will
result in ``'cmake' '' ...`` when the spec version is below ``2.0``.
^^^^^^^^^^
Generators


@@ -125,12 +125,15 @@ The zip file will not contain a ``setup.py``, but it will contain a
``METADATA`` file which contains all the information you need to
write a ``package.py`` build recipe.
.. _pypi:
^^^^
PyPI
^^^^
The vast majority of Python packages are hosted on PyPI - The Python
Package Index. ``pip`` only supports packages hosted on PyPI, making
The vast majority of Python packages are hosted on PyPI (The Python
Package Index), which is :ref:`preferred over GitHub <pypi-vs-github>`
for downloading packages. ``pip`` only supports packages hosted on PyPI, making
it the only option for developers who want a simple installation.
Search for "PyPI <package-name>" to find the download page. Note that
some pages are versioned, and the first result may not be the newest
@@ -217,6 +220,7 @@ try to extract the wheel:
version('1.11.0', sha256='d8c9d24ea90457214d798b0d922489863dad518adde3638e08ef62de28fb183a', expand=False)
.. _pypi-vs-github:
"""""""""""""""
PyPI vs. GitHub
@@ -263,6 +267,9 @@ location, but PyPI is preferred for the following reasons:
PyPI is nice because it makes it physically impossible to
re-release the same version of a package with a different checksum.
Use the :ref:`pypi attribute <pypi>` to facilitate construction of PyPI package
references.
^^^^^^^^^^^^^^^^^^^^^^^^^
Build system dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -709,7 +716,7 @@ The package may have its own unit or regression tests. Spack can
run these tests during the installation by adding phase-appropriate
test methods.
For example, ``py-numpy`` adds the following as a check to run
after the ``install`` phase:
.. code-block:: python


@@ -30,6 +30,7 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('_spack_root/lib/spack/external'))
sys.path.insert(0, os.path.abspath('_spack_root/lib/spack/external/pytest-fallback'))
if sys.version_info[0] < 3:
sys.path.insert(


@@ -71,7 +71,7 @@ locally to speed up the review process.
new release that is causing problems. If this is the case, please file an issue.
We currently test against Python 2.6, 2.7, and 3.5-3.7 on both macOS and Linux and
We currently test against Python 2.7 and 3.5-3.9 on both macOS and Linux and
perform 3 types of tests:
.. _cmd-spack-unit-test:


@@ -1177,6 +1177,10 @@ completed, the steps to make the major release are:
If CI is not passing, submit pull requests to ``develop`` as normal
and keep rebasing the release branch on ``develop`` until CI passes.
#. Make sure the entire documentation is up to date. If documentation
is outdated submit pull requests to ``develop`` as normal
and keep rebasing the release branch on ``develop``.
#. Follow the steps in :ref:`publishing-releases`.
#. Follow the steps in :ref:`merging-releases`.


@@ -273,29 +273,30 @@ of the installed software. For instance, in the snippet below:
.. code-block:: yaml
modules:
tcl:
# The keyword `all` selects every package
all:
environment:
set:
BAR: 'bar'
# This anonymous spec selects any package that
# depends on openmpi. The double colon at the
# end clears the set of rules that matched so far.
^openmpi::
environment:
set:
BAR: 'baz'
# Selects any zlib package
zlib:
environment:
prepend_path:
LD_LIBRARY_PATH: 'foo'
# Selects zlib compiled with gcc@4.8
zlib%gcc@4.8:
environment:
unset:
- FOOBAR
default:
tcl:
# The keyword `all` selects every package
all:
environment:
set:
BAR: 'bar'
# This anonymous spec selects any package that
# depends on openmpi. The double colon at the
# end clears the set of rules that matched so far.
^openmpi::
environment:
set:
BAR: 'baz'
# Selects any zlib package
zlib:
environment:
prepend_path:
LD_LIBRARY_PATH: 'foo'
# Selects zlib compiled with gcc@4.8
zlib%gcc@4.8:
environment:
unset:
- FOOBAR
you are instructing Spack to set the environment variable ``BAR=bar`` for every module,
unless the associated spec satisfies ``^openmpi`` in which case ``BAR=baz``.
@@ -322,9 +323,10 @@ your system. If you write a configuration file like:
.. code-block:: yaml
modules:
tcl:
whitelist: ['gcc', 'llvm'] # Whitelist will have precedence over blacklist
blacklist: ['%gcc@4.4.7'] # Assuming gcc@4.4.7 is the system compiler
default:
tcl:
whitelist: ['gcc', 'llvm'] # Whitelist will have precedence over blacklist
blacklist: ['%gcc@4.4.7'] # Assuming gcc@4.4.7 is the system compiler
you will prevent the generation of module files for any package that
is compiled with ``gcc@4.4.7``, with the only exception of any ``gcc``
@@ -349,8 +351,9 @@ shows how to set hash length in the module file names:
.. code-block:: yaml
modules:
tcl:
hash_length: 7
default:
tcl:
hash_length: 7
To help make module names more readable, and to help alleviate name conflicts
with a short hash, one can use the ``suffixes`` option in the modules
@@ -360,11 +363,12 @@ For instance, the following config options,
.. code-block:: yaml
modules:
tcl:
all:
suffixes:
^python@2.7.12: 'python-2.7.12'
^openblas: 'openblas'
default:
tcl:
all:
suffixes:
^python@2.7.12: 'python-2.7.12'
^openblas: 'openblas'
will add a ``python-2.7.12`` version string to any packages compiled with
python matching the spec, ``python@2.7.12``. This is useful to know which
@@ -379,10 +383,11 @@ covered in :ref:`adding_projections_to_views`.
.. code-block:: yaml
modules:
tcl:
projections:
all: '{name}/{version}-{compiler.name}-{compiler.version}-module'
^mpi: '{name}/{version}-{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version}-module'
default:
tcl:
projections:
all: '{name}/{version}-{compiler.name}-{compiler.version}-module'
^mpi: '{name}/{version}-{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version}-module'
will create module files that are nested in directories by package
name, contain the version and compiler name and version, and have the
@@ -403,15 +408,16 @@ that are already in the LMod hierarchy.
.. code-block:: yaml
modules:
enable:
- tcl
tcl:
projections:
all: '{name}/{version}-{compiler.name}-{compiler.version}'
all:
conflict:
- '{name}'
- 'intel/14.0.1'
default:
enable:
- tcl
tcl:
projections:
all: '{name}/{version}-{compiler.name}-{compiler.version}'
all:
conflict:
- '{name}'
- 'intel/14.0.1'
will create module files that will conflict with ``intel/14.0.1`` and with the
base directory of the same module, effectively preventing the possibility to
@@ -431,16 +437,17 @@ that are already in the LMod hierarchy.
.. code-block:: yaml
modules:
enable:
- lmod
lmod:
core_compilers:
- 'gcc@4.8'
core_specs:
- 'python'
hierarchy:
- 'mpi'
- 'lapack'
default:
enable:
- lmod
lmod:
core_compilers:
- 'gcc@4.8'
core_specs:
- 'python'
hierarchy:
- 'mpi'
- 'lapack'
that will generate a hierarchy in which the ``lapack`` and ``mpi`` layer can be switched
independently. This allows a site to build the same libraries or applications against different
@@ -591,11 +598,12 @@ do so by using the environment blacklist:
.. code-block:: yaml
modules:
tcl:
all:
filter:
# Exclude changes to any of these variables
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
default:
tcl:
all:
filter:
# Exclude changes to any of these variables
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
The configuration above will generate module files that will not contain
modifications to either ``CPATH`` or ``LIBRARY_PATH``.
@@ -614,9 +622,10 @@ activated using ``spack activate``:
.. code-block:: yaml
modules:
tcl:
^python:
autoload: 'direct'
default:
tcl:
^python:
autoload: 'direct'
The configuration file above will produce module files that will
load their direct dependencies if the package installed depends on ``python``.
@@ -633,9 +642,10 @@ The allowed values for the ``autoload`` statement are either ``none``,
.. code-block:: yaml
modules:
lmod:
all:
autoload: 'direct'
default:
lmod:
all:
autoload: 'direct'
.. note::
TCL prerequisites


@@ -1,5 +1,5 @@
Name, Supported Versions, Notes, Requirement Reason
Python, 2.6/2.7/3.5-3.9, , Interpreter for Spack
Python, 2.7/3.5-3.9, , Interpreter for Spack
C/C++ Compilers, , , Building software
make, , , Build software
patch, , , Build software

lib/spack/env/cc vendored

@@ -248,7 +248,7 @@ case "$command" in
lang_flags=C
debug_flags="-g"
;;
c++|CC|g++|clang++|armclang++|icpc|icpx|pgc++|nvc++|xlc++|xlc++_r|FCC)
c++|CC|g++|clang++|armclang++|icpc|icpx|dpcpp|pgc++|nvc++|xlc++|xlc++_r|FCC)
command="$SPACK_CXX"
language="C++"
comp="CXX"

lib/spack/env/oneapi/dpcpp vendored Symbolic link

@@ -0,0 +1 @@
../cc


@@ -17,7 +17,7 @@
--------
* Homepage: https://pypi.python.org/pypi/argparse
* Usage: We include our own version to be Python 2.6 compatible.
* Usage: We include our own version to be Python 3.X compatible.
* Version: 1.4.0
* Note: This package has been slightly modified to improve
error message formatting. See the following commit if the
@@ -37,23 +37,15 @@
* Homepage: https://pypi.python.org/pypi/distro
* Usage: Provides a more stable linux distribution detection.
* Version: 1.0.4 (last version supporting Python 2.6)
functools
---------
* Homepage: https://github.com/python/cpython/blob/2.7/Lib/functools.py
* Usage: Used for implementation of total_ordering.
* Version: Unversioned
* Note: This is the functools.total_ordering implementation
from Python 2.7 backported so we can run on Python 2.6.
* Version: 1.6.0 (64946a1e2a9ff529047070657728600e006c99ff)
* Note: Last version supporting Python 2.7
jinja2
------
* Homepage: https://pypi.python.org/pypi/Jinja2
* Usage: A modern and designer-friendly templating language for Python.
* Version: 2.10
* Version: 2.11.3 (last version supporting Python 2.7)
jsonschema
----------
@@ -71,15 +63,7 @@
* Homepage: https://pypi.python.org/pypi/MarkupSafe
* Usage: Implements a XML/HTML/XHTML Markup safe string for Python.
* Version: 1.0
orderddict
----------
* Homepage: https://pypi.org/project/ordereddict/
* Usage: A drop-in substitute for Py2.7's new collections.OrderedDict
that works in Python 2.4-2.6.
* Version: 1.1
* Version: 1.1.1 (last version supporting Python 2.7)
py
--
@@ -120,7 +104,7 @@
* Homepage: https://pypi.python.org/pypi/six
* Usage: Python 2 and 3 compatibility utilities.
* Version: 1.11.0
* Version: 1.16.0
macholib
--------


@@ -150,8 +150,6 @@
": note",
" ok",
"Note:",
"makefile:",
"Makefile:",
":[ \\t]+Where:",
"[^ :]:[0-9]+: Warning",
"------ Build started: .* ------",
@@ -189,8 +187,6 @@
"/usr/.*/X11/XResource\\.h:[0-9]+: war.*: ANSI C\\+\\+ forbids declaration",
"WARNING 84 :",
"WARNING 47 :",
"makefile:",
"Makefile:",
"warning: Clock skew detected. Your build may be incomplete.",
"/usr/openwin/include/GL/[^:]+:",
"bind_at_load",

File diff suppressed because it is too large


@@ -1,47 +0,0 @@
#
# Backport of Python 2.7's total_ordering.
#
def total_ordering(cls):
"""Class decorator that fills in missing ordering methods"""
convert = {
'__lt__': [('__gt__', lambda self, other: not (self < other or self == other)),
('__le__', lambda self, other: self < other or self == other),
('__ge__', lambda self, other: not self < other)],
'__le__': [('__ge__', lambda self, other: not self <= other or self == other),
('__lt__', lambda self, other: self <= other and not self == other),
('__gt__', lambda self, other: not self <= other)],
'__gt__': [('__lt__', lambda self, other: not (self > other or self == other)),
('__ge__', lambda self, other: self > other or self == other),
('__le__', lambda self, other: not self > other)],
'__ge__': [('__le__', lambda self, other: (not self >= other) or self == other),
('__gt__', lambda self, other: self >= other and not self == other),
('__lt__', lambda self, other: not self >= other)]
}
roots = set(dir(cls)) & set(convert)
if not roots:
raise ValueError('must define at least one ordering operation: < > <= >=')
root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__
for opname, opfunc in convert[root]:
if opname not in roots:
opfunc.__name__ = opname
opfunc.__doc__ = getattr(int, opname).__doc__
setattr(cls, opname, opfunc)
return cls
@total_ordering
class reverse_order(object):
"""Helper for creating key functions.
This is a wrapper that inverts the sense of the natural
comparisons on the object.
"""
def __init__(self, value):
self.value = value
def __eq__(self, other):
return other.value == self.value
def __lt__(self, other):
return other.value < self.value

lib/spack/external/jinja2/LICENSE.rst vendored Normal file

@@ -0,0 +1,28 @@
Copyright 2007 Pallets
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,83 +1,44 @@
# -*- coding: utf-8 -*-
"""Jinja is a template engine written in pure Python. It provides a
non-XML syntax that supports inline expressions and an optional
sandboxed environment.
"""
jinja2
~~~~~~
from markupsafe import escape
from markupsafe import Markup
Jinja2 is a template engine written in pure Python. It provides a
Django inspired non-XML syntax but supports inline expressions and
an optional sandboxed environment.
from .bccache import BytecodeCache
from .bccache import FileSystemBytecodeCache
from .bccache import MemcachedBytecodeCache
from .environment import Environment
from .environment import Template
from .exceptions import TemplateAssertionError
from .exceptions import TemplateError
from .exceptions import TemplateNotFound
from .exceptions import TemplateRuntimeError
from .exceptions import TemplatesNotFound
from .exceptions import TemplateSyntaxError
from .exceptions import UndefinedError
from .filters import contextfilter
from .filters import environmentfilter
from .filters import evalcontextfilter
from .loaders import BaseLoader
from .loaders import ChoiceLoader
from .loaders import DictLoader
from .loaders import FileSystemLoader
from .loaders import FunctionLoader
from .loaders import ModuleLoader
from .loaders import PackageLoader
from .loaders import PrefixLoader
from .runtime import ChainableUndefined
from .runtime import DebugUndefined
from .runtime import make_logging_undefined
from .runtime import StrictUndefined
from .runtime import Undefined
from .utils import clear_caches
from .utils import contextfunction
from .utils import environmentfunction
from .utils import evalcontextfunction
from .utils import is_undefined
from .utils import select_autoescape
Nutshell
--------
Here a small example of a Jinja2 template::
{% extends 'base.html' %}
{% block title %}Memberlist{% endblock %}
{% block content %}
<ul>
{% for user in users %}
<li><a href="{{ user.url }}">{{ user.username }}</a></li>
{% endfor %}
</ul>
{% endblock %}
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
__docformat__ = 'restructuredtext en'
__version__ = '2.10'
# high level interface
from jinja2.environment import Environment, Template
# loaders
from jinja2.loaders import BaseLoader, FileSystemLoader, PackageLoader, \
DictLoader, FunctionLoader, PrefixLoader, ChoiceLoader, \
ModuleLoader
# bytecode caches
from jinja2.bccache import BytecodeCache, FileSystemBytecodeCache, \
MemcachedBytecodeCache
# undefined types
from jinja2.runtime import Undefined, DebugUndefined, StrictUndefined, \
make_logging_undefined
# exceptions
from jinja2.exceptions import TemplateError, UndefinedError, \
TemplateNotFound, TemplatesNotFound, TemplateSyntaxError, \
TemplateAssertionError, TemplateRuntimeError
# decorators and public utilities
from jinja2.filters import environmentfilter, contextfilter, \
evalcontextfilter
from jinja2.utils import Markup, escape, clear_caches, \
environmentfunction, evalcontextfunction, contextfunction, \
is_undefined, select_autoescape
__all__ = [
'Environment', 'Template', 'BaseLoader', 'FileSystemLoader',
'PackageLoader', 'DictLoader', 'FunctionLoader', 'PrefixLoader',
'ChoiceLoader', 'BytecodeCache', 'FileSystemBytecodeCache',
'MemcachedBytecodeCache', 'Undefined', 'DebugUndefined',
'StrictUndefined', 'TemplateError', 'UndefinedError', 'TemplateNotFound',
'TemplatesNotFound', 'TemplateSyntaxError', 'TemplateAssertionError',
'TemplateRuntimeError',
'ModuleLoader', 'environmentfilter', 'contextfilter', 'Markup', 'escape',
'environmentfunction', 'contextfunction', 'clear_caches', 'is_undefined',
'evalcontextfilter', 'evalcontextfunction', 'make_logging_undefined',
'select_autoescape',
]
def _patch_async():
from jinja2.utils import have_async_gen
if have_async_gen:
from jinja2.asyncsupport import patch_all
patch_all()
_patch_async()
del _patch_async
__version__ = "2.11.3"


@@ -1,22 +1,12 @@
# -*- coding: utf-8 -*-
"""
jinja2._compat
~~~~~~~~~~~~~~
Some py2/py3 compatibility support based on a stripped down
version of six so we don't have to depend on a specific version
of it.
:copyright: Copyright 2013 by the Jinja team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
# flake8: noqa
import marshal
import sys
PY2 = sys.version_info[0] == 2
PYPY = hasattr(sys, 'pypy_translation_info')
PYPY = hasattr(sys, "pypy_translation_info")
_identity = lambda x: x
if not PY2:
unichr = chr
range_type = range
@@ -30,6 +20,7 @@
import pickle
from io import BytesIO, StringIO
NativeStringIO = StringIO
def reraise(tp, value, tb=None):
@@ -46,6 +37,9 @@ def reraise(tp, value, tb=None):
implements_to_string = _identity
encode_filename = _identity
marshal_dump = marshal.dump
marshal_load = marshal.load
else:
unichr = unichr
text_type = unicode
@@ -59,11 +53,13 @@ def reraise(tp, value, tb=None):
import cPickle as pickle
from cStringIO import StringIO as BytesIO, StringIO
NativeStringIO = BytesIO
exec('def reraise(tp, value, tb=None):\n raise tp, value, tb')
exec("def reraise(tp, value, tb=None):\n raise tp, value, tb")
from itertools import imap, izip, ifilter
intern = intern
def implements_iterator(cls):
@@ -73,14 +69,25 @@ def implements_iterator(cls):
def implements_to_string(cls):
cls.__unicode__ = cls.__str__
cls.__str__ = lambda x: x.__unicode__().encode('utf-8')
cls.__str__ = lambda x: x.__unicode__().encode("utf-8")
return cls
def encode_filename(filename):
if isinstance(filename, unicode):
return filename.encode('utf-8')
return filename.encode("utf-8")
return filename
def marshal_dump(code, f):
if isinstance(f, file):
marshal.dump(code, f)
else:
f.write(marshal.dumps(code))
def marshal_load(f):
if isinstance(f, file):
return marshal.load(f)
return marshal.loads(f.read())
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
@@ -90,10 +97,36 @@ def with_metaclass(meta, *bases):
class metaclass(type):
def __new__(cls, name, this_bases, d):
return meta(name, bases, d)
return type.__new__(metaclass, 'temporary_class', (), {})
return type.__new__(metaclass, "temporary_class", (), {})
try:
from urllib.parse import quote_from_bytes as url_quote
except ImportError:
from urllib import quote as url_quote
try:
from collections import abc
except ImportError:
import collections as abc
try:
from os import fspath
except ImportError:
try:
from pathlib import PurePath
except ImportError:
PurePath = None
def fspath(path):
if hasattr(path, "__fspath__"):
return path.__fspath__()
# Python 3.5 doesn't have __fspath__ yet, use str.
if PurePath is not None and isinstance(path, PurePath):
return str(path)
return path


@@ -1,2 +1,6 @@
import re
# generated by scripts/generate_identifier_pattern.py
pattern = '·̀-ͯ·҃-֑҇-ׇֽֿׁׂׅׄؐ-ًؚ-ٰٟۖ-ۜ۟-۪ۤۧۨ-ܑۭܰ-݊ަ-ް߫-߳ࠖ-࠙ࠛ-ࠣࠥ-ࠧࠩ-࡙࠭-࡛ࣔ-ࣣ࣡-ःऺ-़ा-ॏ॑-ॗॢॣঁ-ঃ়া-ৄেৈো-্ৗৢৣਁ-ਃ਼ਾ-ੂੇੈੋ-੍ੑੰੱੵઁ-ઃ઼ા-ૅે-ૉો-્ૢૣଁ-ଃ଼ା-ୄେୈୋ-୍ୖୗୢୣஂா-ூெ-ைொ-்ௗఀ-ఃా-ౄె-ైొ-్ౕౖౢౣಁ-ಃ಼ಾ-ೄೆ-ೈೊ-್ೕೖೢೣഁ-ഃാ-ൄെ-ൈൊ-്ൗൢൣංඃ්ා-ුූෘ-ෟෲෳัิ-ฺ็-๎ັິ-ູົຼ່-ໍ༹༘༙༵༷༾༿ཱ-྄྆྇ྍ-ྗྙ-ྼ࿆ါ-ှၖ-ၙၞ-ၠၢ-ၤၧ-ၭၱ-ၴႂ-ႍႏႚ-ႝ፝-፟ᜒ-᜔ᜲ-᜴ᝒᝓᝲᝳ឴-៓៝᠋-᠍ᢅᢆᢩᤠ-ᤫᤰ-᤻ᨗ-ᨛᩕ-ᩞ᩠-᩿᩼᪰-᪽ᬀ-ᬄ᬴-᭄᭫-᭳ᮀ-ᮂᮡ-ᮭ᯦-᯳ᰤ-᰷᳐-᳔᳒-᳨᳭ᳲ-᳴᳸᳹᷀-᷵᷻-᷿‿⁀⁔⃐-⃥⃜⃡-⃰℘℮⳯-⵿⳱ⷠ-〪ⷿ-゙゚〯꙯ꙴ-꙽ꚞꚟ꛰꛱ꠂ꠆ꠋꠣ-ꠧꢀꢁꢴ-ꣅ꣠-꣱ꤦ-꤭ꥇ-꥓ꦀ-ꦃ꦳-꧀ꧥꨩ-ꨶꩃꩌꩍꩻ-ꩽꪰꪲ-ꪴꪷꪸꪾ꪿꫁ꫫ-ꫯꫵ꫶ꯣ-ꯪ꯬꯭ﬞ︀-️︠-︯︳︴﹍-﹏_𐇽𐋠𐍶-𐍺𐨁-𐨃𐨅𐨆𐨌-𐨏𐨸-𐨿𐨺𐫦𐫥𑀀-𑀂𑀸-𑁆𑁿-𑂂𑂰-𑂺𑄀-𑄂𑄧-𑅳𑄴𑆀-𑆂𑆳-𑇊𑇀-𑇌𑈬-𑈷𑈾𑋟-𑋪𑌀-𑌃𑌼𑌾-𑍄𑍇𑍈𑍋-𑍍𑍗𑍢𑍣𑍦-𑍬𑍰-𑍴𑐵-𑑆𑒰-𑓃𑖯-𑖵𑖸-𑗀𑗜𑗝𑘰-𑙀𑚫-𑚷𑜝-𑜫𑰯-𑰶𑰸-𑰿𑲒-𑲧𑲩-𑲶𖫰-𖫴𖬰-𖬶𖽑-𖽾𖾏-𖾒𛲝𛲞𝅥-𝅩𝅭-𝅲𝅻-𝆂𝆅-𝆋𝆪-𝆭𝉂-𝉄𝨀-𝨶𝨻-𝩬𝩵𝪄𝪛-𝪟𝪡-𝪯𞀀-𞀆𞀈-𞀘𞀛-𞀡𞀣𞀤𞀦-𞣐𞀪-𞣖𞥄-𞥊󠄀-󠇯'
pattern = re.compile(
r"[\w·̀-ͯ·҃-֑҇-ׇֽֿׁׂׅׄؐ-ًؚ-ٰٟۖ-ۜ۟-۪ۤۧۨ-ܑۭܰ-݊ަ-ް߫-߳ࠖ-࠙ࠛ-ࠣࠥ-ࠧࠩ-࡙࠭-࡛ࣔ-ࣣ࣡-ःऺ-़ा-ॏ॑-ॗॢॣঁ-ঃ়া-ৄেৈো-্ৗৢৣਁ-ਃ਼ਾ-ੂੇੈੋ-੍ੑੰੱੵઁ-ઃ઼ા-ૅે-ૉો-્ૢૣଁ-ଃ଼ା-ୄେୈୋ-୍ୖୗୢୣஂா-ூெ-ைொ-்ௗఀ-ఃా-ౄె-ైొ-్ౕౖౢౣಁ-ಃ಼ಾ-ೄೆ-ೈೊ-್ೕೖೢೣഁ-ഃാ-ൄെ-ൈൊ-്ൗൢൣංඃ්ා-ුූෘ-ෟෲෳัิ-ฺ็-๎ັິ-ູົຼ່-ໍ༹༘༙༵༷༾༿ཱ-྄྆྇ྍ-ྗྙ-ྼ࿆ါ-ှၖ-ၙၞ-ၠၢ-ၤၧ-ၭၱ-ၴႂ-ႍႏႚ-ႝ፝-፟ᜒ-᜔ᜲ-᜴ᝒᝓᝲᝳ឴-៓៝᠋-᠍ᢅᢆᢩᤠ-ᤫᤰ-᤻ᨗ-ᨛᩕ-ᩞ᩠-᩿᩼᪰-᪽ᬀ-ᬄ᬴-᭄᭫-᭳ᮀ-ᮂᮡ-ᮭ᯦-᯳ᰤ-᰷᳐-᳔᳒-᳨᳭ᳲ-᳴᳸᳹᷀-᷵᷻-᷿‿⁀⁔⃐-⃥⃜⃡-⃰℘℮⳯-⵿⳱ⷠ-〪ⷿ-゙゚〯꙯ꙴ-꙽ꚞꚟ꛰꛱ꠂ꠆ꠋꠣ-ꠧꢀꢁꢴ-ꣅ꣠-꣱ꤦ-꤭ꥇ-꥓ꦀ-ꦃ꦳-꧀ꧥꨩ-ꨶꩃꩌꩍꩻ-ꩽꪰꪲ-ꪴꪷꪸꪾ꪿꫁ꫫ-ꫯꫵ꫶ꯣ-ꯪ꯬꯭ﬞ︀-️︠-︯︳︴﹍-﹏_𐇽𐋠𐍶-𐍺𐨁-𐨃𐨅𐨆𐨌-𐨏𐨸-𐨿𐨺𐫦𐫥𑀀-𑀂𑀸-𑁆𑁿-𑂂𑂰-𑂺𑄀-𑄂𑄧-𑅳𑄴𑆀-𑆂𑆳-𑇊𑇀-𑇌𑈬-𑈷𑈾𑋟-𑋪𑌀-𑌃𑌼𑌾-𑍄𑍇𑍈𑍋-𑍍𑍗𑍢𑍣𑍦-𑍬𑍰-𑍴𑐵-𑑆𑒰-𑓃𑖯-𑖵𑖸-𑗀𑗜𑗝𑘰-𑙀𑚫-𑚷𑜝-𑜫𑰯-𑰶𑰸-𑰿𑲒-𑲧𑲩-𑲶𖫰-𖫴𖬰-𖬶𖽑-𖽾𖾏-𖾒𛲝𛲞𝅥-𝅩𝅭-𝅲𝅻-𝆂𝆅-𝆋𝆪-𝆭𝉂-𝉄𝨀-𝨶𝨻-𝩬𝩵𝪄𝪛-𝪟𝪡-𝪯𞀀-𞀆𞀈-𞀘𞀛-𞀡𞀣𞀤𞀦-𞣐𞀪-𞣖𞥄-𞥊󠄀-󠇯]+" # noqa: B950
)


@@ -1,12 +1,13 @@
from functools import wraps
from jinja2.asyncsupport import auto_aiter
from jinja2 import filters
from . import filters
from .asyncsupport import auto_aiter
from .asyncsupport import auto_await
async def auto_to_seq(value):
seq = []
if hasattr(value, '__aiter__'):
if hasattr(value, "__aiter__"):
async for item in value:
seq.append(item)
else:
@@ -16,8 +17,7 @@ async def auto_to_seq(value):
async def async_select_or_reject(args, kwargs, modfunc, lookup_attr):
seq, func = filters.prepare_select_or_reject(
args, kwargs, modfunc, lookup_attr)
seq, func = filters.prepare_select_or_reject(args, kwargs, modfunc, lookup_attr)
if seq:
async for item in auto_aiter(seq):
if func(item):
@@ -26,14 +26,19 @@ async def async_select_or_reject(args, kwargs, modfunc, lookup_attr):
def dualfilter(normal_filter, async_filter):
wrap_evalctx = False
if getattr(normal_filter, 'environmentfilter', False):
is_async = lambda args: args[0].is_async
if getattr(normal_filter, "environmentfilter", False) is True:
def is_async(args):
return args[0].is_async
wrap_evalctx = False
else:
if not getattr(normal_filter, 'evalcontextfilter', False) and \
not getattr(normal_filter, 'contextfilter', False):
wrap_evalctx = True
is_async = lambda args: args[0].environment.is_async
has_evalctxfilter = getattr(normal_filter, "evalcontextfilter", False) is True
has_ctxfilter = getattr(normal_filter, "contextfilter", False) is True
wrap_evalctx = not has_evalctxfilter and not has_ctxfilter
def is_async(args):
return args[0].environment.is_async
@wraps(normal_filter)
def wrapper(*args, **kwargs):
@@ -55,6 +60,7 @@ def wrapper(*args, **kwargs):
def asyncfiltervariant(original):
def decorator(f):
return dualfilter(original, f)
return decorator
@@ -63,19 +69,22 @@ async def do_first(environment, seq):
try:
return await auto_aiter(seq).__anext__()
except StopAsyncIteration:
return environment.undefined('No first item, sequence was empty.')
return environment.undefined("No first item, sequence was empty.")
@asyncfiltervariant(filters.do_groupby)
async def do_groupby(environment, value, attribute):
expr = filters.make_attrgetter(environment, attribute)
return [filters._GroupTuple(key, await auto_to_seq(values))
for key, values in filters.groupby(sorted(
await auto_to_seq(value), key=expr), expr)]
return [
filters._GroupTuple(key, await auto_to_seq(values))
for key, values in filters.groupby(
sorted(await auto_to_seq(value), key=expr), expr
)
]
@asyncfiltervariant(filters.do_join)
async def do_join(eval_ctx, value, d=u'', attribute=None):
async def do_join(eval_ctx, value, d=u"", attribute=None):
return filters.do_join(eval_ctx, await auto_to_seq(value), d, attribute)
@@ -109,7 +118,7 @@ async def do_map(*args, **kwargs):
seq, func = filters.prepare_map(args, kwargs)
if seq:
async for item in auto_aiter(seq):
yield func(item)
yield await auto_await(func(item))
@asyncfiltervariant(filters.do_sum)
@@ -118,7 +127,10 @@ async def do_sum(environment, iterable, attribute=None, start=0):
if attribute is not None:
func = filters.make_attrgetter(environment, attribute)
else:
func = lambda x: x
def func(x):
return x
async for item in auto_aiter(iterable):
rv += func(item)
return rv
@@ -130,17 +142,17 @@ async def do_slice(value, slices, fill_with=None):
ASYNC_FILTERS = {
'first': do_first,
'groupby': do_groupby,
'join': do_join,
'list': do_list,
"first": do_first,
"groupby": do_groupby,
"join": do_join,
"list": do_list,
# we intentionally do not support do_last because that would be
# ridiculous
'reject': do_reject,
'rejectattr': do_rejectattr,
'map': do_map,
'select': do_select,
'selectattr': do_selectattr,
'sum': do_sum,
'slice': do_slice,
"reject": do_reject,
"rejectattr": do_rejectattr,
"map": do_map,
"select": do_select,
"selectattr": do_selectattr,
"sum": do_sum,
"slice": do_slice,
}


@@ -1,29 +1,27 @@
# -*- coding: utf-8 -*-
"""The code for async support. Importing this patches Jinja on supported
Python versions.
"""
jinja2.asyncsupport
~~~~~~~~~~~~~~~~~~~
Has all the code for async support which is implemented as a patch
for supported Python versions.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
import sys
import asyncio
import inspect
from functools import update_wrapper
from jinja2.utils import concat, internalcode, Markup
from jinja2.environment import TemplateModule
from jinja2.runtime import LoopContextBase, _last_iteration
from markupsafe import Markup
from .environment import TemplateModule
from .runtime import LoopContext
from .utils import concat
from .utils import internalcode
from .utils import missing
async def concat_async(async_gen):
rv = []
async def collect():
async for event in async_gen:
rv.append(event)
await collect()
return concat(rv)
@@ -34,10 +32,7 @@ async def generate_async(self, *args, **kwargs):
async for event in self.root_render_func(self.new_context(vars)):
yield event
except Exception:
exc_info = sys.exc_info()
else:
return
yield self.environment.handle_exception(exc_info, True)
yield self.environment.handle_exception()
def wrap_generate_func(original_generate):
@@ -48,17 +43,18 @@ def _convert_generator(self, loop, args, kwargs):
yield loop.run_until_complete(async_gen.__anext__())
except StopAsyncIteration:
pass
def generate(self, *args, **kwargs):
if not self.environment.is_async:
return original_generate(self, *args, **kwargs)
return _convert_generator(self, asyncio.get_event_loop(), args, kwargs)
return update_wrapper(generate, original_generate)
async def render_async(self, *args, **kwargs):
if not self.environment.is_async:
raise RuntimeError('The environment was not created with async mode '
'enabled.')
raise RuntimeError("The environment was not created with async mode enabled.")
vars = dict(*args, **kwargs)
ctx = self.new_context(vars)
@@ -66,8 +62,7 @@ async def render_async(self, *args, **kwargs):
try:
return await concat_async(self.root_render_func(ctx))
except Exception:
exc_info = sys.exc_info()
return self.environment.handle_exception(exc_info, True)
return self.environment.handle_exception()
def wrap_render_func(original_render):
@@ -76,6 +71,7 @@ def render(self, *args, **kwargs):
return original_render(self, *args, **kwargs)
loop = asyncio.get_event_loop()
return loop.run_until_complete(self.render_async(*args, **kwargs))
return update_wrapper(render, original_render)
@@ -109,6 +105,7 @@ def _invoke(self, arguments, autoescape):
if not self._environment.is_async:
return original_invoke(self, arguments, autoescape)
return async_invoke(self, arguments, autoescape)
return update_wrapper(_invoke, original_invoke)
@@ -124,9 +121,9 @@ def wrap_default_module(original_default_module):
@internalcode
def _get_default_module(self):
if self.environment.is_async:
raise RuntimeError('Template module attribute is unavailable '
'in async mode')
raise RuntimeError("Template module attribute is unavailable in async mode")
return original_default_module(self)
return _get_default_module
@@ -139,30 +136,30 @@ async def make_module_async(self, vars=None, shared=False, locals=None):
def patch_template():
from jinja2 import Template
from . import Template
Template.generate = wrap_generate_func(Template.generate)
Template.generate_async = update_wrapper(
generate_async, Template.generate_async)
Template.render_async = update_wrapper(
render_async, Template.render_async)
Template.generate_async = update_wrapper(generate_async, Template.generate_async)
Template.render_async = update_wrapper(render_async, Template.render_async)
Template.render = wrap_render_func(Template.render)
Template._get_default_module = wrap_default_module(
Template._get_default_module)
Template._get_default_module = wrap_default_module(Template._get_default_module)
Template._get_default_module_async = get_default_module_async
Template.make_module_async = update_wrapper(
make_module_async, Template.make_module_async)
make_module_async, Template.make_module_async
)
def patch_runtime():
from jinja2.runtime import BlockReference, Macro
BlockReference.__call__ = wrap_block_reference_call(
BlockReference.__call__)
from .runtime import BlockReference, Macro
BlockReference.__call__ = wrap_block_reference_call(BlockReference.__call__)
Macro._invoke = wrap_macro_invoke(Macro._invoke)
def patch_filters():
from jinja2.filters import FILTERS
from jinja2.asyncfilters import ASYNC_FILTERS
from .filters import FILTERS
from .asyncfilters import ASYNC_FILTERS
FILTERS.update(ASYNC_FILTERS)
@@ -179,7 +176,7 @@ async def auto_await(value):
async def auto_aiter(iterable):
if hasattr(iterable, '__aiter__'):
if hasattr(iterable, "__aiter__"):
async for item in iterable:
yield item
return
@@ -187,70 +184,81 @@ async def auto_aiter(iterable):
yield item
class AsyncLoopContext(LoopContextBase):
def __init__(self, async_iterator, undefined, after, length, recurse=None,
depth0=0):
LoopContextBase.__init__(self, undefined, recurse, depth0)
self._async_iterator = async_iterator
self._after = after
self._length = length
class AsyncLoopContext(LoopContext):
_to_iterator = staticmethod(auto_aiter)
@property
def length(self):
if self._length is None:
raise TypeError('Loop length for some iterators cannot be '
'lazily calculated in async mode')
async def length(self):
if self._length is not None:
return self._length
try:
self._length = len(self._iterable)
except TypeError:
iterable = [x async for x in self._iterator]
self._iterator = self._to_iterator(iterable)
self._length = len(iterable) + self.index + (self._after is not missing)
return self._length
def __aiter__(self):
return AsyncLoopContextIterator(self)
@property
async def revindex0(self):
return await self.length - self.index
@property
async def revindex(self):
return await self.length - self.index0
class AsyncLoopContextIterator(object):
__slots__ = ('context',)
async def _peek_next(self):
if self._after is not missing:
return self._after
def __init__(self, context):
self.context = context
try:
self._after = await self._iterator.__anext__()
except StopAsyncIteration:
self._after = missing
return self._after
@property
async def last(self):
return await self._peek_next() is missing
@property
async def nextitem(self):
rv = await self._peek_next()
if rv is missing:
return self._undefined("there is no next item")
return rv
def __aiter__(self):
return self
async def __anext__(self):
ctx = self.context
ctx.index0 += 1
if ctx._after is _last_iteration:
raise StopAsyncIteration()
ctx._before = ctx._current
ctx._current = ctx._after
try:
ctx._after = await ctx._async_iterator.__anext__()
except StopAsyncIteration:
ctx._after = _last_iteration
return ctx._current, ctx
if self._after is not missing:
rv = self._after
self._after = missing
else:
rv = await self._iterator.__anext__()
self.index0 += 1
self._before = self._current
self._current = rv
return rv, self
async def make_async_loop_context(iterable, undefined, recurse=None, depth0=0):
# Length is more complicated and less efficient in async mode. The
# reason for this is that we cannot know if length will be used
# upfront but because length is a property we cannot lazily execute it
# later. This means that we need to buffer it up and measure :(
#
# We however only do this for actual iterators, not for async
# iterators as blocking here does not seem like the best idea in the
# world.
try:
length = len(iterable)
except (TypeError, AttributeError):
if not hasattr(iterable, '__aiter__'):
iterable = tuple(iterable)
length = len(iterable)
else:
length = None
async_iterator = auto_aiter(iterable)
try:
after = await async_iterator.__anext__()
except StopAsyncIteration:
after = _last_iteration
return AsyncLoopContext(async_iterator, undefined, after, length, recurse,
depth0)
import warnings
warnings.warn(
"This template must be recompiled with at least Jinja 2.11, or"
" it will fail in 3.0.",
DeprecationWarning,
stacklevel=2,
)
return AsyncLoopContext(iterable, undefined, recurse, depth0)
patch_all()
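# A minimal usage sketch of the async support wired up by patch_all() above:
# constructing an Environment with enable_async=True (on Python 3.6+ with
# async generators) imports this module, which patches Template, the runtime
# loop context and the filter registry so templates can be awaited.
import asyncio
from jinja2 import Environment

env = Environment(enable_async=True)
tmpl = env.from_string("{% for x in items %}{{ x }} {% endfor %}")

async def main():
    return await tmpl.render_async(items=[1, 2, 3])

print(asyncio.get_event_loop().run_until_complete(main()))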

@@ -1,60 +1,37 @@
# -*- coding: utf-8 -*-
"""The optional bytecode cache system. This is useful if you have very
complex template situations and the compilation of all those templates
slows down your application too much.
Situations where this is useful are often forking web applications that
are initialized on the first request.
"""
jinja2.bccache
~~~~~~~~~~~~~~
This module implements the bytecode cache system Jinja is optionally
using. This is useful if you have very complex template situations and
the compiliation of all those templates slow down your application too
much.
Situations where this is useful are often forking web applications that
are initialized on the first request.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD.
"""
from os import path, listdir
import os
import sys
import stat
import errno
import marshal
import tempfile
import fnmatch
import os
import stat
import sys
import tempfile
from hashlib import sha1
from jinja2.utils import open_if_exists
from jinja2._compat import BytesIO, pickle, PY2, text_type
from os import listdir
from os import path
from ._compat import BytesIO
from ._compat import marshal_dump
from ._compat import marshal_load
from ._compat import pickle
from ._compat import text_type
from .utils import open_if_exists
# marshal works better on 3.x, one hack less required
if not PY2:
marshal_dump = marshal.dump
marshal_load = marshal.load
else:
def marshal_dump(code, f):
if isinstance(f, file):
marshal.dump(code, f)
else:
f.write(marshal.dumps(code))
def marshal_load(f):
if isinstance(f, file):
return marshal.load(f)
return marshal.loads(f.read())
bc_version = 3
# magic version used to only change with new jinja versions. With 2.6
# we change this to also take Python version changes into account. The
# reason for this is that Python tends to segfault if fed earlier bytecode
# versions because someone thought it would be a good idea to reuse opcodes
# or make Python incompatible with earlier versions.
bc_magic = 'j2'.encode('ascii') + \
pickle.dumps(bc_version, 2) + \
pickle.dumps((sys.version_info[0] << 24) | sys.version_info[1])
bc_version = 4
# Magic bytes to identify Jinja bytecode cache files. Contains the
# Python major and minor version to avoid loading incompatible bytecode
# if a project upgrades its Python version.
bc_magic = (
b"j2"
+ pickle.dumps(bc_version, 2)
+ pickle.dumps((sys.version_info[0] << 24) | sys.version_info[1], 2)
)
class Bucket(object):
@@ -98,7 +75,7 @@ def load_bytecode(self, f):
def write_bytecode(self, f):
"""Dump the bytecode into the file or file like object passed."""
if self.code is None:
raise TypeError('can\'t write empty bucket')
raise TypeError("can't write empty bucket")
f.write(bc_magic)
pickle.dump(self.checksum, f, 2)
marshal_dump(self.code, f)
@@ -140,7 +117,7 @@ def dump_bytecode(self, bucket):
bucket.write_bytecode(f)
A more advanced version of a filesystem based bytecode cache is part of
Jinja2.
Jinja.
"""
def load_bytecode(self, bucket):
@@ -158,24 +135,24 @@ def dump_bytecode(self, bucket):
raise NotImplementedError()
def clear(self):
"""Clears the cache. This method is not used by Jinja2 but should be
"""Clears the cache. This method is not used by Jinja but should be
implemented to allow applications to clear the bytecode cache used
by a particular environment.
"""
def get_cache_key(self, name, filename=None):
"""Returns the unique hash key for this template name."""
hash = sha1(name.encode('utf-8'))
hash = sha1(name.encode("utf-8"))
if filename is not None:
filename = '|' + filename
filename = "|" + filename
if isinstance(filename, text_type):
filename = filename.encode('utf-8')
filename = filename.encode("utf-8")
hash.update(filename)
return hash.hexdigest()
def get_source_checksum(self, source):
"""Returns a checksum for the source."""
return sha1(source.encode('utf-8')).hexdigest()
return sha1(source.encode("utf-8")).hexdigest()
def get_bucket(self, environment, name, filename, source):
"""Return a cache bucket for the given template. All arguments are
@@ -210,7 +187,7 @@ class FileSystemBytecodeCache(BytecodeCache):
This bytecode cache supports clearing of the cache using the clear method.
"""
def __init__(self, directory=None, pattern='__jinja2_%s.cache'):
def __init__(self, directory=None, pattern="__jinja2_%s.cache"):
if directory is None:
directory = self._get_default_cache_dir()
self.directory = directory
@@ -218,19 +195,21 @@ def __init__(self, directory=None, pattern='__jinja2_%s.cache'):
def _get_default_cache_dir(self):
def _unsafe_dir():
raise RuntimeError('Cannot determine safe temp directory. You '
'need to explicitly provide one.')
raise RuntimeError(
"Cannot determine safe temp directory. You "
"need to explicitly provide one."
)
tmpdir = tempfile.gettempdir()
# On windows the temporary directory is used specific unless
# explicitly forced otherwise. We can just use that.
if os.name == 'nt':
if os.name == "nt":
return tmpdir
if not hasattr(os, 'getuid'):
if not hasattr(os, "getuid"):
_unsafe_dir()
dirname = '_jinja2-cache-%d' % os.getuid()
dirname = "_jinja2-cache-%d" % os.getuid()
actual_dir = os.path.join(tmpdir, dirname)
try:
@@ -241,18 +220,22 @@ def _unsafe_dir():
try:
os.chmod(actual_dir, stat.S_IRWXU)
actual_dir_stat = os.lstat(actual_dir)
if actual_dir_stat.st_uid != os.getuid() \
or not stat.S_ISDIR(actual_dir_stat.st_mode) \
or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU:
if (
actual_dir_stat.st_uid != os.getuid()
or not stat.S_ISDIR(actual_dir_stat.st_mode)
or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU
):
_unsafe_dir()
except OSError as e:
if e.errno != errno.EEXIST:
raise
actual_dir_stat = os.lstat(actual_dir)
if actual_dir_stat.st_uid != os.getuid() \
or not stat.S_ISDIR(actual_dir_stat.st_mode) \
or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU:
if (
actual_dir_stat.st_uid != os.getuid()
or not stat.S_ISDIR(actual_dir_stat.st_mode)
or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU
):
_unsafe_dir()
return actual_dir
@@ -261,7 +244,7 @@ def _get_cache_filename(self, bucket):
return path.join(self.directory, self.pattern % bucket.key)
def load_bytecode(self, bucket):
f = open_if_exists(self._get_cache_filename(bucket), 'rb')
f = open_if_exists(self._get_cache_filename(bucket), "rb")
if f is not None:
try:
bucket.load_bytecode(f)
@@ -269,7 +252,7 @@ def load_bytecode(self, bucket):
f.close()
def dump_bytecode(self, bucket):
f = open(self._get_cache_filename(bucket), 'wb')
f = open(self._get_cache_filename(bucket), "wb")
try:
bucket.write_bytecode(f)
finally:
@@ -280,7 +263,8 @@ def clear(self):
# write access on the file system and the function does not exist
# normally.
from os import remove
files = fnmatch.filter(listdir(self.directory), self.pattern % '*')
files = fnmatch.filter(listdir(self.directory), self.pattern % "*")
for filename in files:
try:
remove(path.join(self.directory, filename))
@@ -296,9 +280,8 @@ class MemcachedBytecodeCache(BytecodeCache):
Libraries compatible with this class:
- `werkzeug <http://werkzeug.pocoo.org/>`_.contrib.cache
- `python-memcached <https://www.tummy.com/Community/software/python-memcached/>`_
- `cmemcache <http://gijsbert.org/cmemcache/>`_
- `cachelib <https://github.com/pallets/cachelib>`_
- `python-memcached <https://pypi.org/project/python-memcached/>`_
(Unfortunately the django cache interface is not compatible because it
does not support storing binary data, only unicode. You can however pass
@@ -334,8 +317,13 @@ class MemcachedBytecodeCache(BytecodeCache):
`ignore_memcache_errors` parameter.
"""
def __init__(self, client, prefix='jinja2/bytecode/', timeout=None,
ignore_memcache_errors=True):
def __init__(
self,
client,
prefix="jinja2/bytecode/",
timeout=None,
ignore_memcache_errors=True,
):
self.client = client
self.prefix = prefix
self.timeout = timeout
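# A minimal sketch of plugging a bytecode cache into an Environment; the
# "templates" and "/tmp/jinja_cache" paths are illustrative. Compiled template
# bytecode is persisted and reused across processes instead of being
# regenerated on every startup.
from jinja2 import Environment, FileSystemLoader, FileSystemBytecodeCache

bcc = FileSystemBytecodeCache(directory="/tmp/jinja_cache", pattern="__jinja2_%s.cache")
env = Environment(loader=FileSystemLoader("templates"), bytecode_cache=bcc)
env.get_template("index.html")  # first load compiles and fills the cache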

File diff suppressed because it is too large

@@ -1,17 +1,6 @@
# -*- coding: utf-8 -*-
"""
jinja.constants
~~~~~~~~~~~~~~~
Various constants.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
#: list of lorem ipsum words used by the lipsum() helper function
LOREM_IPSUM_WORDS = u'''\
LOREM_IPSUM_WORDS = u"""\
a ac accumsan ad adipiscing aenean aliquam aliquet amet ante aptent arcu at
auctor augue bibendum blandit class commodo condimentum congue consectetuer
consequat conubia convallis cras cubilia cum curabitur curae cursus dapibus
@@ -29,4 +18,4 @@
sociis sociosqu sodales sollicitudin suscipit suspendisse taciti tellus tempor
tempus tincidunt torquent tortor tristique turpis ullamcorper ultrices
ultricies urna ut varius vehicula vel velit venenatis vestibulum vitae vivamus
viverra volutpat vulputate'''
viverra volutpat vulputate"""
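# LOREM_IPSUM_WORDS feeds the lipsum() helper exposed to templates; a quick,
# illustrative use (output is pseudo-random placeholder text):
from jinja2 import Environment

print(Environment().from_string("{{ lipsum(n=1, html=False) }}").render())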

@@ -1,372 +1,268 @@
# -*- coding: utf-8 -*-
"""
jinja2.debug
~~~~~~~~~~~~
Implements the debug interface for Jinja. This module does some pretty
ugly stuff with the Python traceback system in order to achieve tracebacks
with correct line numbers, locals and contents.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
import sys
import traceback
from types import TracebackType, CodeType
from jinja2.utils import missing, internal_code
from jinja2.exceptions import TemplateSyntaxError
from jinja2._compat import iteritems, reraise, PY2
from types import CodeType
# on pypy we can take advantage of transparent proxies
try:
from __pypy__ import tproxy
except ImportError:
tproxy = None
from . import TemplateSyntaxError
from ._compat import PYPY
from .utils import internal_code
from .utils import missing
# how does the raise helper look like?
try:
exec("raise TypeError, 'foo'")
except SyntaxError:
raise_helper = 'raise __jinja_exception__[1]'
except TypeError:
raise_helper = 'raise __jinja_exception__[0], __jinja_exception__[1]'
def rewrite_traceback_stack(source=None):
"""Rewrite the current exception to replace any tracebacks from
within compiled template code with tracebacks that look like they
came from the template source.
This must be called within an ``except`` block.
class TracebackFrameProxy(object):
"""Proxies a traceback frame."""
def __init__(self, tb):
self.tb = tb
self._tb_next = None
@property
def tb_next(self):
return self._tb_next
def set_next(self, next):
if tb_set_next is not None:
try:
tb_set_next(self.tb, next and next.tb or None)
except Exception:
# this function can fail due to all the hackery it does
# on various python implementations. We just catch errors
# down and ignore them if necessary.
pass
self._tb_next = next
@property
def is_jinja_frame(self):
return '__jinja_template__' in self.tb.tb_frame.f_globals
def __getattr__(self, name):
return getattr(self.tb, name)
def make_frame_proxy(frame):
proxy = TracebackFrameProxy(frame)
if tproxy is None:
return proxy
def operation_handler(operation, *args, **kwargs):
if operation in ('__getattribute__', '__getattr__'):
return getattr(proxy, args[0])
elif operation == '__setattr__':
proxy.__setattr__(*args, **kwargs)
else:
return getattr(proxy, operation)(*args, **kwargs)
return tproxy(TracebackType, operation_handler)
class ProcessedTraceback(object):
"""Holds a Jinja preprocessed traceback for printing or reraising."""
def __init__(self, exc_type, exc_value, frames):
assert frames, 'no frames for this traceback?'
self.exc_type = exc_type
self.exc_value = exc_value
self.frames = frames
# newly concatenate the frames (which are proxies)
prev_tb = None
for tb in self.frames:
if prev_tb is not None:
prev_tb.set_next(tb)
prev_tb = tb
prev_tb.set_next(None)
def render_as_text(self, limit=None):
"""Return a string with the traceback."""
lines = traceback.format_exception(self.exc_type, self.exc_value,
self.frames[0], limit=limit)
return ''.join(lines).rstrip()
def render_as_html(self, full=False):
"""Return a unicode string with the traceback as rendered HTML."""
from jinja2.debugrenderer import render_traceback
return u'%s\n\n<!--\n%s\n-->' % (
render_traceback(self, full=full),
self.render_as_text().decode('utf-8', 'replace')
)
@property
def is_template_syntax_error(self):
"""`True` if this is a template syntax error."""
return isinstance(self.exc_value, TemplateSyntaxError)
@property
def exc_info(self):
"""Exception info tuple with a proxy around the frame objects."""
return self.exc_type, self.exc_value, self.frames[0]
@property
def standard_exc_info(self):
"""Standard python exc_info for re-raising"""
tb = self.frames[0]
# the frame will be an actual traceback (or transparent proxy) if
# we are on pypy or a python implementation with support for tproxy
if type(tb) is not TracebackType:
tb = tb.tb
return self.exc_type, self.exc_value, tb
def make_traceback(exc_info, source_hint=None):
"""Creates a processed traceback object from the exc_info."""
exc_type, exc_value, tb = exc_info
if isinstance(exc_value, TemplateSyntaxError):
exc_info = translate_syntax_error(exc_value, source_hint)
initial_skip = 0
else:
initial_skip = 1
return translate_exception(exc_info, initial_skip)
def translate_syntax_error(error, source=None):
"""Rewrites a syntax error to please traceback systems."""
error.source = source
error.translated = True
exc_info = (error.__class__, error, None)
filename = error.filename
if filename is None:
filename = '<unknown>'
return fake_exc_info(exc_info, filename, error.lineno)
def translate_exception(exc_info, initial_skip=0):
"""If passed an exc_info it will automatically rewrite the exceptions
all the way down to the correct line numbers and frames.
:param exc_info: A :meth:`sys.exc_info` tuple. If not provided,
the current ``exc_info`` is used.
:param source: For ``TemplateSyntaxError``, the original source if
known.
:return: A :meth:`sys.exc_info` tuple that can be re-raised.
"""
tb = exc_info[2]
frames = []
exc_type, exc_value, tb = sys.exc_info()
# skip some internal frames if wanted
for x in range(initial_skip):
if tb is not None:
tb = tb.tb_next
initial_tb = tb
if isinstance(exc_value, TemplateSyntaxError) and not exc_value.translated:
exc_value.translated = True
exc_value.source = source
try:
# Remove the old traceback on Python 3, otherwise the frames
# from the compiler still show up.
exc_value.with_traceback(None)
except AttributeError:
pass
# Outside of runtime, so the frame isn't executing template
# code, but it still needs to point at the template.
tb = fake_traceback(
exc_value, None, exc_value.filename or "<unknown>", exc_value.lineno
)
else:
# Skip the frame for the render function.
tb = tb.tb_next
stack = []
# Build the stack of traceback object, replacing any in template
# code with the source file and line information.
while tb is not None:
# skip frames decorated with @internalcode. These are internal
# calls we can't avoid and that are useless in template debugging
# output.
# Skip frames decorated with @internalcode. These are internal
# calls that aren't useful in template debugging output.
if tb.tb_frame.f_code in internal_code:
tb = tb.tb_next
continue
# save a reference to the next frame if we override the current
# one with a faked one.
next = tb.tb_next
template = tb.tb_frame.f_globals.get("__jinja_template__")
# fake template exceptions
template = tb.tb_frame.f_globals.get('__jinja_template__')
if template is not None:
lineno = template.get_corresponding_lineno(tb.tb_lineno)
tb = fake_exc_info(exc_info[:2] + (tb,), template.filename,
lineno)[2]
fake_tb = fake_traceback(exc_value, tb, template.filename, lineno)
stack.append(fake_tb)
else:
stack.append(tb)
frames.append(make_frame_proxy(tb))
tb = next
tb = tb.tb_next
# if we don't have any exceptions in the frames left, we have to
# reraise it unchanged.
# XXX: can we backup here? when could this happen?
if not frames:
reraise(exc_info[0], exc_info[1], exc_info[2])
tb_next = None
return ProcessedTraceback(exc_info[0], exc_info[1], frames)
# Assign tb_next in reverse to avoid circular references.
for tb in reversed(stack):
tb_next = tb_set_next(tb, tb_next)
return exc_type, exc_value, tb_next
def get_jinja_locals(real_locals):
ctx = real_locals.get('context')
if ctx:
locals = ctx.get_all().copy()
def fake_traceback(exc_value, tb, filename, lineno):
"""Produce a new traceback object that looks like it came from the
template source instead of the compiled code. The filename, line
number, and location name will point to the template, and the local
variables will be the current template context.
:param exc_value: The original exception to be re-raised to create
the new traceback.
:param tb: The original traceback to get the local variables and
code info from.
:param filename: The template filename.
:param lineno: The line number in the template source.
"""
if tb is not None:
# Replace the real locals with the context that would be
# available at that point in the template.
locals = get_template_locals(tb.tb_frame.f_locals)
locals.pop("__jinja_exception__", None)
else:
locals = {}
globals = {
"__name__": filename,
"__file__": filename,
"__jinja_exception__": exc_value,
}
# Raise an exception at the correct line number.
code = compile("\n" * (lineno - 1) + "raise __jinja_exception__", filename, "exec")
# Build a new code object that points to the template file and
# replaces the location with a block name.
try:
location = "template"
if tb is not None:
function = tb.tb_frame.f_code.co_name
if function == "root":
location = "top-level template code"
elif function.startswith("block_"):
location = 'block "%s"' % function[6:]
# Collect arguments for the new code object. CodeType only
# accepts positional arguments, and arguments were inserted in
# new Python versions.
code_args = []
for attr in (
"argcount",
"posonlyargcount", # Python 3.8
"kwonlyargcount", # Python 3
"nlocals",
"stacksize",
"flags",
"code", # codestring
"consts", # constants
"names",
"varnames",
("filename", filename),
("name", location),
"firstlineno",
"lnotab",
"freevars",
"cellvars",
):
if isinstance(attr, tuple):
# Replace with given value.
code_args.append(attr[1])
continue
try:
# Copy original value if it exists.
code_args.append(getattr(code, "co_" + attr))
except AttributeError:
# Some arguments were added later.
continue
code = CodeType(*code_args)
except Exception:
# Some environments such as Google App Engine don't support
# modifying code objects.
pass
# Execute the new code, which is guaranteed to raise, and return
# the new traceback without this frame.
try:
exec(code, globals, locals)
except BaseException:
return sys.exc_info()[2].tb_next
def get_template_locals(real_locals):
"""Based on the runtime locals, get the context that would be
available at that point in the template.
"""
# Start with the current template context.
ctx = real_locals.get("context")
if ctx:
data = ctx.get_all().copy()
else:
data = {}
# Might be in a derived context that only sets local variables
# rather than pushing a context. Local variables follow the scheme
# l_depth_name. Find the highest-depth local that has a value for
# each name.
local_overrides = {}
for name, value in iteritems(real_locals):
if not name.startswith('l_') or value is missing:
for name, value in real_locals.items():
if not name.startswith("l_") or value is missing:
# Not a template variable, or no longer relevant.
continue
try:
_, depth, name = name.split('_', 2)
_, depth, name = name.split("_", 2)
depth = int(depth)
except ValueError:
continue
cur_depth = local_overrides.get(name, (-1,))[0]
if cur_depth < depth:
local_overrides[name] = (depth, value)
for name, (_, value) in iteritems(local_overrides):
# Modify the context with any derived context.
for name, (_, value) in local_overrides.items():
if value is missing:
locals.pop(name, None)
data.pop(name, None)
else:
locals[name] = value
data[name] = value
return locals
return data
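# An illustrative check of the l_<depth>_<name> scheme described above: the
# highest-depth binding of each name wins and `missing` values are dropped.
# The variable names and values here are made up; it assumes the upgraded
# jinja2.debug module from this diff.
from jinja2.debug import get_template_locals
from jinja2.utils import missing

assert get_template_locals(
    {"l_0_user": "admin", "l_1_user": "guest", "l_1_tmp": missing, "unrelated": 42}
) == {"user": "guest"}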
def fake_exc_info(exc_info, filename, lineno):
"""Helper for `translate_exception`."""
exc_type, exc_value, tb = exc_info
if sys.version_info >= (3, 7):
# tb_next is directly assignable as of Python 3.7
def tb_set_next(tb, tb_next):
tb.tb_next = tb_next
return tb
# figure the real context out
if tb is not None:
locals = get_jinja_locals(tb.tb_frame.f_locals)
# if there is a local called __jinja_exception__, we get
# rid of it to not break the debug functionality.
locals.pop('__jinja_exception__', None)
elif PYPY:
# PyPy might have special support, and won't work with ctypes.
try:
import tputil
except ImportError:
# Without tproxy support, use the original traceback.
def tb_set_next(tb, tb_next):
return tb
else:
locals = {}
# With tproxy support, create a proxy around the traceback that
# returns the new tb_next.
def tb_set_next(tb, tb_next):
def controller(op):
if op.opname == "__getattribute__" and op.args[0] == "tb_next":
return tb_next
# assamble fake globals we need
globals = {
'__name__': filename,
'__file__': filename,
'__jinja_exception__': exc_info[:2],
return op.delegate()
# we don't want to keep the reference to the template around
# to not cause circular dependencies, but we mark it as Jinja
# frame for the ProcessedTraceback
'__jinja_template__': None
}
# and fake the exception
code = compile('\n' * (lineno - 1) + raise_helper, filename, 'exec')
# if it's possible, change the name of the code. This won't work
# on some python environments such as google appengine
try:
if tb is None:
location = 'template'
else:
function = tb.tb_frame.f_code.co_name
if function == 'root':
location = 'top-level template code'
elif function.startswith('block_'):
location = 'block "%s"' % function[6:]
else:
location = 'template'
if PY2:
code = CodeType(0, code.co_nlocals, code.co_stacksize,
code.co_flags, code.co_code, code.co_consts,
code.co_names, code.co_varnames, filename,
location, code.co_firstlineno,
code.co_lnotab, (), ())
else:
code = CodeType(0, code.co_kwonlyargcount,
code.co_nlocals, code.co_stacksize,
code.co_flags, code.co_code, code.co_consts,
code.co_names, code.co_varnames, filename,
location, code.co_firstlineno,
code.co_lnotab, (), ())
except Exception as e:
pass
# execute the code and catch the new traceback
try:
exec(code, globals, locals)
except:
exc_info = sys.exc_info()
new_tb = exc_info[2].tb_next
# return without this frame
return exc_info[:2] + (new_tb,)
return tputil.make_proxy(controller, obj=tb)
def _init_ugly_crap():
"""This function implements a few ugly things so that we can patch the
traceback objects. The function returned allows resetting `tb_next` on
any python traceback object. Do not attempt to use this on non cpython
interpreters
"""
else:
# Use ctypes to assign tb_next at the C level since it's read-only
# from Python.
import ctypes
from types import TracebackType
if PY2:
# figure out size of _Py_ssize_t for Python 2:
if hasattr(ctypes.pythonapi, 'Py_InitModule4_64'):
_Py_ssize_t = ctypes.c_int64
else:
_Py_ssize_t = ctypes.c_int
else:
# platform ssize_t on Python 3
_Py_ssize_t = ctypes.c_ssize_t
# regular python
class _PyObject(ctypes.Structure):
pass
_PyObject._fields_ = [
('ob_refcnt', _Py_ssize_t),
('ob_type', ctypes.POINTER(_PyObject))
]
# python with trace
if hasattr(sys, 'getobjects'):
class _PyObject(ctypes.Structure):
pass
_PyObject._fields_ = [
('_ob_next', ctypes.POINTER(_PyObject)),
('_ob_prev', ctypes.POINTER(_PyObject)),
('ob_refcnt', _Py_ssize_t),
('ob_type', ctypes.POINTER(_PyObject))
class _CTraceback(ctypes.Structure):
_fields_ = [
# Extra PyObject slots when compiled with Py_TRACE_REFS.
("PyObject_HEAD", ctypes.c_byte * object().__sizeof__()),
# Only care about tb_next as an object, not a traceback.
("tb_next", ctypes.py_object),
]
class _Traceback(_PyObject):
pass
_Traceback._fields_ = [
('tb_next', ctypes.POINTER(_Traceback)),
('tb_frame', ctypes.POINTER(_PyObject)),
('tb_lasti', ctypes.c_int),
('tb_lineno', ctypes.c_int)
]
def tb_set_next(tb, tb_next):
c_tb = _CTraceback.from_address(id(tb))
def tb_set_next(tb, next):
"""Set the tb_next attribute of a traceback object."""
if not (isinstance(tb, TracebackType) and
(next is None or isinstance(next, TracebackType))):
raise TypeError('tb_set_next arguments must be traceback objects')
obj = _Traceback.from_address(id(tb))
# Clear out the old tb_next.
if tb.tb_next is not None:
old = _Traceback.from_address(id(tb.tb_next))
old.ob_refcnt -= 1
if next is None:
obj.tb_next = ctypes.POINTER(_Traceback)()
else:
next = _Traceback.from_address(id(next))
next.ob_refcnt += 1
obj.tb_next = ctypes.pointer(next)
c_tb_next = ctypes.py_object(tb.tb_next)
c_tb.tb_next = ctypes.py_object()
ctypes.pythonapi.Py_DecRef(c_tb_next)
return tb_set_next
# Assign the new tb_next.
if tb_next is not None:
c_tb_next = ctypes.py_object(tb_next)
ctypes.pythonapi.Py_IncRef(c_tb_next)
c_tb.tb_next = c_tb_next
# try to get a tb_set_next implementation if we don't have transparent
# proxies.
tb_set_next = None
if tproxy is None:
try:
tb_set_next = _init_ugly_crap()
except:
pass
del _init_ugly_crap
return tb
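# A minimal sketch of the calling convention for rewrite_traceback_stack: it
# must run inside an `except` block, and the (exc_type, exc_value, tb) triple
# it returns is re-raised so the traceback points at template source lines.
# This mirrors Environment.handle_exception further down in this diff;
# `run_compiled_template` is a hypothetical callable.
from jinja2._compat import reraise
from jinja2.debug import rewrite_traceback_stack

def run_with_template_tracebacks(run_compiled_template, source=None):
    try:
        return run_compiled_template()
    except Exception:
        reraise(*rewrite_traceback_stack(source=source))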

@@ -1,56 +1,44 @@
# -*- coding: utf-8 -*-
"""
jinja2.defaults
~~~~~~~~~~~~~~~
Jinja default filters and tags.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
from jinja2._compat import range_type
from jinja2.utils import generate_lorem_ipsum, Cycler, Joiner, Namespace
from ._compat import range_type
from .filters import FILTERS as DEFAULT_FILTERS # noqa: F401
from .tests import TESTS as DEFAULT_TESTS # noqa: F401
from .utils import Cycler
from .utils import generate_lorem_ipsum
from .utils import Joiner
from .utils import Namespace
# defaults for the parser / lexer
BLOCK_START_STRING = '{%'
BLOCK_END_STRING = '%}'
VARIABLE_START_STRING = '{{'
VARIABLE_END_STRING = '}}'
COMMENT_START_STRING = '{#'
COMMENT_END_STRING = '#}'
BLOCK_START_STRING = "{%"
BLOCK_END_STRING = "%}"
VARIABLE_START_STRING = "{{"
VARIABLE_END_STRING = "}}"
COMMENT_START_STRING = "{#"
COMMENT_END_STRING = "#}"
LINE_STATEMENT_PREFIX = None
LINE_COMMENT_PREFIX = None
TRIM_BLOCKS = False
LSTRIP_BLOCKS = False
NEWLINE_SEQUENCE = '\n'
NEWLINE_SEQUENCE = "\n"
KEEP_TRAILING_NEWLINE = False
# default filters, tests and namespace
from jinja2.filters import FILTERS as DEFAULT_FILTERS
from jinja2.tests import TESTS as DEFAULT_TESTS
DEFAULT_NAMESPACE = {
'range': range_type,
'dict': dict,
'lipsum': generate_lorem_ipsum,
'cycler': Cycler,
'joiner': Joiner,
'namespace': Namespace
}
DEFAULT_NAMESPACE = {
"range": range_type,
"dict": dict,
"lipsum": generate_lorem_ipsum,
"cycler": Cycler,
"joiner": Joiner,
"namespace": Namespace,
}
# default policies
DEFAULT_POLICIES = {
'compiler.ascii_str': True,
'urlize.rel': 'noopener',
'urlize.target': None,
'truncate.leeway': 5,
'json.dumps_function': None,
'json.dumps_kwargs': {'sort_keys': True},
'ext.i18n.trimmed': False,
"compiler.ascii_str": True,
"urlize.rel": "noopener",
"urlize.target": None,
"truncate.leeway": 5,
"json.dumps_function": None,
"json.dumps_kwargs": {"sort_keys": True},
"ext.i18n.trimmed": False,
}
# export all constants
__all__ = tuple(x for x in locals().keys() if x.isupper())
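# The delimiter constants above are only the Environment defaults; a sketch of
# overriding them per environment, e.g. when "{{" would collide with another
# tool (the replacement delimiters are illustrative):
from jinja2 import Environment

env = Environment(
    block_start_string="<%", block_end_string="%>",
    variable_start_string="<<", variable_end_string=">>",
)
print(env.from_string("<% if name %>Hello << name >>!<% endif %>").render(name="world"))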

@@ -1,60 +1,83 @@
# -*- coding: utf-8 -*-
"""
jinja2.environment
~~~~~~~~~~~~~~~~~~
Provides a class that holds runtime and parsing time options.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""Classes for managing templates and their runtime and compile time
options.
"""
import os
import sys
import weakref
from functools import reduce, partial
from jinja2 import nodes
from jinja2.defaults import BLOCK_START_STRING, \
BLOCK_END_STRING, VARIABLE_START_STRING, VARIABLE_END_STRING, \
COMMENT_START_STRING, COMMENT_END_STRING, LINE_STATEMENT_PREFIX, \
LINE_COMMENT_PREFIX, TRIM_BLOCKS, NEWLINE_SEQUENCE, \
DEFAULT_FILTERS, DEFAULT_TESTS, DEFAULT_NAMESPACE, \
DEFAULT_POLICIES, KEEP_TRAILING_NEWLINE, LSTRIP_BLOCKS
from jinja2.lexer import get_lexer, TokenStream
from jinja2.parser import Parser
from jinja2.nodes import EvalContext
from jinja2.compiler import generate, CodeGenerator
from jinja2.runtime import Undefined, new_context, Context
from jinja2.exceptions import TemplateSyntaxError, TemplateNotFound, \
TemplatesNotFound, TemplateRuntimeError
from jinja2.utils import import_string, LRUCache, Markup, missing, \
concat, consume, internalcode, have_async_gen
from jinja2._compat import imap, ifilter, string_types, iteritems, \
text_type, reraise, implements_iterator, implements_to_string, \
encode_filename, PY2, PYPY
from functools import partial
from functools import reduce
from markupsafe import Markup
from . import nodes
from ._compat import encode_filename
from ._compat import implements_iterator
from ._compat import implements_to_string
from ._compat import iteritems
from ._compat import PY2
from ._compat import PYPY
from ._compat import reraise
from ._compat import string_types
from ._compat import text_type
from .compiler import CodeGenerator
from .compiler import generate
from .defaults import BLOCK_END_STRING
from .defaults import BLOCK_START_STRING
from .defaults import COMMENT_END_STRING
from .defaults import COMMENT_START_STRING
from .defaults import DEFAULT_FILTERS
from .defaults import DEFAULT_NAMESPACE
from .defaults import DEFAULT_POLICIES
from .defaults import DEFAULT_TESTS
from .defaults import KEEP_TRAILING_NEWLINE
from .defaults import LINE_COMMENT_PREFIX
from .defaults import LINE_STATEMENT_PREFIX
from .defaults import LSTRIP_BLOCKS
from .defaults import NEWLINE_SEQUENCE
from .defaults import TRIM_BLOCKS
from .defaults import VARIABLE_END_STRING
from .defaults import VARIABLE_START_STRING
from .exceptions import TemplateNotFound
from .exceptions import TemplateRuntimeError
from .exceptions import TemplatesNotFound
from .exceptions import TemplateSyntaxError
from .exceptions import UndefinedError
from .lexer import get_lexer
from .lexer import TokenStream
from .nodes import EvalContext
from .parser import Parser
from .runtime import Context
from .runtime import new_context
from .runtime import Undefined
from .utils import concat
from .utils import consume
from .utils import have_async_gen
from .utils import import_string
from .utils import internalcode
from .utils import LRUCache
from .utils import missing
# for direct template usage we have up to ten living environments
_spontaneous_environments = LRUCache(10)
# the function to create jinja traceback objects. This is dynamically
# imported on the first exception in the exception handler.
_make_traceback = None
def get_spontaneous_environment(cls, *args):
"""Return a new spontaneous environment. A spontaneous environment
is used for templates created directly rather than through an
existing environment.
def get_spontaneous_environment(*args):
"""Return a new spontaneous environment. A spontaneous environment is an
unnamed and unaccessible (in theory) environment that is used for
templates generated from a string and not from the file system.
:param cls: Environment class to create.
:param args: Positional arguments passed to environment.
"""
key = (cls, args)
try:
env = _spontaneous_environments.get(args)
except TypeError:
return Environment(*args)
if env is not None:
return _spontaneous_environments[key]
except KeyError:
_spontaneous_environments[key] = env = cls(*args)
env.shared = True
return env
_spontaneous_environments[args] = env = Environment(*args)
env.shared = True
return env
def create_cache(size):
@@ -93,20 +116,25 @@ def fail_for_missing_callable(string, name):
try:
name._fail_with_undefined_error()
except Exception as e:
msg = '%s (%s; did you forget to quote the callable name?)' % (msg, e)
msg = "%s (%s; did you forget to quote the callable name?)" % (msg, e)
raise TemplateRuntimeError(msg)
def _environment_sanity_check(environment):
"""Perform a sanity check on the environment."""
assert issubclass(environment.undefined, Undefined), 'undefined must ' \
'be a subclass of undefined because filters depend on it.'
assert environment.block_start_string != \
environment.variable_start_string != \
environment.comment_start_string, 'block, variable and comment ' \
'start strings must be different'
assert environment.newline_sequence in ('\r', '\r\n', '\n'), \
'newline_sequence set to unknown line ending string.'
assert issubclass(
environment.undefined, Undefined
), "undefined must be a subclass of undefined because filters depend on it."
assert (
environment.block_start_string
!= environment.variable_start_string
!= environment.comment_start_string
), "block, variable and comment start strings must be different"
assert environment.newline_sequence in (
"\r",
"\r\n",
"\n",
), "newline_sequence set to unknown line ending string."
return environment
@@ -191,7 +219,7 @@ class Environment(object):
`autoescape`
If set to ``True`` the XML/HTML autoescaping feature is enabled by
default. For more details about autoescaping see
:class:`~jinja2.utils.Markup`. As of Jinja 2.4 this can also
:class:`~markupsafe.Markup`. As of Jinja 2.4 this can also
be a callable that is passed the template name and has to
return ``True`` or ``False`` depending on autoescape should be
enabled by default.
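# A sketch of the callable form of `autoescape` mentioned above, using Jinja's
# select_autoescape helper so HTML/XML templates are escaped while e.g. *.txt
# templates are not; the "yourapp" package name is illustrative.
from jinja2 import Environment, PackageLoader, select_autoescape

env = Environment(
    loader=PackageLoader("yourapp", "templates"),
    autoescape=select_autoescape(["html", "xml"]),
)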
@@ -249,10 +277,6 @@ class Environment(object):
#: must not be modified
shared = False
#: these are currently EXPERIMENTAL undocumented features.
exception_handler = None
exception_formatter = None
#: the class that is used for code generation. See
#: :class:`~jinja2.compiler.CodeGenerator` for more information.
code_generator_class = CodeGenerator
@@ -261,29 +285,31 @@ class Environment(object):
#: :class:`~jinja2.runtime.Context` for more information.
context_class = Context
def __init__(self,
block_start_string=BLOCK_START_STRING,
block_end_string=BLOCK_END_STRING,
variable_start_string=VARIABLE_START_STRING,
variable_end_string=VARIABLE_END_STRING,
comment_start_string=COMMENT_START_STRING,
comment_end_string=COMMENT_END_STRING,
line_statement_prefix=LINE_STATEMENT_PREFIX,
line_comment_prefix=LINE_COMMENT_PREFIX,
trim_blocks=TRIM_BLOCKS,
lstrip_blocks=LSTRIP_BLOCKS,
newline_sequence=NEWLINE_SEQUENCE,
keep_trailing_newline=KEEP_TRAILING_NEWLINE,
extensions=(),
optimized=True,
undefined=Undefined,
finalize=None,
autoescape=False,
loader=None,
cache_size=400,
auto_reload=True,
bytecode_cache=None,
enable_async=False):
def __init__(
self,
block_start_string=BLOCK_START_STRING,
block_end_string=BLOCK_END_STRING,
variable_start_string=VARIABLE_START_STRING,
variable_end_string=VARIABLE_END_STRING,
comment_start_string=COMMENT_START_STRING,
comment_end_string=COMMENT_END_STRING,
line_statement_prefix=LINE_STATEMENT_PREFIX,
line_comment_prefix=LINE_COMMENT_PREFIX,
trim_blocks=TRIM_BLOCKS,
lstrip_blocks=LSTRIP_BLOCKS,
newline_sequence=NEWLINE_SEQUENCE,
keep_trailing_newline=KEEP_TRAILING_NEWLINE,
extensions=(),
optimized=True,
undefined=Undefined,
finalize=None,
autoescape=False,
loader=None,
cache_size=400,
auto_reload=True,
bytecode_cache=None,
enable_async=False,
):
# !!Important notice!!
# The constructor accepts quite a few arguments that should be
# passed by keyword rather than position. However it's important to
@@ -334,6 +360,9 @@ def __init__(self,
self.enable_async = enable_async
self.is_async = self.enable_async and have_async_gen
if self.is_async:
# runs patch_all() to enable async support
from . import asyncsupport # noqa: F401
_environment_sanity_check(self)
@@ -353,15 +382,28 @@ def extend(self, **attributes):
if not hasattr(self, key):
setattr(self, key, value)
def overlay(self, block_start_string=missing, block_end_string=missing,
variable_start_string=missing, variable_end_string=missing,
comment_start_string=missing, comment_end_string=missing,
line_statement_prefix=missing, line_comment_prefix=missing,
trim_blocks=missing, lstrip_blocks=missing,
extensions=missing, optimized=missing,
undefined=missing, finalize=missing, autoescape=missing,
loader=missing, cache_size=missing, auto_reload=missing,
bytecode_cache=missing):
def overlay(
self,
block_start_string=missing,
block_end_string=missing,
variable_start_string=missing,
variable_end_string=missing,
comment_start_string=missing,
comment_end_string=missing,
line_statement_prefix=missing,
line_comment_prefix=missing,
trim_blocks=missing,
lstrip_blocks=missing,
extensions=missing,
optimized=missing,
undefined=missing,
finalize=missing,
autoescape=missing,
loader=missing,
cache_size=missing,
auto_reload=missing,
bytecode_cache=missing,
):
"""Create a new overlay environment that shares all the data with the
current environment except for cache and the overridden attributes.
Extensions cannot be removed for an overlayed environment. An overlayed
@@ -374,7 +416,7 @@ def overlay(self, block_start_string=missing, block_end_string=missing,
through.
"""
args = dict(locals())
del args['self'], args['cache_size'], args['extensions']
del args["self"], args["cache_size"], args["extensions"]
rv = object.__new__(self.__class__)
rv.__dict__.update(self.__dict__)
@@ -402,8 +444,7 @@ def overlay(self, block_start_string=missing, block_end_string=missing,
def iter_extensions(self):
"""Iterates over the extensions by priority."""
return iter(sorted(self.extensions.values(),
key=lambda x: x.priority))
return iter(sorted(self.extensions.values(), key=lambda x: x.priority))
def getitem(self, obj, argument):
"""Get an item or attribute of an object but prefer the item."""
@@ -435,8 +476,9 @@ def getattr(self, obj, attribute):
except (TypeError, LookupError, AttributeError):
return self.undefined(obj=obj, name=attribute)
def call_filter(self, name, value, args=None, kwargs=None,
context=None, eval_ctx=None):
def call_filter(
self, name, value, args=None, kwargs=None, context=None, eval_ctx=None
):
"""Invokes a filter on a value the same way the compiler does it.
Note that on Python 3 this might return a coroutine in case the
@@ -448,21 +490,22 @@ def call_filter(self, name, value, args=None, kwargs=None,
"""
func = self.filters.get(name)
if func is None:
fail_for_missing_callable('no filter named %r', name)
fail_for_missing_callable("no filter named %r", name)
args = [value] + list(args or ())
if getattr(func, 'contextfilter', False):
if getattr(func, "contextfilter", False) is True:
if context is None:
raise TemplateRuntimeError('Attempted to invoke context '
'filter without context')
raise TemplateRuntimeError(
"Attempted to invoke context filter without context"
)
args.insert(0, context)
elif getattr(func, 'evalcontextfilter', False):
elif getattr(func, "evalcontextfilter", False) is True:
if eval_ctx is None:
if context is not None:
eval_ctx = context.eval_ctx
else:
eval_ctx = EvalContext(self)
args.insert(0, eval_ctx)
elif getattr(func, 'environmentfilter', False):
elif getattr(func, "environmentfilter", False) is True:
args.insert(0, self)
return func(*args, **(kwargs or {}))
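# An illustrative use of call_filter with a context filter: because the filter
# is marked contextfilter, call_filter inserts the context as the first
# argument. The filter name and variables are made up.
import jinja2

@jinja2.contextfilter
def greet(context, value):
    return "%s, %s!" % (context.get("greeting", "Hello"), value)

env = jinja2.Environment()
env.filters["greet"] = greet
ctx = env.from_string("").new_context({"greeting": "Hi"})
print(env.call_filter("greet", "world", context=ctx))  # -> "Hi, world!"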
@@ -473,7 +516,7 @@ def call_test(self, name, value, args=None, kwargs=None):
"""
func = self.tests.get(name)
if func is None:
fail_for_missing_callable('no test named %r', name)
fail_for_missing_callable("no test named %r", name)
return func(value, *(args or ()), **(kwargs or {}))
@internalcode
@@ -483,14 +526,13 @@ def parse(self, source, name=None, filename=None):
executable source- or bytecode. This is useful for debugging or to
extract information from templates.
If you are :ref:`developing Jinja2 extensions <writing-extensions>`
If you are :ref:`developing Jinja extensions <writing-extensions>`
this gives you a good overview of the node tree generated.
"""
try:
return self._parse(source, name, filename)
except TemplateSyntaxError:
exc_info = sys.exc_info()
self.handle_exception(exc_info, source_hint=source)
self.handle_exception(source=source)
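# A sketch of the "extract information from templates" use case of parse(),
# using the jinja2.meta helpers on the returned node tree:
from jinja2 import Environment, meta

env = Environment()
ast = env.parse("{{ config.title }} {{ missing_var }}")
print(meta.find_undeclared_variables(ast))  # {'config', 'missing_var'}
refs = env.parse("{% extends 'base.html' %}")
print(list(meta.find_referenced_templates(refs)))  # ['base.html']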
def _parse(self, source, name, filename):
"""Internal parsing function used by `parse` and `compile`."""
@@ -510,16 +552,18 @@ def lex(self, source, name=None, filename=None):
try:
return self.lexer.tokeniter(source, name, filename)
except TemplateSyntaxError:
exc_info = sys.exc_info()
self.handle_exception(exc_info, source_hint=source)
self.handle_exception(source=source)
def preprocess(self, source, name=None, filename=None):
"""Preprocesses the source with all extensions. This is automatically
called for all parsing and compiling methods but *not* for :meth:`lex`
because there you usually only want the actual source tokenized.
"""
return reduce(lambda s, e: e.preprocess(s, name, filename),
self.iter_extensions(), text_type(source))
return reduce(
lambda s, e: e.preprocess(s, name, filename),
self.iter_extensions(),
text_type(source),
)
def _tokenize(self, source, name, filename=None, state=None):
"""Called by the parser to do the preprocessing and filtering
@@ -539,8 +583,14 @@ def _generate(self, source, name, filename, defer_init=False):
.. versionadded:: 2.5
"""
return generate(source, self, name, filename, defer_init=defer_init,
optimized=self.optimized)
return generate(
source,
self,
name,
filename,
defer_init=defer_init,
optimized=self.optimized,
)
def _compile(self, source, filename):
"""Internal hook that can be overridden to hook a different compile
@@ -548,11 +598,10 @@ def _compile(self, source, filename):
.. versionadded:: 2.5
"""
return compile(source, filename, 'exec')
return compile(source, filename, "exec")
@internalcode
def compile(self, source, name=None, filename=None, raw=False,
defer_init=False):
def compile(self, source, name=None, filename=None, raw=False, defer_init=False):
"""Compile a node or template source code. The `name` parameter is
the load name of the template after it was joined using
:meth:`join_path` if necessary, not the filename on the file system.
@@ -577,18 +626,16 @@ def compile(self, source, name=None, filename=None, raw=False,
if isinstance(source, string_types):
source_hint = source
source = self._parse(source, name, filename)
source = self._generate(source, name, filename,
defer_init=defer_init)
source = self._generate(source, name, filename, defer_init=defer_init)
if raw:
return source
if filename is None:
filename = '<template>'
filename = "<template>"
else:
filename = encode_filename(filename)
return self._compile(source, filename)
except TemplateSyntaxError:
exc_info = sys.exc_info()
self.handle_exception(exc_info, source_hint=source_hint)
self.handle_exception(source=source_hint)
def compile_expression(self, source, undefined_to_none=True):
"""A handy helper method that returns a callable that accepts keyword
@@ -618,26 +665,32 @@ def compile_expression(self, source, undefined_to_none=True):
.. versionadded:: 2.1
"""
parser = Parser(self, source, state='variable')
exc_info = None
parser = Parser(self, source, state="variable")
try:
expr = parser.parse_expression()
if not parser.stream.eos:
raise TemplateSyntaxError('chunk after expression',
parser.stream.current.lineno,
None, None)
raise TemplateSyntaxError(
"chunk after expression", parser.stream.current.lineno, None, None
)
expr.set_environment(self)
except TemplateSyntaxError:
exc_info = sys.exc_info()
if exc_info is not None:
self.handle_exception(exc_info, source_hint=source)
body = [nodes.Assign(nodes.Name('result', 'store'), expr, lineno=1)]
if sys.exc_info() is not None:
self.handle_exception(source=source)
body = [nodes.Assign(nodes.Name("result", "store"), expr, lineno=1)]
template = self.from_string(nodes.Template(body, lineno=1))
return TemplateExpression(template, undefined_to_none)
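# A short sketch of compile_expression: it turns a bare expression into a
# callable evaluated against keyword arguments, with undefined names mapped to
# None by default (undefined_to_none=True).
from jinja2 import Environment

env = Environment()
expr = env.compile_expression("foo < 42")
print(expr(foo=23))  # True
print(expr())        # None, because 'foo' is undefined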
def compile_templates(self, target, extensions=None, filter_func=None,
zip='deflated', log_function=None,
ignore_errors=True, py_compile=False):
def compile_templates(
self,
target,
extensions=None,
filter_func=None,
zip="deflated",
log_function=None,
ignore_errors=True,
py_compile=False,
):
"""Finds all the templates the loader can find, compiles them
and stores them in `target`. If `zip` is `None`, instead of in a
zipfile, the templates will be stored in a directory.
@@ -660,42 +713,52 @@ def compile_templates(self, target, extensions=None, filter_func=None,
.. versionadded:: 2.4
"""
from jinja2.loaders import ModuleLoader
from .loaders import ModuleLoader
if log_function is None:
log_function = lambda x: None
def log_function(x):
pass
if py_compile:
if not PY2 or PYPY:
from warnings import warn
warn(Warning('py_compile has no effect on pypy or Python 3'))
import warnings
warnings.warn(
"'py_compile=True' has no effect on PyPy or Python"
" 3 and will be removed in version 3.0",
DeprecationWarning,
stacklevel=2,
)
py_compile = False
else:
import imp
import marshal
py_header = imp.get_magic() + \
u'\xff\xff\xff\xff'.encode('iso-8859-15')
py_header = imp.get_magic() + u"\xff\xff\xff\xff".encode("iso-8859-15")
# Python 3.3 added a source filesize to the header
if sys.version_info >= (3, 3):
py_header += u'\x00\x00\x00\x00'.encode('iso-8859-15')
py_header += u"\x00\x00\x00\x00".encode("iso-8859-15")
def write_file(filename, data, mode):
def write_file(filename, data):
if zip:
info = ZipInfo(filename)
info.external_attr = 0o755 << 16
zip_file.writestr(info, data)
else:
f = open(os.path.join(target, filename), mode)
try:
if isinstance(data, text_type):
data = data.encode("utf8")
with open(os.path.join(target, filename), "wb") as f:
f.write(data)
finally:
f.close()
if zip is not None:
from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED, ZIP_STORED
zip_file = ZipFile(target, 'w', dict(deflated=ZIP_DEFLATED,
stored=ZIP_STORED)[zip])
zip_file = ZipFile(
target, "w", dict(deflated=ZIP_DEFLATED, stored=ZIP_STORED)[zip]
)
log_function('Compiling into Zip archive "%s"' % target)
else:
if not os.path.isdir(target):
@@ -717,18 +780,16 @@ def write_file(filename, data, mode):
if py_compile:
c = self._compile(code, encode_filename(filename))
write_file(filename + 'c', py_header +
marshal.dumps(c), 'wb')
log_function('Byte-compiled "%s" as %s' %
(name, filename + 'c'))
write_file(filename + "c", py_header + marshal.dumps(c))
log_function('Byte-compiled "%s" as %s' % (name, filename + "c"))
else:
write_file(filename, code, 'w')
write_file(filename, code)
log_function('Compiled "%s" as %s' % (name, filename))
finally:
if zip:
zip_file.close()
log_function('Finished compiling templates')
log_function("Finished compiling templates")
def list_templates(self, extensions=None, filter_func=None):
"""Returns a list of templates for this environment. This requires
@@ -746,38 +807,29 @@ def list_templates(self, extensions=None, filter_func=None):
.. versionadded:: 2.4
"""
x = self.loader.list_templates()
names = self.loader.list_templates()
if extensions is not None:
if filter_func is not None:
raise TypeError('either extensions or filter_func '
'can be passed, but not both')
filter_func = lambda x: '.' in x and \
x.rsplit('.', 1)[1] in extensions
if filter_func is not None:
x = list(ifilter(filter_func, x))
return x
raise TypeError(
"either extensions or filter_func can be passed, but not both"
)
def handle_exception(self, exc_info=None, rendered=False, source_hint=None):
def filter_func(x):
return "." in x and x.rsplit(".", 1)[1] in extensions
if filter_func is not None:
names = [name for name in names if filter_func(name)]
return names
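# A sketch of the two mutually exclusive filtering options of list_templates,
# assuming a loader able to enumerate its templates:
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))
html_only = env.list_templates(extensions=["html"])
mail_only = env.list_templates(filter_func=lambda name: name.startswith("mail/"))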
def handle_exception(self, source=None):
"""Exception handling helper. This is used internally to either raise
rewritten exceptions or return a rendered traceback for the template.
"""
global _make_traceback
if exc_info is None:
exc_info = sys.exc_info()
from .debug import rewrite_traceback_stack
# the debugging module is imported when it's used for the first time.
# we're doing a lot of stuff there and for applications that do not
# get any exceptions in template rendering there is no need to load
# all of that.
if _make_traceback is None:
from jinja2.debug import make_traceback as _make_traceback
traceback = _make_traceback(exc_info, source_hint)
if rendered and self.exception_formatter is not None:
return self.exception_formatter(traceback)
if self.exception_handler is not None:
self.exception_handler(traceback)
exc_type, exc_value, tb = traceback.standard_exc_info
reraise(exc_type, exc_value, tb)
reraise(*rewrite_traceback_stack(source=source))
def join_path(self, template, parent):
"""Join a template with the parent. By default all the lookups are
@@ -794,12 +846,13 @@ def join_path(self, template, parent):
@internalcode
def _load_template(self, name, globals):
if self.loader is None:
raise TypeError('no loader for this environment specified')
raise TypeError("no loader for this environment specified")
cache_key = (weakref.ref(self.loader), name)
if self.cache is not None:
template = self.cache.get(cache_key)
if template is not None and (not self.auto_reload or
template.is_up_to_date):
if template is not None and (
not self.auto_reload or template.is_up_to_date
):
return template
template = self.loader.load(self, name, globals)
if self.cache is not None:
@@ -835,15 +888,24 @@ def select_template(self, names, parent=None, globals=None):
before it fails. If it cannot find any of the templates, it will
raise a :exc:`TemplatesNotFound` exception.
.. versionadded:: 2.3
.. versionchanged:: 2.11
If names is :class:`Undefined`, an :exc:`UndefinedError` is
raised instead. If no templates were found and names
contains :class:`Undefined`, the message is more helpful.
.. versionchanged:: 2.4
If `names` contains a :class:`Template` object it is returned
from the function unchanged.
.. versionadded:: 2.3
"""
if isinstance(names, Undefined):
names._fail_with_undefined_error()
if not names:
raise TemplatesNotFound(message=u'Tried to select from an empty list '
u'of templates.')
raise TemplatesNotFound(
message=u"Tried to select from an empty list " u"of templates."
)
globals = self.make_globals(globals)
for name in names:
if isinstance(name, Template):
@@ -852,20 +914,19 @@ def select_template(self, names, parent=None, globals=None):
name = self.join_path(name, parent)
try:
return self._load_template(name, globals)
except TemplateNotFound:
except (TemplateNotFound, UndefinedError):
pass
raise TemplatesNotFound(names)
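# A sketch of select_template's fallback behaviour: the first name that loads
# wins, so a theme-specific template can shadow a default one. The template
# names are illustrative.
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))
tmpl = env.select_template(["theme/page.html", "defaults/page.html"])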
@internalcode
def get_or_select_template(self, template_name_or_list,
parent=None, globals=None):
def get_or_select_template(self, template_name_or_list, parent=None, globals=None):
"""Does a typecheck and dispatches to :meth:`select_template`
if an iterable of template names is given, otherwise to
:meth:`get_template`.
.. versionadded:: 2.3
"""
if isinstance(template_name_or_list, string_types):
if isinstance(template_name_or_list, (string_types, Undefined)):
return self.get_template(template_name_or_list, parent, globals)
elif isinstance(template_name_or_list, Template):
return template_name_or_list
@@ -916,32 +977,57 @@ class Template(object):
StopIteration
"""
def __new__(cls, source,
block_start_string=BLOCK_START_STRING,
block_end_string=BLOCK_END_STRING,
variable_start_string=VARIABLE_START_STRING,
variable_end_string=VARIABLE_END_STRING,
comment_start_string=COMMENT_START_STRING,
comment_end_string=COMMENT_END_STRING,
line_statement_prefix=LINE_STATEMENT_PREFIX,
line_comment_prefix=LINE_COMMENT_PREFIX,
trim_blocks=TRIM_BLOCKS,
lstrip_blocks=LSTRIP_BLOCKS,
newline_sequence=NEWLINE_SEQUENCE,
keep_trailing_newline=KEEP_TRAILING_NEWLINE,
extensions=(),
optimized=True,
undefined=Undefined,
finalize=None,
autoescape=False,
enable_async=False):
#: Type of environment to create when creating a template directly
#: rather than through an existing environment.
environment_class = Environment
def __new__(
cls,
source,
block_start_string=BLOCK_START_STRING,
block_end_string=BLOCK_END_STRING,
variable_start_string=VARIABLE_START_STRING,
variable_end_string=VARIABLE_END_STRING,
comment_start_string=COMMENT_START_STRING,
comment_end_string=COMMENT_END_STRING,
line_statement_prefix=LINE_STATEMENT_PREFIX,
line_comment_prefix=LINE_COMMENT_PREFIX,
trim_blocks=TRIM_BLOCKS,
lstrip_blocks=LSTRIP_BLOCKS,
newline_sequence=NEWLINE_SEQUENCE,
keep_trailing_newline=KEEP_TRAILING_NEWLINE,
extensions=(),
optimized=True,
undefined=Undefined,
finalize=None,
autoescape=False,
enable_async=False,
):
env = get_spontaneous_environment(
block_start_string, block_end_string, variable_start_string,
variable_end_string, comment_start_string, comment_end_string,
line_statement_prefix, line_comment_prefix, trim_blocks,
lstrip_blocks, newline_sequence, keep_trailing_newline,
frozenset(extensions), optimized, undefined, finalize, autoescape,
None, 0, False, None, enable_async)
cls.environment_class,
block_start_string,
block_end_string,
variable_start_string,
variable_end_string,
comment_start_string,
comment_end_string,
line_statement_prefix,
line_comment_prefix,
trim_blocks,
lstrip_blocks,
newline_sequence,
keep_trailing_newline,
frozenset(extensions),
optimized,
undefined,
finalize,
autoescape,
None,
0,
False,
None,
enable_async,
)
return env.from_string(source, template_class=cls)
@classmethod
@@ -949,10 +1035,7 @@ def from_code(cls, environment, code, globals, uptodate=None):
"""Creates a template object from compiled code and the globals. This
is used by the loaders and environment to create a template object.
"""
namespace = {
'environment': environment,
'__file__': code.co_filename
}
namespace = {"environment": environment, "__file__": code.co_filename}
exec(code, namespace)
rv = cls._from_namespace(environment, namespace, globals)
rv._uptodate = uptodate
@@ -972,21 +1055,21 @@ def _from_namespace(cls, environment, namespace, globals):
t = object.__new__(cls)
t.environment = environment
t.globals = globals
t.name = namespace['name']
t.filename = namespace['__file__']
t.blocks = namespace['blocks']
t.name = namespace["name"]
t.filename = namespace["__file__"]
t.blocks = namespace["blocks"]
# render function and module
t.root_render_func = namespace['root']
t.root_render_func = namespace["root"]
t._module = None
# debug and loader helpers
t._debug_info = namespace['debug_info']
t._debug_info = namespace["debug_info"]
t._uptodate = None
# store the reference
namespace['environment'] = environment
namespace['__jinja_template__'] = t
namespace["environment"] = environment
namespace["__jinja_template__"] = t
return t
@@ -1004,8 +1087,7 @@ def render(self, *args, **kwargs):
try:
return concat(self.root_render_func(self.new_context(vars)))
except Exception:
exc_info = sys.exc_info()
return self.environment.handle_exception(exc_info, True)
self.environment.handle_exception()
def render_async(self, *args, **kwargs):
"""This works similar to :meth:`render` but returns a coroutine
@@ -1017,8 +1099,9 @@ def render_async(self, *args, **kwargs):
await template.render_async(knights='that say nih; asynchronously')
"""
# see asyncsupport for the actual implementation
raise NotImplementedError('This feature is not available for this '
'version of Python')
raise NotImplementedError(
"This feature is not available for this version of Python"
)
def stream(self, *args, **kwargs):
"""Works exactly like :meth:`generate` but returns a
@@ -1039,29 +1122,28 @@ def generate(self, *args, **kwargs):
for event in self.root_render_func(self.new_context(vars)):
yield event
except Exception:
exc_info = sys.exc_info()
else:
return
yield self.environment.handle_exception(exc_info, True)
yield self.environment.handle_exception()
def generate_async(self, *args, **kwargs):
"""An async version of :meth:`generate`. Works very similarly but
returns an async iterator instead.
"""
# see asyncsupport for the actual implementation
raise NotImplementedError('This feature is not available for this '
'version of Python')
raise NotImplementedError(
"This feature is not available for this version of Python"
)
def new_context(self, vars=None, shared=False, locals=None):
"""Create a new :class:`Context` for this template. The vars
provided will be passed to the template. Per default the globals
are added to the context. If shared is set to `True` the data
is passed as it to the context without adding the globals.
is passed as is to the context without adding the globals.
`locals` can be a dict of local variables for internal usage.
"""
return new_context(self.environment, self.name, self.blocks,
vars, shared, self.globals, locals)
return new_context(
self.environment, self.name, self.blocks, vars, shared, self.globals, locals
)
def make_module(self, vars=None, shared=False, locals=None):
"""This method works like the :attr:`module` attribute when called
@@ -1074,13 +1156,14 @@ def make_module(self, vars=None, shared=False, locals=None):
def make_module_async(self, vars=None, shared=False, locals=None):
"""As template module creation can invoke template code for
asynchronous exections this method must be used instead of the
asynchronous executions this method must be used instead of the
normal :meth:`make_module` one. Likewise the module attribute
becomes unavailable in async mode.
"""
# see asyncsupport for the actual implementation
raise NotImplementedError('This feature is not available for this '
'version of Python')
raise NotImplementedError(
"This feature is not available for this version of Python"
)
@internalcode
def _get_default_module(self):
@@ -1124,15 +1207,16 @@ def is_up_to_date(self):
@property
def debug_info(self):
"""The debug info mapping."""
return [tuple(imap(int, x.split('='))) for x in
self._debug_info.split('&')]
if self._debug_info:
return [tuple(map(int, x.split("="))) for x in self._debug_info.split("&")]
return []
def __repr__(self):
if self.name is None:
name = 'memory:%x' % id(self)
name = "memory:%x" % id(self)
else:
name = repr(self.name)
return '<%s %s>' % (self.__class__.__name__, name)
return "<%s %s>" % (self.__class__.__name__, name)
@implements_to_string
@@ -1145,10 +1229,12 @@ class TemplateModule(object):
def __init__(self, template, context, body_stream=None):
if body_stream is None:
if context.environment.is_async:
raise RuntimeError('Async mode requires a body stream '
'to be passed to a template module. Use '
'the async methods of the API you are '
'using.')
raise RuntimeError(
"Async mode requires a body stream "
"to be passed to a template module. Use "
"the async methods of the API you are "
"using."
)
body_stream = list(template.root_render_func(context))
self._body_stream = body_stream
self.__dict__.update(context.get_exported())
@@ -1162,10 +1248,10 @@ def __str__(self):
def __repr__(self):
if self.__name__ is None:
name = 'memory:%x' % id(self)
name = "memory:%x" % id(self)
else:
name = repr(self.__name__)
return '<%s %s>' % (self.__class__.__name__, name)
return "<%s %s>" % (self.__class__.__name__, name)
class TemplateExpression(object):
@@ -1181,7 +1267,7 @@ def __init__(self, template, undefined_to_none):
def __call__(self, *args, **kwargs):
context = self._template.new_context(dict(*args, **kwargs))
consume(self._template.root_render_func(context))
rv = context.vars['result']
rv = context.vars["result"]
if self._undefined_to_none and isinstance(rv, Undefined):
rv = None
return rv
@@ -1203,7 +1289,7 @@ def __init__(self, gen):
self._gen = gen
self.disable_buffering()
def dump(self, fp, encoding=None, errors='strict'):
def dump(self, fp, encoding=None, errors="strict"):
"""Dump the complete stream into a file or file-like object.
Per default unicode strings are written, if you want to encode
before writing specify an `encoding`.
@@ -1215,15 +1301,15 @@ def dump(self, fp, encoding=None, errors='strict'):
close = False
if isinstance(fp, string_types):
if encoding is None:
encoding = 'utf-8'
fp = open(fp, 'wb')
encoding = "utf-8"
fp = open(fp, "wb")
close = True
try:
if encoding is not None:
iterable = (x.encode(encoding, errors) for x in self)
else:
iterable = self
if hasattr(fp, 'writelines'):
if hasattr(fp, "writelines"):
fp.writelines(iterable)
else:
for item in iterable:
@@ -1259,7 +1345,7 @@ def _buffered_generator(self, size):
def enable_buffering(self, size=5):
"""Enable buffering. Buffer `size` items before yielding them."""
if size <= 1:
raise ValueError('buffer size too small')
raise ValueError("buffer size too small")
self.buffered = True
self._next = partial(next, self._buffered_generator(size))
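Taken together, the environment.py hunks above change two user-visible behaviors: handle_exception() now re-raises the original error with a traceback rewritten to point into the template source (instead of returning a formatted traceback), and select_template() skips names that raise TemplateNotFound or UndefinedError, raising TemplatesNotFound only once the whole list is exhausted. A minimal sketch of both, assuming the upgraded Jinja2 is importable as jinja2; the template names and contents here are made up for illustration:

    from jinja2 import DictLoader, Environment, TemplatesNotFound

    env = Environment(loader=DictLoader({"page.html": "{{ 1 // 0 }}"}))

    # Render-time errors propagate as the original exception type, with the
    # traceback rewritten to point at page.html rather than generated code.
    try:
        env.get_template("page.html").render()
    except ZeroDivisionError as exc:
        print("render failed:", exc)

    # select_template() falls through names it cannot load and only raises
    # TemplatesNotFound once every candidate has failed.
    print(env.select_template(["missing.html", "page.html"]).name)
    try:
        env.select_template(["missing.html", "also-missing.html"])
    except TemplatesNotFound as exc:
        print(exc)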


@@ -1,23 +1,18 @@
# -*- coding: utf-8 -*-
"""
jinja2.exceptions
~~~~~~~~~~~~~~~~~
Jinja exceptions.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
from jinja2._compat import imap, text_type, PY2, implements_to_string
from ._compat import imap
from ._compat import implements_to_string
from ._compat import PY2
from ._compat import text_type
class TemplateError(Exception):
"""Baseclass for all template errors."""
if PY2:
def __init__(self, message=None):
if message is not None:
message = text_type(message).encode('utf-8')
message = text_type(message).encode("utf-8")
Exception.__init__(self, message)
@property
@@ -25,11 +20,13 @@ def message(self):
if self.args:
message = self.args[0]
if message is not None:
return message.decode('utf-8', 'replace')
return message.decode("utf-8", "replace")
def __unicode__(self):
return self.message or u''
return self.message or u""
else:
def __init__(self, message=None):
Exception.__init__(self, message)
@@ -43,16 +40,28 @@ def message(self):
@implements_to_string
class TemplateNotFound(IOError, LookupError, TemplateError):
"""Raised if a template does not exist."""
"""Raised if a template does not exist.
.. versionchanged:: 2.11
If the given name is :class:`Undefined` and no message was
provided, an :exc:`UndefinedError` is raised.
"""
# looks weird, but removes the warning descriptor that just
# bogusly warns us about message being deprecated
message = None
def __init__(self, name, message=None):
IOError.__init__(self)
IOError.__init__(self, name)
if message is None:
from .runtime import Undefined
if isinstance(name, Undefined):
name._fail_with_undefined_error()
message = name
self.message = message
self.name = name
self.templates = [name]
@@ -66,13 +75,28 @@ class TemplatesNotFound(TemplateNotFound):
are selected. This is a subclass of :class:`TemplateNotFound`
exception, so just catching the base exception will catch both.
.. versionchanged:: 2.11
If a name in the list of names is :class:`Undefined`, a message
about it being undefined is shown rather than the empty string.
.. versionadded:: 2.2
"""
def __init__(self, names=(), message=None):
if message is None:
message = u'none of the templates given were found: ' + \
u', '.join(imap(text_type, names))
from .runtime import Undefined
parts = []
for name in names:
if isinstance(name, Undefined):
parts.append(name._undefined_message)
else:
parts.append(name)
message = u"none of the templates given were found: " + u", ".join(
imap(text_type, parts)
)
TemplateNotFound.__init__(self, names and names[-1] or None, message)
self.templates = list(names)
@@ -98,11 +122,11 @@ def __str__(self):
return self.message
# otherwise attach some stuff
location = 'line %d' % self.lineno
location = "line %d" % self.lineno
name = self.filename or self.name
if name:
location = 'File "%s", %s' % (name, location)
lines = [self.message, ' ' + location]
lines = [self.message, " " + location]
# if the source is set, add the line to the output
if self.source is not None:
@@ -111,9 +135,16 @@ def __str__(self):
except IndexError:
line = None
if line:
lines.append(' ' + line.strip())
lines.append(" " + line.strip())
return u'\n'.join(lines)
return u"\n".join(lines)
def __reduce__(self):
# https://bugs.python.org/issue1692335 Exceptions that take
# multiple required arguments have problems with pickling.
# Without this, raises TypeError: __init__() missing 1 required
# positional argument: 'lineno'
return self.__class__, (self.message, self.lineno, self.name, self.filename)
class TemplateAssertionError(TemplateSyntaxError):
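The __reduce__ hook added above addresses the pickling problem noted in its comment (bugs.python.org/issue1692335): TemplateSyntaxError takes several required constructor arguments, so without it unpickling fails with a missing-argument TypeError. A small sketch of the behavior it enables, assuming the upgraded Jinja2 is installed; the broken template is only an illustration:

    import pickle

    from jinja2 import Environment, TemplateSyntaxError

    env = Environment()
    try:
        env.from_string("{% if %}")  # deliberately invalid syntax
    except TemplateSyntaxError as exc:
        # With __reduce__ in place the exception survives a pickle round trip,
        # e.g. when it is raised inside a multiprocessing worker.
        copy = pickle.loads(pickle.dumps(exc))
        print(copy.message, copy.lineno)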


@@ -1,42 +1,49 @@
# -*- coding: utf-8 -*-
"""
jinja2.ext
~~~~~~~~~~
Jinja extensions allow to add custom tags similar to the way django custom
tags work. By default two example extensions exist: an i18n and a cache
extension.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD.
"""
"""Extension API for adding custom tags and behavior."""
import pprint
import re
from sys import version_info
from jinja2 import nodes
from jinja2.defaults import BLOCK_START_STRING, \
BLOCK_END_STRING, VARIABLE_START_STRING, VARIABLE_END_STRING, \
COMMENT_START_STRING, COMMENT_END_STRING, LINE_STATEMENT_PREFIX, \
LINE_COMMENT_PREFIX, TRIM_BLOCKS, NEWLINE_SEQUENCE, \
KEEP_TRAILING_NEWLINE, LSTRIP_BLOCKS
from jinja2.environment import Environment
from jinja2.runtime import concat
from jinja2.exceptions import TemplateAssertionError, TemplateSyntaxError
from jinja2.utils import contextfunction, import_string, Markup
from jinja2._compat import with_metaclass, string_types, iteritems
from markupsafe import Markup
from . import nodes
from ._compat import iteritems
from ._compat import string_types
from ._compat import with_metaclass
from .defaults import BLOCK_END_STRING
from .defaults import BLOCK_START_STRING
from .defaults import COMMENT_END_STRING
from .defaults import COMMENT_START_STRING
from .defaults import KEEP_TRAILING_NEWLINE
from .defaults import LINE_COMMENT_PREFIX
from .defaults import LINE_STATEMENT_PREFIX
from .defaults import LSTRIP_BLOCKS
from .defaults import NEWLINE_SEQUENCE
from .defaults import TRIM_BLOCKS
from .defaults import VARIABLE_END_STRING
from .defaults import VARIABLE_START_STRING
from .environment import Environment
from .exceptions import TemplateAssertionError
from .exceptions import TemplateSyntaxError
from .nodes import ContextReference
from .runtime import concat
from .utils import contextfunction
from .utils import import_string
# the only real useful gettext functions for a Jinja template. Note
# that ugettext must be assigned to gettext as Jinja doesn't support
# non unicode strings.
GETTEXT_FUNCTIONS = ('_', 'gettext', 'ngettext')
GETTEXT_FUNCTIONS = ("_", "gettext", "ngettext")
_ws_re = re.compile(r"\s*\n\s*")
class ExtensionRegistry(type):
"""Gives the extension an unique identifier."""
def __new__(cls, name, bases, d):
rv = type.__new__(cls, name, bases, d)
rv.identifier = rv.__module__ + '.' + rv.__name__
def __new__(mcs, name, bases, d):
rv = type.__new__(mcs, name, bases, d)
rv.identifier = rv.__module__ + "." + rv.__name__
return rv
@@ -91,10 +98,6 @@ def filter_stream(self, stream):
to filter tokens returned. This method has to return an iterable of
:class:`~jinja2.lexer.Token`\\s, but it doesn't have to return a
:class:`~jinja2.lexer.TokenStream`.
In the `ext` folder of the Jinja2 source distribution there is a file
called `inlinegettext.py` which implements a filter that utilizes this
method.
"""
return stream
@@ -116,8 +119,9 @@ def attr(self, name, lineno=None):
"""
return nodes.ExtensionAttribute(self.identifier, name, lineno=lineno)
def call_method(self, name, args=None, kwargs=None, dyn_args=None,
dyn_kwargs=None, lineno=None):
def call_method(
self, name, args=None, kwargs=None, dyn_args=None, dyn_kwargs=None, lineno=None
):
"""Call a method of the extension. This is a shortcut for
:meth:`attr` + :class:`jinja2.nodes.Call`.
"""
@@ -125,13 +129,19 @@ def call_method(self, name, args=None, kwargs=None, dyn_args=None,
args = []
if kwargs is None:
kwargs = []
return nodes.Call(self.attr(name, lineno=lineno), args, kwargs,
dyn_args, dyn_kwargs, lineno=lineno)
return nodes.Call(
self.attr(name, lineno=lineno),
args,
kwargs,
dyn_args,
dyn_kwargs,
lineno=lineno,
)
@contextfunction
def _gettext_alias(__context, *args, **kwargs):
return __context.call(__context.resolve('gettext'), *args, **kwargs)
return __context.call(__context.resolve("gettext"), *args, **kwargs)
def _make_new_gettext(func):
@@ -140,24 +150,31 @@ def gettext(__context, __string, **variables):
rv = __context.call(func, __string)
if __context.eval_ctx.autoescape:
rv = Markup(rv)
# Always treat as a format string, even if there are no
# variables. This makes translation strings more consistent
# and predictable. This requires escaping
return rv % variables
return gettext
def _make_new_ngettext(func):
@contextfunction
def ngettext(__context, __singular, __plural, __num, **variables):
variables.setdefault('num', __num)
variables.setdefault("num", __num)
rv = __context.call(func, __singular, __plural, __num)
if __context.eval_ctx.autoescape:
rv = Markup(rv)
# Always treat as a format string, see gettext comment above.
return rv % variables
return ngettext
class InternationalizationExtension(Extension):
"""This extension adds gettext support to Jinja2."""
tags = set(['trans'])
"""This extension adds gettext support to Jinja."""
tags = {"trans"}
# TODO: the i18n extension is currently reevaluating values in a few
# situations. Take this example:
@@ -168,30 +185,28 @@ class InternationalizationExtension(Extension):
def __init__(self, environment):
Extension.__init__(self, environment)
environment.globals['_'] = _gettext_alias
environment.globals["_"] = _gettext_alias
environment.extend(
install_gettext_translations=self._install,
install_null_translations=self._install_null,
install_gettext_callables=self._install_callables,
uninstall_gettext_translations=self._uninstall,
extract_translations=self._extract,
newstyle_gettext=False
newstyle_gettext=False,
)
def _install(self, translations, newstyle=None):
gettext = getattr(translations, 'ugettext', None)
gettext = getattr(translations, "ugettext", None)
if gettext is None:
gettext = translations.gettext
ngettext = getattr(translations, 'ungettext', None)
ngettext = getattr(translations, "ungettext", None)
if ngettext is None:
ngettext = translations.ngettext
self._install_callables(gettext, ngettext, newstyle)
def _install_null(self, newstyle=None):
self._install_callables(
lambda x: x,
lambda s, p, n: (n != 1 and (p,) or (s,))[0],
newstyle
lambda x: x, lambda s, p, n: (n != 1 and (p,) or (s,))[0], newstyle
)
def _install_callables(self, gettext, ngettext, newstyle=None):
@@ -200,13 +215,10 @@ def _install_callables(self, gettext, ngettext, newstyle=None):
if self.environment.newstyle_gettext:
gettext = _make_new_gettext(gettext)
ngettext = _make_new_ngettext(ngettext)
self.environment.globals.update(
gettext=gettext,
ngettext=ngettext
)
self.environment.globals.update(gettext=gettext, ngettext=ngettext)
def _uninstall(self, translations):
for key in 'gettext', 'ngettext':
for key in "gettext", "ngettext":
self.environment.globals.pop(key, None)
def _extract(self, source, gettext_functions=GETTEXT_FUNCTIONS):
@@ -226,41 +238,44 @@ def parse(self, parser):
plural_expr_assignment = None
variables = {}
trimmed = None
while parser.stream.current.type != 'block_end':
while parser.stream.current.type != "block_end":
if variables:
parser.stream.expect('comma')
parser.stream.expect("comma")
# skip colon for python compatibility
if parser.stream.skip_if('colon'):
if parser.stream.skip_if("colon"):
break
name = parser.stream.expect('name')
name = parser.stream.expect("name")
if name.value in variables:
parser.fail('translatable variable %r defined twice.' %
name.value, name.lineno,
exc=TemplateAssertionError)
parser.fail(
"translatable variable %r defined twice." % name.value,
name.lineno,
exc=TemplateAssertionError,
)
# expressions
if parser.stream.current.type == 'assign':
if parser.stream.current.type == "assign":
next(parser.stream)
variables[name.value] = var = parser.parse_expression()
elif trimmed is None and name.value in ('trimmed', 'notrimmed'):
trimmed = name.value == 'trimmed'
elif trimmed is None and name.value in ("trimmed", "notrimmed"):
trimmed = name.value == "trimmed"
continue
else:
variables[name.value] = var = nodes.Name(name.value, 'load')
variables[name.value] = var = nodes.Name(name.value, "load")
if plural_expr is None:
if isinstance(var, nodes.Call):
plural_expr = nodes.Name('_trans', 'load')
plural_expr = nodes.Name("_trans", "load")
variables[name.value] = plural_expr
plural_expr_assignment = nodes.Assign(
nodes.Name('_trans', 'store'), var)
nodes.Name("_trans", "store"), var
)
else:
plural_expr = var
num_called_num = name.value == 'num'
num_called_num = name.value == "num"
parser.stream.expect('block_end')
parser.stream.expect("block_end")
plural = None
have_plural = False
@@ -271,22 +286,24 @@ def parse(self, parser):
if singular_names:
referenced.update(singular_names)
if plural_expr is None:
plural_expr = nodes.Name(singular_names[0], 'load')
num_called_num = singular_names[0] == 'num'
plural_expr = nodes.Name(singular_names[0], "load")
num_called_num = singular_names[0] == "num"
# if we have a pluralize block, we parse that too
if parser.stream.current.test('name:pluralize'):
if parser.stream.current.test("name:pluralize"):
have_plural = True
next(parser.stream)
if parser.stream.current.type != 'block_end':
name = parser.stream.expect('name')
if parser.stream.current.type != "block_end":
name = parser.stream.expect("name")
if name.value not in variables:
parser.fail('unknown variable %r for pluralization' %
name.value, name.lineno,
exc=TemplateAssertionError)
parser.fail(
"unknown variable %r for pluralization" % name.value,
name.lineno,
exc=TemplateAssertionError,
)
plural_expr = variables[name.value]
num_called_num = name.value == 'num'
parser.stream.expect('block_end')
num_called_num = name.value == "num"
parser.stream.expect("block_end")
plural_names, plural = self._parse_block(parser, False)
next(parser.stream)
referenced.update(plural_names)
@@ -296,88 +313,97 @@ def parse(self, parser):
# register free names as simple name expressions
for var in referenced:
if var not in variables:
variables[var] = nodes.Name(var, 'load')
variables[var] = nodes.Name(var, "load")
if not have_plural:
plural_expr = None
elif plural_expr is None:
parser.fail('pluralize without variables', lineno)
parser.fail("pluralize without variables", lineno)
if trimmed is None:
trimmed = self.environment.policies['ext.i18n.trimmed']
trimmed = self.environment.policies["ext.i18n.trimmed"]
if trimmed:
singular = self._trim_whitespace(singular)
if plural:
plural = self._trim_whitespace(plural)
node = self._make_node(singular, plural, variables, plural_expr,
bool(referenced),
num_called_num and have_plural)
node = self._make_node(
singular,
plural,
variables,
plural_expr,
bool(referenced),
num_called_num and have_plural,
)
node.set_lineno(lineno)
if plural_expr_assignment is not None:
return [plural_expr_assignment, node]
else:
return node
def _trim_whitespace(self, string, _ws_re=re.compile(r'\s*\n\s*')):
return _ws_re.sub(' ', string.strip())
def _trim_whitespace(self, string, _ws_re=_ws_re):
return _ws_re.sub(" ", string.strip())
def _parse_block(self, parser, allow_pluralize):
"""Parse until the next block tag with a given name."""
referenced = []
buf = []
while 1:
if parser.stream.current.type == 'data':
buf.append(parser.stream.current.value.replace('%', '%%'))
if parser.stream.current.type == "data":
buf.append(parser.stream.current.value.replace("%", "%%"))
next(parser.stream)
elif parser.stream.current.type == 'variable_begin':
elif parser.stream.current.type == "variable_begin":
next(parser.stream)
name = parser.stream.expect('name').value
name = parser.stream.expect("name").value
referenced.append(name)
buf.append('%%(%s)s' % name)
parser.stream.expect('variable_end')
elif parser.stream.current.type == 'block_begin':
buf.append("%%(%s)s" % name)
parser.stream.expect("variable_end")
elif parser.stream.current.type == "block_begin":
next(parser.stream)
if parser.stream.current.test('name:endtrans'):
if parser.stream.current.test("name:endtrans"):
break
elif parser.stream.current.test('name:pluralize'):
elif parser.stream.current.test("name:pluralize"):
if allow_pluralize:
break
parser.fail('a translatable section can have only one '
'pluralize section')
parser.fail('control structures in translatable sections are '
'not allowed')
parser.fail(
"a translatable section can have only one pluralize section"
)
parser.fail(
"control structures in translatable sections are not allowed"
)
elif parser.stream.eos:
parser.fail('unclosed translation block')
parser.fail("unclosed translation block")
else:
assert False, 'internal parser error'
raise RuntimeError("internal parser error")
return referenced, concat(buf)
def _make_node(self, singular, plural, variables, plural_expr,
vars_referenced, num_called_num):
def _make_node(
self, singular, plural, variables, plural_expr, vars_referenced, num_called_num
):
"""Generates a useful node from the data provided."""
# no variables referenced? no need to escape for old style
# gettext invocations only if there are vars.
if not vars_referenced and not self.environment.newstyle_gettext:
singular = singular.replace('%%', '%')
singular = singular.replace("%%", "%")
if plural:
plural = plural.replace('%%', '%')
plural = plural.replace("%%", "%")
# singular only:
if plural_expr is None:
gettext = nodes.Name('gettext', 'load')
node = nodes.Call(gettext, [nodes.Const(singular)],
[], None, None)
gettext = nodes.Name("gettext", "load")
node = nodes.Call(gettext, [nodes.Const(singular)], [], None, None)
# singular and plural
else:
ngettext = nodes.Name('ngettext', 'load')
node = nodes.Call(ngettext, [
nodes.Const(singular),
nodes.Const(plural),
plural_expr
], [], None, None)
ngettext = nodes.Name("ngettext", "load")
node = nodes.Call(
ngettext,
[nodes.Const(singular), nodes.Const(plural), plural_expr],
[],
None,
None,
)
# in case newstyle gettext is used, the method is powerful
# enough to handle the variable expansion and autoescape
@@ -386,7 +412,7 @@ def _make_node(self, singular, plural, variables, plural_expr,
for key, value in iteritems(variables):
# the function adds that later anyways in case num was
# called num, so just skip it.
if num_called_num and key == 'num':
if num_called_num and key == "num":
continue
node.kwargs.append(nodes.Keyword(key, value))
@@ -396,18 +422,24 @@ def _make_node(self, singular, plural, variables, plural_expr,
# environment with autoescaping turned on
node = nodes.MarkSafeIfAutoescape(node)
if variables:
node = nodes.Mod(node, nodes.Dict([
nodes.Pair(nodes.Const(key), value)
for key, value in variables.items()
]))
node = nodes.Mod(
node,
nodes.Dict(
[
nodes.Pair(nodes.Const(key), value)
for key, value in variables.items()
]
),
)
return nodes.Output([node])
class ExprStmtExtension(Extension):
"""Adds a `do` tag to Jinja2 that works like the print statement just
"""Adds a `do` tag to Jinja that works like the print statement just
that it doesn't print the return value.
"""
tags = set(['do'])
tags = set(["do"])
def parse(self, parser):
node = nodes.ExprStmt(lineno=next(parser.stream).lineno)
@@ -417,11 +449,12 @@ def parse(self, parser):
class LoopControlExtension(Extension):
"""Adds break and continue to the template engine."""
tags = set(['break', 'continue'])
tags = set(["break", "continue"])
def parse(self, parser):
token = next(parser.stream)
if token.value == 'break':
if token.value == "break":
return nodes.Break(lineno=token.lineno)
return nodes.Continue(lineno=token.lineno)
@@ -434,8 +467,50 @@ class AutoEscapeExtension(Extension):
pass
def extract_from_ast(node, gettext_functions=GETTEXT_FUNCTIONS,
babel_style=True):
class DebugExtension(Extension):
"""A ``{% debug %}`` tag that dumps the available variables,
filters, and tests.
.. code-block:: html+jinja
<pre>{% debug %}</pre>
.. code-block:: text
{'context': {'cycler': <class 'jinja2.utils.Cycler'>,
...,
'namespace': <class 'jinja2.utils.Namespace'>},
'filters': ['abs', 'attr', 'batch', 'capitalize', 'center', 'count', 'd',
..., 'urlencode', 'urlize', 'wordcount', 'wordwrap', 'xmlattr'],
'tests': ['!=', '<', '<=', '==', '>', '>=', 'callable', 'defined',
..., 'odd', 'sameas', 'sequence', 'string', 'undefined', 'upper']}
.. versionadded:: 2.11.0
"""
tags = {"debug"}
def parse(self, parser):
lineno = parser.stream.expect("name:debug").lineno
context = ContextReference()
result = self.call_method("_render", [context], lineno=lineno)
return nodes.Output([result], lineno=lineno)
def _render(self, context):
result = {
"context": context.get_all(),
"filters": sorted(self.environment.filters.keys()),
"tests": sorted(self.environment.tests.keys()),
}
# Set the depth since the intent is to show the top few names.
if version_info[:2] >= (3, 4):
return pprint.pformat(result, depth=3, compact=True)
else:
return pprint.pformat(result, depth=3)
def extract_from_ast(node, gettext_functions=GETTEXT_FUNCTIONS, babel_style=True):
"""Extract localizable strings from the given template node. Per
default this function returns matches in babel style that means non string
parameters as well as keyword arguments are returned as `None`. This
@@ -471,19 +546,20 @@ def extract_from_ast(node, gettext_functions=GETTEXT_FUNCTIONS,
extraction interface or extract comments yourself.
"""
for node in node.find_all(nodes.Call):
if not isinstance(node.node, nodes.Name) or \
node.node.name not in gettext_functions:
if (
not isinstance(node.node, nodes.Name)
or node.node.name not in gettext_functions
):
continue
strings = []
for arg in node.args:
if isinstance(arg, nodes.Const) and \
isinstance(arg.value, string_types):
if isinstance(arg, nodes.Const) and isinstance(arg.value, string_types):
strings.append(arg.value)
else:
strings.append(None)
for arg in node.kwargs:
for _ in node.kwargs:
strings.append(None)
if node.dyn_args is not None:
strings.append(None)
@@ -517,9 +593,10 @@ def __init__(self, tokens, comment_tags):
def find_backwards(self, offset):
try:
for _, token_type, token_value in \
reversed(self.tokens[self.offset:offset]):
if token_type in ('comment', 'linecomment'):
for _, token_type, token_value in reversed(
self.tokens[self.offset : offset]
):
if token_type in ("comment", "linecomment"):
try:
prefix, comment = token_value.split(None, 1)
except ValueError:
@@ -533,7 +610,7 @@ def find_backwards(self, offset):
def find_comments(self, lineno):
if not self.comment_tags or self.last_lineno > lineno:
return []
for idx, (token_lineno, _, _) in enumerate(self.tokens[self.offset:]):
for idx, (token_lineno, _, _) in enumerate(self.tokens[self.offset :]):
if token_lineno > lineno:
return self.find_backwards(self.offset + idx)
return self.find_backwards(len(self.tokens))
@@ -545,7 +622,7 @@ def babel_extract(fileobj, keywords, comment_tags, options):
.. versionchanged:: 2.3
Basic support for translation comments was added. If `comment_tags`
is now set to a list of keywords for extraction, the extractor will
try to find the best preceeding comment that begins with one of the
try to find the best preceding comment that begins with one of the
keywords. For best results, make sure to not have more than one
gettext call in one line of code and the matching comment in the
same line or the line before.
@@ -568,7 +645,7 @@ def babel_extract(fileobj, keywords, comment_tags, options):
(comments will be empty currently)
"""
extensions = set()
for extension in options.get('extensions', '').split(','):
for extension in options.get("extensions", "").split(","):
extension = extension.strip()
if not extension:
continue
@@ -577,38 +654,37 @@ def babel_extract(fileobj, keywords, comment_tags, options):
extensions.add(InternationalizationExtension)
def getbool(options, key, default=False):
return options.get(key, str(default)).lower() in \
('1', 'on', 'yes', 'true')
return options.get(key, str(default)).lower() in ("1", "on", "yes", "true")
silent = getbool(options, 'silent', True)
silent = getbool(options, "silent", True)
environment = Environment(
options.get('block_start_string', BLOCK_START_STRING),
options.get('block_end_string', BLOCK_END_STRING),
options.get('variable_start_string', VARIABLE_START_STRING),
options.get('variable_end_string', VARIABLE_END_STRING),
options.get('comment_start_string', COMMENT_START_STRING),
options.get('comment_end_string', COMMENT_END_STRING),
options.get('line_statement_prefix') or LINE_STATEMENT_PREFIX,
options.get('line_comment_prefix') or LINE_COMMENT_PREFIX,
getbool(options, 'trim_blocks', TRIM_BLOCKS),
getbool(options, 'lstrip_blocks', LSTRIP_BLOCKS),
options.get("block_start_string", BLOCK_START_STRING),
options.get("block_end_string", BLOCK_END_STRING),
options.get("variable_start_string", VARIABLE_START_STRING),
options.get("variable_end_string", VARIABLE_END_STRING),
options.get("comment_start_string", COMMENT_START_STRING),
options.get("comment_end_string", COMMENT_END_STRING),
options.get("line_statement_prefix") or LINE_STATEMENT_PREFIX,
options.get("line_comment_prefix") or LINE_COMMENT_PREFIX,
getbool(options, "trim_blocks", TRIM_BLOCKS),
getbool(options, "lstrip_blocks", LSTRIP_BLOCKS),
NEWLINE_SEQUENCE,
getbool(options, 'keep_trailing_newline', KEEP_TRAILING_NEWLINE),
getbool(options, "keep_trailing_newline", KEEP_TRAILING_NEWLINE),
frozenset(extensions),
cache_size=0,
auto_reload=False
auto_reload=False,
)
if getbool(options, 'trimmed'):
environment.policies['ext.i18n.trimmed'] = True
if getbool(options, 'newstyle_gettext'):
if getbool(options, "trimmed"):
environment.policies["ext.i18n.trimmed"] = True
if getbool(options, "newstyle_gettext"):
environment.newstyle_gettext = True
source = fileobj.read().decode(options.get('encoding', 'utf-8'))
source = fileobj.read().decode(options.get("encoding", "utf-8"))
try:
node = environment.parse(source)
tokens = list(environment.lex(environment.preprocess(source)))
except TemplateSyntaxError as e:
except TemplateSyntaxError:
if not silent:
raise
# skip templates with syntax errors
@@ -625,3 +701,4 @@ def getbool(options, key, default=False):
loopcontrols = LoopControlExtension
with_ = WithExtension
autoescape = AutoEscapeExtension
debug = DebugExtension
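The new DebugExtension is opt-in like the other bundled extensions and is registered under the jinja2.ext.debug alias added at the bottom of the module. A minimal sketch of enabling it; the surrounding environment setup is illustrative:

    from jinja2 import Environment

    # "jinja2.ext.debug" resolves to the DebugExtension registered above.
    env = Environment(extensions=["jinja2.ext.debug"])

    # {% debug %} renders a pprint'ed summary of the current context plus the
    # names of all registered filters and tests.
    print(env.from_string("{% debug %}").render(answer=42))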

File diff suppressed because it is too large.


@@ -1,11 +1,10 @@
from jinja2.visitor import NodeVisitor
from jinja2._compat import iteritems
from ._compat import iteritems
from .visitor import NodeVisitor
VAR_LOAD_PARAMETER = 'param'
VAR_LOAD_RESOLVE = 'resolve'
VAR_LOAD_ALIAS = 'alias'
VAR_LOAD_UNDEFINED = 'undefined'
VAR_LOAD_PARAMETER = "param"
VAR_LOAD_RESOLVE = "resolve"
VAR_LOAD_ALIAS = "alias"
VAR_LOAD_UNDEFINED = "undefined"
def find_symbols(nodes, parent_symbols=None):
@@ -23,7 +22,6 @@ def symbols_for_node(node, parent_symbols=None):
class Symbols(object):
def __init__(self, parent=None, level=None):
if level is None:
if parent is None:
@@ -41,7 +39,7 @@ def analyze_node(self, node, **kwargs):
visitor.visit(node, **kwargs)
def _define_ref(self, name, load=None):
ident = 'l_%d_%s' % (self.level, name)
ident = "l_%d_%s" % (self.level, name)
self.refs[name] = ident
if load is not None:
self.loads[ident] = load
@@ -62,8 +60,10 @@ def find_ref(self, name):
def ref(self, name):
rv = self.find_ref(name)
if rv is None:
raise AssertionError('Tried to resolve a name to a reference that '
'was unknown to the frame (%r)' % name)
raise AssertionError(
"Tried to resolve a name to a reference that "
"was unknown to the frame (%r)" % name
)
return rv
def copy(self):
@@ -118,7 +118,7 @@ def branch_update(self, branch_symbols):
if branch_count == len(branch_symbols):
continue
target = self.find_ref(name)
assert target is not None, 'should not happen'
assert target is not None, "should not happen"
if self.parent is not None:
outer_target = self.parent.find_ref(name)
@@ -149,7 +149,6 @@ def dump_param_targets(self):
class RootVisitor(NodeVisitor):
def __init__(self, symbols):
self.sym_visitor = FrameSymbolVisitor(symbols)
@@ -157,35 +156,39 @@ def _simple_visit(self, node, **kwargs):
for child in node.iter_child_nodes():
self.sym_visitor.visit(child)
visit_Template = visit_Block = visit_Macro = visit_FilterBlock = \
visit_Scope = visit_If = visit_ScopedEvalContextModifier = \
_simple_visit
visit_Template = (
visit_Block
) = (
visit_Macro
) = (
visit_FilterBlock
) = visit_Scope = visit_If = visit_ScopedEvalContextModifier = _simple_visit
def visit_AssignBlock(self, node, **kwargs):
for child in node.body:
self.sym_visitor.visit(child)
def visit_CallBlock(self, node, **kwargs):
for child in node.iter_child_nodes(exclude=('call',)):
for child in node.iter_child_nodes(exclude=("call",)):
self.sym_visitor.visit(child)
def visit_OverlayScope(self, node, **kwargs):
for child in node.body:
self.sym_visitor.visit(child)
def visit_For(self, node, for_branch='body', **kwargs):
if for_branch == 'body':
def visit_For(self, node, for_branch="body", **kwargs):
if for_branch == "body":
self.sym_visitor.visit(node.target, store_as_param=True)
branch = node.body
elif for_branch == 'else':
elif for_branch == "else":
branch = node.else_
elif for_branch == 'test':
elif for_branch == "test":
self.sym_visitor.visit(node.target, store_as_param=True)
if node.test is not None:
self.sym_visitor.visit(node.test)
return
else:
raise RuntimeError('Unknown for branch')
raise RuntimeError("Unknown for branch")
for item in branch or ():
self.sym_visitor.visit(item)
@@ -196,8 +199,9 @@ def visit_With(self, node, **kwargs):
self.sym_visitor.visit(child)
def generic_visit(self, node, *args, **kwargs):
raise NotImplementedError('Cannot find symbols for %r' %
node.__class__.__name__)
raise NotImplementedError(
"Cannot find symbols for %r" % node.__class__.__name__
)
class FrameSymbolVisitor(NodeVisitor):
@@ -208,11 +212,11 @@ def __init__(self, symbols):
def visit_Name(self, node, store_as_param=False, **kwargs):
"""All assignments to names go through this function."""
if store_as_param or node.ctx == 'param':
if store_as_param or node.ctx == "param":
self.symbols.declare_parameter(node.name)
elif node.ctx == 'store':
elif node.ctx == "store":
self.symbols.store(node.name)
elif node.ctx == 'load':
elif node.ctx == "load":
self.symbols.load(node.name)
def visit_NSRef(self, node, **kwargs):
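The symbol-tracking classes above are compiler internals, but their effect can be inspected directly. A rough sketch, not a public API, shown only to illustrate how names end up as l_<level>_<name> identifiers with a load action attached (the printed shapes are approximate):

    from jinja2 import Environment
    from jinja2.idtracking import symbols_for_node

    env = Environment()
    ast = env.parse("{% set x = 1 %}{{ x }}{{ y }}")

    symbols = symbols_for_node(ast)
    print(symbols.refs)   # roughly {'x': 'l_0_x', 'y': 'l_0_y'}
    print(symbols.loads)  # l_0_y is loaded via ('resolve', 'y')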

File diff suppressed because it is too large.


@@ -1,22 +1,21 @@
# -*- coding: utf-8 -*-
"""
jinja2.loaders
~~~~~~~~~~~~~~
Jinja loader classes.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""API and implementations for loading templates from different data
sources.
"""
import os
import sys
import weakref
from types import ModuleType
from os import path
from hashlib import sha1
from jinja2.exceptions import TemplateNotFound
from jinja2.utils import open_if_exists, internalcode
from jinja2._compat import string_types, iteritems
from os import path
from types import ModuleType
from ._compat import abc
from ._compat import fspath
from ._compat import iteritems
from ._compat import string_types
from .exceptions import TemplateNotFound
from .utils import internalcode
from .utils import open_if_exists
def split_template_path(template):
@@ -24,12 +23,14 @@ def split_template_path(template):
'..' in the path it will raise a `TemplateNotFound` error.
"""
pieces = []
for piece in template.split('/'):
if path.sep in piece \
or (path.altsep and path.altsep in piece) or \
piece == path.pardir:
for piece in template.split("/"):
if (
path.sep in piece
or (path.altsep and path.altsep in piece)
or piece == path.pardir
):
raise TemplateNotFound(template)
elif piece and piece != '.':
elif piece and piece != ".":
pieces.append(piece)
return pieces
@@ -86,15 +87,16 @@ def get_source(self, environment, template):
the template will be reloaded.
"""
if not self.has_source_access:
raise RuntimeError('%s cannot provide access to the source' %
self.__class__.__name__)
raise RuntimeError(
"%s cannot provide access to the source" % self.__class__.__name__
)
raise TemplateNotFound(template)
def list_templates(self):
"""Iterates over all templates. If the loader does not support that
it should raise a :exc:`TypeError` which is the default behavior.
"""
raise TypeError('this loader cannot iterate over all templates')
raise TypeError("this loader cannot iterate over all templates")
@internalcode
def load(self, environment, name, globals=None):
@@ -131,8 +133,9 @@ def load(self, environment, name, globals=None):
bucket.code = code
bcc.set_bucket(bucket)
return environment.template_class.from_code(environment, code,
globals, uptodate)
return environment.template_class.from_code(
environment, code, globals, uptodate
)
class FileSystemLoader(BaseLoader):
@@ -153,14 +156,20 @@ class FileSystemLoader(BaseLoader):
>>> loader = FileSystemLoader('/path/to/templates', followlinks=True)
.. versionchanged:: 2.8+
The *followlinks* parameter was added.
.. versionchanged:: 2.8
The ``followlinks`` parameter was added.
"""
def __init__(self, searchpath, encoding='utf-8', followlinks=False):
if isinstance(searchpath, string_types):
def __init__(self, searchpath, encoding="utf-8", followlinks=False):
if not isinstance(searchpath, abc.Iterable) or isinstance(
searchpath, string_types
):
searchpath = [searchpath]
self.searchpath = list(searchpath)
# In Python 3.5, os.path.join doesn't support Path. This can be
# simplified to list(searchpath) when Python 3.5 is dropped.
self.searchpath = [fspath(p) for p in searchpath]
self.encoding = encoding
self.followlinks = followlinks
@@ -183,6 +192,7 @@ def uptodate():
return path.getmtime(filename) == mtime
except OSError:
return False
return contents, filename, uptodate
raise TemplateNotFound(template)
@@ -190,12 +200,14 @@ def list_templates(self):
found = set()
for searchpath in self.searchpath:
walk_dir = os.walk(searchpath, followlinks=self.followlinks)
for dirpath, dirnames, filenames in walk_dir:
for dirpath, _, filenames in walk_dir:
for filename in filenames:
template = os.path.join(dirpath, filename) \
[len(searchpath):].strip(os.path.sep) \
.replace(os.path.sep, '/')
if template[:2] == './':
template = (
os.path.join(dirpath, filename)[len(searchpath) :]
.strip(os.path.sep)
.replace(os.path.sep, "/")
)
if template[:2] == "./":
template = template[2:]
if template not in found:
found.add(template)
@@ -217,10 +229,11 @@ class PackageLoader(BaseLoader):
from the file system and not a zip file.
"""
def __init__(self, package_name, package_path='templates',
encoding='utf-8'):
from pkg_resources import DefaultProvider, ResourceManager, \
get_provider
def __init__(self, package_name, package_path="templates", encoding="utf-8"):
from pkg_resources import DefaultProvider
from pkg_resources import get_provider
from pkg_resources import ResourceManager
provider = get_provider(package_name)
self.encoding = encoding
self.manager = ResourceManager()
@@ -230,14 +243,17 @@ def __init__(self, package_name, package_path='templates',
def get_source(self, environment, template):
pieces = split_template_path(template)
p = '/'.join((self.package_path,) + tuple(pieces))
p = "/".join((self.package_path,) + tuple(pieces))
if not self.provider.has_resource(p):
raise TemplateNotFound(template)
filename = uptodate = None
if self.filesystem_bound:
filename = self.provider.get_resource_filename(self.manager, p)
mtime = path.getmtime(filename)
def uptodate():
try:
return path.getmtime(filename) == mtime
@@ -249,19 +265,24 @@ def uptodate():
def list_templates(self):
path = self.package_path
if path[:2] == './':
if path[:2] == "./":
path = path[2:]
elif path == '.':
path = ''
elif path == ".":
path = ""
offset = len(path)
results = []
def _walk(path):
for filename in self.provider.resource_listdir(path):
fullname = path + '/' + filename
fullname = path + "/" + filename
if self.provider.resource_isdir(fullname):
_walk(fullname)
else:
results.append(fullname[offset:].lstrip('/'))
results.append(fullname[offset:].lstrip("/"))
_walk(path)
results.sort()
return results
@@ -334,7 +355,7 @@ class PrefixLoader(BaseLoader):
by loading ``'app2/index.html'`` the file from the second.
"""
def __init__(self, mapping, delimiter='/'):
def __init__(self, mapping, delimiter="/"):
self.mapping = mapping
self.delimiter = delimiter
@@ -434,19 +455,20 @@ class ModuleLoader(BaseLoader):
has_source_access = False
def __init__(self, path):
package_name = '_jinja2_module_templates_%x' % id(self)
package_name = "_jinja2_module_templates_%x" % id(self)
# create a fake module that looks for the templates in the
# path given.
mod = _TemplateModule(package_name)
if isinstance(path, string_types):
path = [path]
else:
path = list(path)
mod.__path__ = path
sys.modules[package_name] = weakref.proxy(mod,
lambda x: sys.modules.pop(package_name, None))
if not isinstance(path, abc.Iterable) or isinstance(path, string_types):
path = [path]
mod.__path__ = [fspath(p) for p in path]
sys.modules[package_name] = weakref.proxy(
mod, lambda x: sys.modules.pop(package_name, None)
)
# the only strong reference, the sys.modules entry is weak
# so that the garbage collector can remove it once the
@@ -456,20 +478,20 @@ def __init__(self, path):
@staticmethod
def get_template_key(name):
return 'tmpl_' + sha1(name.encode('utf-8')).hexdigest()
return "tmpl_" + sha1(name.encode("utf-8")).hexdigest()
@staticmethod
def get_module_filename(name):
return ModuleLoader.get_template_key(name) + '.py'
return ModuleLoader.get_template_key(name) + ".py"
@internalcode
def load(self, environment, name, globals=None):
key = self.get_template_key(name)
module = '%s.%s' % (self.package_name, key)
module = "%s.%s" % (self.package_name, key)
mod = getattr(self.module, module, None)
if mod is None:
try:
mod = __import__(module, None, None, ['root'])
mod = __import__(module, None, None, ["root"])
except ImportError:
raise TemplateNotFound(name)
@@ -478,4 +500,5 @@ def load(self, environment, name, globals=None):
sys.modules.pop(module, None)
return environment.template_class.from_module_dict(
environment, mod.__dict__, globals)
environment, mod.__dict__, globals
)
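With the abc.Iterable/fspath() handling above, FileSystemLoader (and ModuleLoader) accept a single path, a list of paths, or os.PathLike objects such as pathlib.Path. A minimal sketch; the directory and template names are placeholders:

    from pathlib import Path

    from jinja2 import Environment, FileSystemLoader

    # A str, a pathlib.Path, or any mix of the two in a list now works.
    loader = FileSystemLoader([Path("templates"), "more/templates"])
    env = Environment(loader=loader)

    print(env.loader.list_templates())     # whatever files exist under the search paths
    page = env.get_template("index.html")  # placeholder template name
    print(page.render(title="Hello"))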


@@ -1,25 +1,18 @@
# -*- coding: utf-8 -*-
"""Functions that expose information about templates that might be
interesting for introspection.
"""
jinja2.meta
~~~~~~~~~~~
This module implements various functions that exposes information about
templates that might be interesting for various kinds of applications.
:copyright: (c) 2017 by the Jinja Team, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from jinja2 import nodes
from jinja2.compiler import CodeGenerator
from jinja2._compat import string_types, iteritems
from . import nodes
from ._compat import iteritems
from ._compat import string_types
from .compiler import CodeGenerator
class TrackingCodeGenerator(CodeGenerator):
"""We abuse the code generator for introspection."""
def __init__(self, environment):
CodeGenerator.__init__(self, environment, '<introspection>',
'<introspection>')
CodeGenerator.__init__(self, environment, "<introspection>", "<introspection>")
self.undeclared_identifiers = set()
def write(self, x):
@@ -29,7 +22,7 @@ def enter_frame(self, frame):
"""Remember all undeclared identifiers."""
CodeGenerator.enter_frame(self, frame)
for _, (action, param) in iteritems(frame.symbols.loads):
if action == 'resolve':
if action == "resolve" and param not in self.environment.globals:
self.undeclared_identifiers.add(param)
@@ -72,8 +65,9 @@ def find_referenced_templates(ast):
This function is useful for dependency tracking. For example if you want
to rebuild parts of the website after a layout template has changed.
"""
for node in ast.find_all((nodes.Extends, nodes.FromImport, nodes.Import,
nodes.Include)):
for node in ast.find_all(
(nodes.Extends, nodes.FromImport, nodes.Import, nodes.Include)
):
if not isinstance(node.template, nodes.Const):
# a tuple with some non consts in there
if isinstance(node.template, (nodes.Tuple, nodes.List)):
@@ -96,8 +90,9 @@ def find_referenced_templates(ast):
# a tuple or list (latter *should* not happen) made of consts,
# yield the consts that are strings. We could warn here for
# non string values
elif isinstance(node, nodes.Include) and \
isinstance(node.template.value, (tuple, list)):
elif isinstance(node, nodes.Include) and isinstance(
node.template.value, (tuple, list)
):
for template_name in node.template.value:
if isinstance(template_name, string_types):
yield template_name
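Together with the globals check added to enter_frame() above, jinja2.meta no longer reports names that resolve to environment globals (range, dict, lipsum, ...) as undeclared. A small sketch, assuming the upgraded Jinja2 is installed:

    from jinja2 import Environment, meta

    env = Environment()

    ast = env.parse("{{ title }} {{ range(3) }}")
    # range is an environment global, so only title is reported as undeclared.
    print(meta.find_undeclared_variables(ast))        # {'title'}

    ast = env.parse("{% extends 'base.html' %}")
    print(list(meta.find_referenced_templates(ast)))  # ['base.html']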


@@ -1,19 +1,23 @@
import sys
from ast import literal_eval
from itertools import islice, chain
from jinja2 import nodes
from jinja2._compat import text_type
from jinja2.compiler import CodeGenerator, has_safe_repr
from jinja2.environment import Environment, Template
from jinja2.utils import concat, escape
from itertools import chain
from itertools import islice
from . import nodes
from ._compat import text_type
from .compiler import CodeGenerator
from .compiler import has_safe_repr
from .environment import Environment
from .environment import Template
def native_concat(nodes):
"""Return a native Python type from the list of compiled nodes. If the
result is a single node, its value is returned. Otherwise, the nodes are
concatenated as strings. If the result can be parsed with
:func:`ast.literal_eval`, the parsed value is returned. Otherwise, the
string is returned.
"""Return a native Python type from the list of compiled nodes. If
the result is a single node, its value is returned. Otherwise, the
nodes are concatenated as strings. If the result can be parsed with
:func:`ast.literal_eval`, the parsed value is returned. Otherwise,
the string is returned.
:param nodes: Iterable of nodes to concatenate.
"""
head = list(islice(nodes, 2))
@@ -21,200 +25,70 @@ def native_concat(nodes):
return None
if len(head) == 1:
out = head[0]
raw = head[0]
else:
out = u''.join([text_type(v) for v in chain(head, nodes)])
raw = u"".join([text_type(v) for v in chain(head, nodes)])
try:
return literal_eval(out)
return literal_eval(raw)
except (ValueError, SyntaxError, MemoryError):
return out
return raw
class NativeCodeGenerator(CodeGenerator):
"""A code generator which avoids injecting ``to_string()`` calls around the
internal code Jinja uses to render templates.
"""A code generator which renders Python types by not adding
``to_string()`` around output nodes.
"""
def visit_Output(self, node, frame):
"""Same as :meth:`CodeGenerator.visit_Output`, but do not call
``to_string`` on output nodes in generated code.
"""
if self.has_known_extends and frame.require_output_check:
return
@staticmethod
def _default_finalize(value):
return value
finalize = self.environment.finalize
finalize_context = getattr(finalize, 'contextfunction', False)
finalize_eval = getattr(finalize, 'evalcontextfunction', False)
finalize_env = getattr(finalize, 'environmentfunction', False)
def _output_const_repr(self, group):
return repr(u"".join([text_type(v) for v in group]))
if finalize is not None:
if finalize_context or finalize_eval:
const_finalize = None
elif finalize_env:
def const_finalize(x):
return finalize(self.environment, x)
else:
const_finalize = finalize
else:
def const_finalize(x):
return x
def _output_child_to_const(self, node, frame, finalize):
const = node.as_const(frame.eval_ctx)
# If we are inside a frame that requires output checking, we do so.
outdent_later = False
if not has_safe_repr(const):
raise nodes.Impossible()
if frame.require_output_check:
self.writeline('if parent_template is None:')
self.indent()
outdent_later = True
if isinstance(node, nodes.TemplateData):
return const
# Try to evaluate as many chunks as possible into a static string at
# compile time.
body = []
return finalize.const(const)
for child in node.nodes:
try:
if const_finalize is None:
raise nodes.Impossible()
def _output_child_pre(self, node, frame, finalize):
if finalize.src is not None:
self.write(finalize.src)
const = child.as_const(frame.eval_ctx)
if not has_safe_repr(const):
raise nodes.Impossible()
except nodes.Impossible:
body.append(child)
continue
# the frame can't be volatile here, because otherwise the as_const
# function would raise an Impossible exception at that point
try:
if frame.eval_ctx.autoescape:
if hasattr(const, '__html__'):
const = const.__html__()
else:
const = escape(const)
const = const_finalize(const)
except Exception:
# if something goes wrong here we evaluate the node at runtime
# for easier debugging
body.append(child)
continue
if body and isinstance(body[-1], list):
body[-1].append(const)
else:
body.append([const])
# if we have less than 3 nodes or a buffer we yield or extend/append
if len(body) < 3 or frame.buffer is not None:
if frame.buffer is not None:
# for one item we append, for more we extend
if len(body) == 1:
self.writeline('%s.append(' % frame.buffer)
else:
self.writeline('%s.extend((' % frame.buffer)
self.indent()
for item in body:
if isinstance(item, list):
val = repr(native_concat(item))
if frame.buffer is None:
self.writeline('yield ' + val)
else:
self.writeline(val + ',')
else:
if frame.buffer is None:
self.writeline('yield ', item)
else:
self.newline(item)
close = 0
if finalize is not None:
self.write('environment.finalize(')
if finalize_context:
self.write('context, ')
close += 1
self.visit(item, frame)
if close > 0:
self.write(')' * close)
if frame.buffer is not None:
self.write(',')
if frame.buffer is not None:
# close the open parentheses
self.outdent()
self.writeline(len(body) == 1 and ')' or '))')
# otherwise we create a format string as this is faster in that case
else:
format = []
arguments = []
for item in body:
if isinstance(item, list):
format.append(native_concat(item).replace('%', '%%'))
else:
format.append('%s')
arguments.append(item)
self.writeline('yield ')
self.write(repr(concat(format)) + ' % (')
self.indent()
for argument in arguments:
self.newline(argument)
close = 0
if finalize is not None:
self.write('environment.finalize(')
if finalize_context:
self.write('context, ')
elif finalize_eval:
self.write('context.eval_ctx, ')
elif finalize_env:
self.write('environment, ')
close += 1
self.visit(argument, frame)
self.write(')' * close + ', ')
self.outdent()
self.writeline(')')
if outdent_later:
self.outdent()
class NativeTemplate(Template):
def render(self, *args, **kwargs):
"""Render the template to produce a native Python type. If the result
is a single node, its value is returned. Otherwise, the nodes are
concatenated as strings. If the result can be parsed with
:func:`ast.literal_eval`, the parsed value is returned. Otherwise, the
string is returned.
"""
vars = dict(*args, **kwargs)
try:
return native_concat(self.root_render_func(self.new_context(vars)))
except Exception:
exc_info = sys.exc_info()
return self.environment.handle_exception(exc_info, True)
def _output_child_post(self, node, frame, finalize):
if finalize.src is not None:
self.write(")")
class NativeEnvironment(Environment):
"""An environment that renders templates to native Python types."""
code_generator_class = NativeCodeGenerator
template_class = NativeTemplate
class NativeTemplate(Template):
environment_class = NativeEnvironment
def render(self, *args, **kwargs):
"""Render the template to produce a native Python type. If the
result is a single node, its value is returned. Otherwise, the
nodes are concatenated as strings. If the result can be parsed
with :func:`ast.literal_eval`, the parsed value is returned.
Otherwise, the string is returned.
"""
vars = dict(*args, **kwargs)
try:
return native_concat(self.root_render_func(self.new_context(vars)))
except Exception:
return self.environment.handle_exception()
NativeEnvironment.template_class = NativeTemplate
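The rewritten nativetypes module keeps the same public surface: NativeEnvironment and NativeTemplate render to native Python values instead of strings, concatenating multi-chunk output and passing it through ast.literal_eval() when possible. A minimal sketch:

    from jinja2.nativetypes import NativeEnvironment

    env = NativeEnvironment()

    result = env.from_string("{{ x + y }}").render(x=4, y=2)
    print(result, type(result))  # 6 <class 'int'>

    result = env.from_string("[{{ a }}, {{ b }}]").render(a=1, b=2)
    print(result, type(result))  # [1, 2] <class 'list'>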


@@ -1,54 +1,39 @@
# -*- coding: utf-8 -*-
"""AST nodes generated by the parser for the compiler. Also provides
some node tree helper functions used by the parser and compiler in order
to normalize nodes.
"""
jinja2.nodes
~~~~~~~~~~~~
This module implements additional nodes derived from the ast base node.
It also provides some node tree helper functions like `in_lineno` and
`get_nodes` used by the parser and translator in order to normalize
python and jinja nodes.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
import types
import operator
from collections import deque
from jinja2.utils import Markup
from jinja2._compat import izip, with_metaclass, text_type, PY2
from markupsafe import Markup
#: the types we support for context functions
_context_function_types = (types.FunctionType, types.MethodType)
from ._compat import izip
from ._compat import PY2
from ._compat import text_type
from ._compat import with_metaclass
_binop_to_func = {
'*': operator.mul,
'/': operator.truediv,
'//': operator.floordiv,
'**': operator.pow,
'%': operator.mod,
'+': operator.add,
'-': operator.sub
"*": operator.mul,
"/": operator.truediv,
"//": operator.floordiv,
"**": operator.pow,
"%": operator.mod,
"+": operator.add,
"-": operator.sub,
}
_uaop_to_func = {
'not': operator.not_,
'+': operator.pos,
'-': operator.neg
}
_uaop_to_func = {"not": operator.not_, "+": operator.pos, "-": operator.neg}
_cmpop_to_func = {
'eq': operator.eq,
'ne': operator.ne,
'gt': operator.gt,
'gteq': operator.ge,
'lt': operator.lt,
'lteq': operator.le,
'in': lambda a, b: a in b,
'notin': lambda a, b: a not in b
"eq": operator.eq,
"ne": operator.ne,
"gt": operator.gt,
"gteq": operator.ge,
"lt": operator.lt,
"lteq": operator.le,
"in": lambda a, b: a in b,
"notin": lambda a, b: a not in b,
}
@@ -61,16 +46,16 @@ class NodeType(type):
inheritance. fields and attributes from the parent class are
automatically forwarded to the child."""
def __new__(cls, name, bases, d):
for attr in 'fields', 'attributes':
def __new__(mcs, name, bases, d):
for attr in "fields", "attributes":
storage = []
storage.extend(getattr(bases[0], attr, ()))
storage.extend(d.get(attr, ()))
assert len(bases) == 1, 'multiple inheritance not allowed'
assert len(storage) == len(set(storage)), 'layout conflict'
assert len(bases) == 1, "multiple inheritance not allowed"
assert len(storage) == len(set(storage)), "layout conflict"
d[attr] = tuple(storage)
d.setdefault('abstract', False)
return type.__new__(cls, name, bases, d)
d.setdefault("abstract", False)
return type.__new__(mcs, name, bases, d)
class EvalContext(object):
@@ -97,15 +82,17 @@ def revert(self, old):
def get_eval_context(node, ctx):
if ctx is None:
if node.environment is None:
raise RuntimeError('if no eval context is passed, the '
'node must have an attached '
'environment.')
raise RuntimeError(
"if no eval context is passed, the "
"node must have an attached "
"environment."
)
return EvalContext(node.environment)
return ctx
class Node(with_metaclass(NodeType, object)):
"""Baseclass for all Jinja2 nodes. There are a number of nodes available
"""Baseclass for all Jinja nodes. There are a number of nodes available
of different types. There are four major types:
- :class:`Stmt`: statements
@@ -120,30 +107,32 @@ class Node(with_metaclass(NodeType, object)):
The `environment` attribute is set at the end of the parsing process for
all nodes automatically.
"""
fields = ()
attributes = ('lineno', 'environment')
attributes = ("lineno", "environment")
abstract = True
def __init__(self, *fields, **attributes):
if self.abstract:
raise TypeError('abstract nodes are not instanciable')
raise TypeError("abstract nodes are not instantiable")
if fields:
if len(fields) != len(self.fields):
if not self.fields:
raise TypeError('%r takes 0 arguments' %
self.__class__.__name__)
raise TypeError('%r takes 0 or %d argument%s' % (
self.__class__.__name__,
len(self.fields),
len(self.fields) != 1 and 's' or ''
))
raise TypeError("%r takes 0 arguments" % self.__class__.__name__)
raise TypeError(
"%r takes 0 or %d argument%s"
% (
self.__class__.__name__,
len(self.fields),
len(self.fields) != 1 and "s" or "",
)
)
for name, arg in izip(self.fields, fields):
setattr(self, name, arg)
for attr in self.attributes:
setattr(self, attr, attributes.pop(attr, None))
if attributes:
raise TypeError('unknown attribute %r' %
next(iter(attributes)))
raise TypeError("unknown attribute %r" % next(iter(attributes)))
def iter_fields(self, exclude=None, only=None):
"""This method iterates over all fields that are defined and yields
@@ -153,9 +142,11 @@ def iter_fields(self, exclude=None, only=None):
should be sets or tuples of field names.
"""
for name in self.fields:
if (exclude is only is None) or \
(exclude is not None and name not in exclude) or \
(only is not None and name in only):
if (
(exclude is only is None)
or (exclude is not None and name not in exclude)
or (only is not None and name in only)
):
try:
yield name, getattr(self, name)
except AttributeError:
@@ -166,7 +157,7 @@ def iter_child_nodes(self, exclude=None, only=None):
over all fields and yields the values of they are nodes. If the value
of a field is a list all the nodes in that list are returned.
"""
for field, item in self.iter_fields(exclude, only):
for _, item in self.iter_fields(exclude, only):
if isinstance(item, list):
for n in item:
if isinstance(n, Node):
@@ -200,7 +191,7 @@ def set_ctx(self, ctx):
todo = deque([self])
while todo:
node = todo.popleft()
if 'ctx' in node.fields:
if "ctx" in node.fields:
node.ctx = ctx
todo.extend(node.iter_child_nodes())
return self
@@ -210,7 +201,7 @@ def set_lineno(self, lineno, override=False):
todo = deque([self])
while todo:
node = todo.popleft()
if 'lineno' in node.attributes:
if "lineno" in node.attributes:
if node.lineno is None or override:
node.lineno = lineno
todo.extend(node.iter_child_nodes())
@@ -226,8 +217,9 @@ def set_environment(self, environment):
return self
def __eq__(self, other):
return type(self) is type(other) and \
tuple(self.iter_fields()) == tuple(other.iter_fields())
return type(self) is type(other) and tuple(self.iter_fields()) == tuple(
other.iter_fields()
)
def __ne__(self, other):
return not self.__eq__(other)
@@ -236,10 +228,9 @@ def __ne__(self, other):
__hash__ = object.__hash__
def __repr__(self):
return '%s(%s)' % (
return "%s(%s)" % (
self.__class__.__name__,
', '.join('%s=%r' % (arg, getattr(self, arg, None)) for
arg in self.fields)
", ".join("%s=%r" % (arg, getattr(self, arg, None)) for arg in self.fields),
)
def dump(self):
@@ -248,37 +239,39 @@ def _dump(node):
buf.append(repr(node))
return
buf.append('nodes.%s(' % node.__class__.__name__)
buf.append("nodes.%s(" % node.__class__.__name__)
if not node.fields:
buf.append(')')
buf.append(")")
return
for idx, field in enumerate(node.fields):
if idx:
buf.append(', ')
buf.append(", ")
value = getattr(node, field)
if isinstance(value, list):
buf.append('[')
buf.append("[")
for idx, item in enumerate(value):
if idx:
buf.append(', ')
buf.append(", ")
_dump(item)
buf.append(']')
buf.append("]")
else:
_dump(value)
buf.append(')')
buf.append(")")
buf = []
_dump(self)
return ''.join(buf)
return "".join(buf)
class Stmt(Node):
"""Base node for all statements."""
abstract = True
class Helper(Node):
"""Nodes that exist in a specific context only."""
abstract = True
@@ -286,19 +279,22 @@ class Template(Node):
"""Node that represents a template. This must be the outermost node that
is passed to the compiler.
"""
fields = ('body',)
fields = ("body",)
class Output(Stmt):
"""A node that holds multiple expressions which are then printed out.
This is used both for the `print` statement and the regular template data.
"""
fields = ('nodes',)
fields = ("nodes",)
class Extends(Stmt):
"""Represents an extends statement."""
fields = ('template',)
fields = ("template",)
class For(Stmt):
@@ -309,12 +305,14 @@ class For(Stmt):
For filtered nodes an expression can be stored as `test`, otherwise `None`.
"""
fields = ('target', 'iter', 'body', 'else_', 'test', 'recursive')
fields = ("target", "iter", "body", "else_", "test", "recursive")
class If(Stmt):
"""If `test` is true, `body` is rendered, else `else_`."""
fields = ('test', 'body', 'elif_', 'else_')
fields = ("test", "body", "elif_", "else_")
class Macro(Stmt):
@@ -322,19 +320,22 @@ class Macro(Stmt):
arguments and `defaults` a list of defaults if there are any. `body` is
a list of nodes for the macro body.
"""
fields = ('name', 'args', 'defaults', 'body')
fields = ("name", "args", "defaults", "body")
class CallBlock(Stmt):
"""Like a macro without a name but a call instead. `call` is called with
the unnamed macro as `caller` argument this node holds.
"""
fields = ('call', 'args', 'defaults', 'body')
fields = ("call", "args", "defaults", "body")
class FilterBlock(Stmt):
"""Node for filter sections."""
fields = ('body', 'filter')
fields = ("body", "filter")
class With(Stmt):
@@ -343,22 +344,26 @@ class With(Stmt):
.. versionadded:: 2.9.3
"""
fields = ('targets', 'values', 'body')
fields = ("targets", "values", "body")
class Block(Stmt):
"""A node that represents a block."""
fields = ('name', 'body', 'scoped')
fields = ("name", "body", "scoped")
class Include(Stmt):
"""A node that represents the include tag."""
fields = ('template', 'with_context', 'ignore_missing')
fields = ("template", "with_context", "ignore_missing")
class Import(Stmt):
"""A node that represents the import tag."""
fields = ('template', 'target', 'with_context')
fields = ("template", "target", "with_context")
class FromImport(Stmt):
@@ -372,26 +377,31 @@ class FromImport(Stmt):
The list of names may contain tuples if aliases are wanted.
"""
fields = ('template', 'names', 'with_context')
fields = ("template", "names", "with_context")
class ExprStmt(Stmt):
"""A statement that evaluates an expression and discards the result."""
fields = ('node',)
fields = ("node",)
class Assign(Stmt):
"""Assigns an expression to a target."""
fields = ('target', 'node')
fields = ("target", "node")
class AssignBlock(Stmt):
"""Assigns a block to a target."""
fields = ('target', 'filter', 'body')
fields = ("target", "filter", "body")
class Expr(Node):
"""Baseclass for all expressions."""
abstract = True
def as_const(self, eval_ctx=None):
@@ -414,15 +424,18 @@ def can_assign(self):
class BinExpr(Expr):
"""Baseclass for all binary expressions."""
fields = ('left', 'right')
fields = ("left", "right")
operator = None
abstract = True
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
# intercepted operators cannot be folded at compile time
if self.environment.sandboxed and \
self.operator in self.environment.intercepted_binops:
if (
self.environment.sandboxed
and self.operator in self.environment.intercepted_binops
):
raise Impossible()
f = _binop_to_func[self.operator]
try:
@@ -433,15 +446,18 @@ def as_const(self, eval_ctx=None):
class UnaryExpr(Expr):
"""Baseclass for all unary expressions."""
fields = ('node',)
fields = ("node",)
operator = None
abstract = True
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
# intercepted operators cannot be folded at compile time
if self.environment.sandboxed and \
self.operator in self.environment.intercepted_unops:
if (
self.environment.sandboxed
and self.operator in self.environment.intercepted_unops
):
raise Impossible()
f = _uaop_to_func[self.operator]
try:
@@ -458,16 +474,17 @@ class Name(Expr):
- `load`: load that name
- `param`: like `store` but if the name was defined as function parameter.
"""
fields = ('name', 'ctx')
fields = ("name", "ctx")
def can_assign(self):
return self.name not in ('true', 'false', 'none',
'True', 'False', 'None')
return self.name not in ("true", "false", "none", "True", "False", "None")
class NSRef(Expr):
"""Reference to a namespace value assignment"""
fields = ('name', 'attr')
fields = ("name", "attr")
def can_assign(self):
# We don't need any special checks here; NSRef assignments have a
@@ -479,6 +496,7 @@ def can_assign(self):
class Literal(Expr):
"""Baseclass for literals."""
abstract = True
@@ -488,14 +506,18 @@ class Const(Literal):
complex values such as lists too. Only constants with a safe
representation (objects where ``eval(repr(x)) == x`` is true).
"""
fields = ('value',)
fields = ("value",)
def as_const(self, eval_ctx=None):
rv = self.value
if PY2 and type(rv) is text_type and \
self.environment.policies['compiler.ascii_str']:
if (
PY2
and type(rv) is text_type
and self.environment.policies["compiler.ascii_str"]
):
try:
rv = rv.encode('ascii')
rv = rv.encode("ascii")
except UnicodeError:
pass
return rv
@@ -507,6 +529,7 @@ def from_untrusted(cls, value, lineno=None, environment=None):
an `Impossible` exception.
"""
from .compiler import has_safe_repr
if not has_safe_repr(value):
raise Impossible()
return cls(value, lineno=lineno, environment=environment)
@@ -514,7 +537,8 @@ def from_untrusted(cls, value, lineno=None, environment=None):
class TemplateData(Literal):
"""A constant template string."""
fields = ('data',)
fields = ("data",)
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -530,7 +554,8 @@ class Tuple(Literal):
for subscripts. Like for :class:`Name` `ctx` specifies if the tuple
is used for loading the names or storing.
"""
fields = ('items', 'ctx')
fields = ("items", "ctx")
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -545,7 +570,8 @@ def can_assign(self):
class List(Literal):
"""Any list literal such as ``[1, 2, 3]``"""
fields = ('items',)
fields = ("items",)
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -556,7 +582,8 @@ class Dict(Literal):
"""Any dict literal such as ``{1: 2, 3: 4}``. The items must be a list of
:class:`Pair` nodes.
"""
fields = ('items',)
fields = ("items",)
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -565,7 +592,8 @@ def as_const(self, eval_ctx=None):
class Pair(Helper):
"""A key, value pair for dicts."""
fields = ('key', 'value')
fields = ("key", "value")
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -574,7 +602,8 @@ def as_const(self, eval_ctx=None):
class Keyword(Helper):
"""A key, value pair for keyword arguments where key is a string."""
fields = ('key', 'value')
fields = ("key", "value")
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -585,7 +614,8 @@ class CondExpr(Expr):
"""A conditional expression (inline if expression). (``{{
foo if bar else baz }}``)
"""
fields = ('test', 'expr1', 'expr2')
fields = ("test", "expr1", "expr2")
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -626,7 +656,7 @@ class Filter(Expr):
filtered. Buffers are created by macros and filter blocks.
"""
fields = ('node', 'name', 'args', 'kwargs', 'dyn_args', 'dyn_kwargs')
fields = ("node", "name", "args", "kwargs", "dyn_args", "dyn_kwargs")
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -636,28 +666,27 @@ def as_const(self, eval_ctx=None):
# we have to be careful here because we call filter_ below.
# if this variable would be called filter, 2to3 would wrap the
# call in a list beause it is assuming we are talking about the
# call in a list because it is assuming we are talking about the
# builtin filter function here which no longer returns a list in
# python 3. because of that, do not rename filter_ to filter!
filter_ = self.environment.filters.get(self.name)
if filter_ is None or getattr(filter_, 'contextfilter', False):
if filter_ is None or getattr(filter_, "contextfilter", False) is True:
raise Impossible()
# We cannot constant handle async filters, so we need to make sure
# to not go down this path.
if (
eval_ctx.environment.is_async
and getattr(filter_, 'asyncfiltervariant', False)
if eval_ctx.environment.is_async and getattr(
filter_, "asyncfiltervariant", False
):
raise Impossible()
args, kwargs = args_as_const(self, eval_ctx)
args.insert(0, self.node.as_const(eval_ctx))
if getattr(filter_, 'evalcontextfilter', False):
if getattr(filter_, "evalcontextfilter", False) is True:
args.insert(0, eval_ctx)
elif getattr(filter_, 'environmentfilter', False):
elif getattr(filter_, "environmentfilter", False) is True:
args.insert(0, self.environment)
try:
@@ -671,7 +700,7 @@ class Test(Expr):
rest of the fields are the same as for :class:`Call`.
"""
fields = ('node', 'name', 'args', 'kwargs', 'dyn_args', 'dyn_kwargs')
fields = ("node", "name", "args", "kwargs", "dyn_args", "dyn_kwargs")
def as_const(self, eval_ctx=None):
test = self.environment.tests.get(self.name)
@@ -696,20 +725,23 @@ class Call(Expr):
node for dynamic positional (``*args``) or keyword (``**kwargs``)
arguments.
"""
fields = ('node', 'args', 'kwargs', 'dyn_args', 'dyn_kwargs')
fields = ("node", "args", "kwargs", "dyn_args", "dyn_kwargs")
class Getitem(Expr):
"""Get an attribute or item from an expression and prefer the item."""
fields = ('node', 'arg', 'ctx')
fields = ("node", "arg", "ctx")
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
if self.ctx != 'load':
if self.ctx != "load":
raise Impossible()
try:
return self.environment.getitem(self.node.as_const(eval_ctx),
self.arg.as_const(eval_ctx))
return self.environment.getitem(
self.node.as_const(eval_ctx), self.arg.as_const(eval_ctx)
)
except Exception:
raise Impossible()
@@ -721,15 +753,15 @@ class Getattr(Expr):
"""Get an attribute or item from an expression that is a ascii-only
bytestring and prefer the attribute.
"""
fields = ('node', 'attr', 'ctx')
fields = ("node", "attr", "ctx")
def as_const(self, eval_ctx=None):
if self.ctx != 'load':
if self.ctx != "load":
raise Impossible()
try:
eval_ctx = get_eval_context(self, eval_ctx)
return self.environment.getattr(self.node.as_const(eval_ctx),
self.attr)
return self.environment.getattr(self.node.as_const(eval_ctx), self.attr)
except Exception:
raise Impossible()
@@ -741,14 +773,17 @@ class Slice(Expr):
"""Represents a slice object. This must only be used as argument for
:class:`Subscript`.
"""
fields = ('start', 'stop', 'step')
fields = ("start", "stop", "step")
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
def const(obj):
if obj is None:
return None
return obj.as_const(eval_ctx)
return slice(const(self.start), const(self.stop), const(self.step))
@@ -756,82 +791,103 @@ class Concat(Expr):
"""Concatenates the list of expressions provided after converting them to
unicode.
"""
fields = ('nodes',)
fields = ("nodes",)
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
return ''.join(text_type(x.as_const(eval_ctx)) for x in self.nodes)
return "".join(text_type(x.as_const(eval_ctx)) for x in self.nodes)
class Compare(Expr):
"""Compares an expression with some other expressions. `ops` must be a
list of :class:`Operand`\\s.
"""
fields = ('expr', 'ops')
fields = ("expr", "ops")
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
result = value = self.expr.as_const(eval_ctx)
try:
for op in self.ops:
new_value = op.expr.as_const(eval_ctx)
result = _cmpop_to_func[op.op](value, new_value)
if not result:
return False
value = new_value
except Exception:
raise Impossible()
return result
class Operand(Helper):
"""Holds an operator and an expression."""
fields = ('op', 'expr')
fields = ("op", "expr")
if __debug__:
Operand.__doc__ += '\nThe following operators are available: ' + \
', '.join(sorted('``%s``' % x for x in set(_binop_to_func) |
set(_uaop_to_func) | set(_cmpop_to_func)))
Operand.__doc__ += "\nThe following operators are available: " + ", ".join(
sorted(
"``%s``" % x
for x in set(_binop_to_func) | set(_uaop_to_func) | set(_cmpop_to_func)
)
)
class Mul(BinExpr):
"""Multiplies the left with the right node."""
operator = '*'
operator = "*"
class Div(BinExpr):
"""Divides the left by the right node."""
operator = '/'
operator = "/"
class FloorDiv(BinExpr):
"""Divides the left by the right node and truncates conver the
result into an integer by truncating.
"""
operator = '//'
operator = "//"
class Add(BinExpr):
"""Add the left to the right node."""
operator = '+'
operator = "+"
class Sub(BinExpr):
"""Subtract the right from the left node."""
operator = '-'
operator = "-"
class Mod(BinExpr):
"""Left modulo right."""
operator = '%'
operator = "%"
class Pow(BinExpr):
"""Left to the power of right."""
operator = '**'
operator = "**"
class And(BinExpr):
"""Short circuited AND."""
operator = 'and'
operator = "and"
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -840,7 +896,8 @@ def as_const(self, eval_ctx=None):
class Or(BinExpr):
"""Short circuited OR."""
operator = 'or'
operator = "or"
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -849,17 +906,20 @@ def as_const(self, eval_ctx=None):
class Not(UnaryExpr):
"""Negate the expression."""
operator = 'not'
operator = "not"
class Neg(UnaryExpr):
"""Make the expression negative."""
operator = '-'
operator = "-"
class Pos(UnaryExpr):
"""Make the expression positive (noop for most expressions)"""
operator = '+'
operator = "+"
# Helpers for extensions
@@ -869,7 +929,8 @@ class EnvironmentAttribute(Expr):
"""Loads an attribute from the environment object. This is useful for
extensions that want to call a callback stored on the environment.
"""
fields = ('name',)
fields = ("name",)
class ExtensionAttribute(Expr):
@@ -879,7 +940,8 @@ class ExtensionAttribute(Expr):
This node is usually constructed by calling the
:meth:`~jinja2.ext.Extension.attr` method on an extension.
"""
fields = ('identifier', 'name')
fields = ("identifier", "name")
class ImportedName(Expr):
@@ -888,7 +950,8 @@ class ImportedName(Expr):
function from the cgi module on evaluation. Imports are optimized by the
compiler so there is no need to assign them to local variables.
"""
fields = ('importname',)
fields = ("importname",)
class InternalName(Expr):
@@ -898,16 +961,20 @@ class InternalName(Expr):
a new identifier for you. This identifier is not available from the
template and is not threated specially by the compiler.
"""
fields = ('name',)
fields = ("name",)
def __init__(self):
raise TypeError('Can\'t create internal names. Use the '
'`free_identifier` method on a parser.')
raise TypeError(
"Can't create internal names. Use the "
"`free_identifier` method on a parser."
)
class MarkSafe(Expr):
"""Mark the wrapped expression as safe (wrap it as `Markup`)."""
fields = ('expr',)
fields = ("expr",)
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -920,7 +987,8 @@ class MarkSafeIfAutoescape(Expr):
.. versionadded:: 2.5
"""
fields = ('expr',)
fields = ("expr",)
def as_const(self, eval_ctx=None):
eval_ctx = get_eval_context(self, eval_ctx)
@@ -942,6 +1010,20 @@ class ContextReference(Expr):
Assign(Name('foo', ctx='store'),
Getattr(ContextReference(), 'name'))
This is basically equivalent to using the
:func:`~jinja2.contextfunction` decorator when using the
high-level API, which causes a reference to the context to be passed
as the first argument to a function.
"""
class DerivedContextReference(Expr):
"""Return the current template context including locals. Behaves
exactly like :class:`ContextReference`, but includes local
variables, such as from a ``for`` loop.
.. versionadded:: 2.11
"""
@@ -955,7 +1037,8 @@ class Break(Stmt):
class Scope(Stmt):
"""An artificial scope."""
fields = ('body',)
fields = ("body",)
class OverlayScope(Stmt):
@@ -971,7 +1054,8 @@ class OverlayScope(Stmt):
.. versionadded:: 2.10
"""
fields = ('context', 'body')
fields = ("context", "body")
class EvalContextModifier(Stmt):
@@ -982,7 +1066,8 @@ class EvalContextModifier(Stmt):
EvalContextModifier(options=[Keyword('autoescape', Const(True))])
"""
fields = ('options',)
fields = ("options",)
class ScopedEvalContextModifier(EvalContextModifier):
@@ -990,10 +1075,14 @@ class ScopedEvalContextModifier(EvalContextModifier):
:class:`EvalContextModifier` but will only modify the
:class:`~jinja2.nodes.EvalContext` for nodes in the :attr:`body`.
"""
fields = ('body',)
fields = ("body",)
# make sure nobody creates custom nodes
def _failing_new(*args, **kwargs):
raise TypeError('can\'t create custom node types')
NodeType.__new__ = staticmethod(_failing_new); del _failing_new
raise TypeError("can't create custom node types")
NodeType.__new__ = staticmethod(_failing_new)
del _failing_new
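
A small sketch (not part of the diff) of the node API touched above; the values are illustrative. It shows that fields are positional arguments while attributes such as lineno are keyword-only, and that replacing NodeType.__new__ with _failing_new blocks custom node classes.

from jinja2 import nodes

const = nodes.Const(42, lineno=1)
print(const)             # Const(value=42)
print(const.as_const())  # 42

# NodeType.__new__ is swapped for _failing_new, so subclassing is rejected
try:
    class MyStmt(nodes.Stmt):
        fields = ("value",)
except TypeError as exc:
    print(exc)  # can't create custom node types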

View File

@@ -1,23 +1,15 @@
# -*- coding: utf-8 -*-
"""The optimizer tries to constant fold expressions and modify the AST
in place so that it should be faster to evaluate.
Because the AST does not contain all the scoping information and the
compiler has to find that out, we cannot do all the optimizations we
want. For example, loop unrolling doesn't work because unrolled loops
would have a different scope. The solution would be a second syntax tree
that stored the scoping rules.
"""
jinja2.optimizer
~~~~~~~~~~~~~~~~
The jinja optimizer is currently trying to constant fold a few expressions
and modify the AST in place so that it should be easier to evaluate it.
Because the AST does not contain all the scoping information and the
compiler has to find that out, we cannot do all the optimizations we
want. For example loop unrolling doesn't work because unrolled loops would
have a different scoping.
The solution would be a second syntax tree that has the scoping rules stored.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD.
"""
from jinja2 import nodes
from jinja2.visitor import NodeTransformer
from . import nodes
from .visitor import NodeTransformer
def optimize(node, environment):
@@ -28,22 +20,22 @@ def optimize(node, environment):
class Optimizer(NodeTransformer):
def __init__(self, environment):
self.environment = environment
def fold(self, node, eval_ctx=None):
"""Do constant folding."""
node = self.generic_visit(node)
try:
return nodes.Const.from_untrusted(node.as_const(eval_ctx),
lineno=node.lineno,
environment=self.environment)
except nodes.Impossible:
return node
def generic_visit(self, node, *args, **kwargs):
node = super(Optimizer, self).generic_visit(node, *args, **kwargs)
visit_Add = visit_Sub = visit_Mul = visit_Div = visit_FloorDiv = \
visit_Pow = visit_Mod = visit_And = visit_Or = visit_Pos = visit_Neg = \
visit_Not = visit_Compare = visit_Getitem = visit_Getattr = visit_Call = \
visit_Filter = visit_Test = visit_CondExpr = fold
del fold
# Do constant folding. Some other nodes besides Expr have
# as_const, but folding them causes errors later on.
if isinstance(node, nodes.Expr):
try:
return nodes.Const.from_untrusted(
node.as_const(args[0] if args else None),
lineno=node.lineno,
environment=self.environment,
)
except nodes.Impossible:
pass
return node
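
For reference (not part of the diff), the constant folding reworked above can be observed directly; the printed repr in the comment is approximate.

from jinja2 import Environment
from jinja2.optimizer import optimize

env = Environment()
tree = env.parse("{{ 1 + 2 * 3 }}")

# generic_visit folds Expr nodes via as_const, so the arithmetic
# collapses to a single Const before code generation
print(optimize(tree, env))
# e.g. Template(body=[Output(nodes=[Const(value=7)])])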

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,76 +1,66 @@
# -*- coding: utf-8 -*-
"""A sandbox layer that ensures unsafe operations cannot be performed.
Useful when the template itself comes from an untrusted source.
"""
jinja2.sandbox
~~~~~~~~~~~~~~
Adds a sandbox layer to Jinja as it was the default behavior in the old
Jinja 1 releases. This sandbox is slightly different from Jinja 1 as the
default behavior is easier to use.
The behavior can be changed by subclassing the environment.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD.
"""
import types
import operator
import sys
from jinja2.environment import Environment
from jinja2.exceptions import SecurityError
from jinja2._compat import string_types, PY2
from jinja2.utils import Markup
from markupsafe import EscapeFormatter
import types
import warnings
from collections import deque
from string import Formatter
if sys.version_info >= (3, 3):
from collections.abc import Mapping
else:
from collections import Mapping
from markupsafe import EscapeFormatter
from markupsafe import Markup
from ._compat import abc
from ._compat import PY2
from ._compat import range_type
from ._compat import string_types
from .environment import Environment
from .exceptions import SecurityError
#: maximum number of items a range may produce
MAX_RANGE = 100000
#: attributes of function objects that are considered unsafe.
if PY2:
UNSAFE_FUNCTION_ATTRIBUTES = set(['func_closure', 'func_code', 'func_dict',
'func_defaults', 'func_globals'])
UNSAFE_FUNCTION_ATTRIBUTES = {
"func_closure",
"func_code",
"func_dict",
"func_defaults",
"func_globals",
}
else:
# On versions > python 2 the special attributes on functions are gone,
# but they remain on methods and generators for whatever reason.
UNSAFE_FUNCTION_ATTRIBUTES = set()
#: unsafe method attributes. function attributes are unsafe for methods too
UNSAFE_METHOD_ATTRIBUTES = set(['im_class', 'im_func', 'im_self'])
UNSAFE_METHOD_ATTRIBUTES = {"im_class", "im_func", "im_self"}
#: unsafe generator attirbutes.
UNSAFE_GENERATOR_ATTRIBUTES = set(['gi_frame', 'gi_code'])
#: unsafe generator attributes.
UNSAFE_GENERATOR_ATTRIBUTES = {"gi_frame", "gi_code"}
#: unsafe attributes on coroutines
UNSAFE_COROUTINE_ATTRIBUTES = set(['cr_frame', 'cr_code'])
UNSAFE_COROUTINE_ATTRIBUTES = {"cr_frame", "cr_code"}
#: unsafe attributes on async generators
UNSAFE_ASYNC_GENERATOR_ATTRIBUTES = set(['ag_code', 'ag_frame'])
import warnings
UNSAFE_ASYNC_GENERATOR_ATTRIBUTES = {"ag_code", "ag_frame"}
# make sure we don't warn in python 2.6 about stuff we don't care about
warnings.filterwarnings('ignore', 'the sets module', DeprecationWarning,
module='jinja2.sandbox')
from collections import deque
warnings.filterwarnings(
"ignore", "the sets module", DeprecationWarning, module=__name__
)
_mutable_set_types = (set,)
_mutable_mapping_types = (dict,)
_mutable_sequence_types = (list,)
# on python 2.x we can register the user collection types
try:
from UserDict import UserDict, DictMixin
from UserList import UserList
_mutable_mapping_types += (UserDict, DictMixin)
_mutable_set_types += (UserList,)
except ImportError:
@@ -79,39 +69,60 @@
# if sets is still available, register the mutable set from there as well
try:
from sets import Set
_mutable_set_types += (Set,)
except ImportError:
pass
#: register Python 2.6 abstract base classes
if sys.version_info >= (3, 3):
from collections.abc import MutableSet, MutableMapping, MutableSequence
else:
from collections import MutableSet, MutableMapping, MutableSequence
_mutable_set_types += (MutableSet,)
_mutable_mapping_types += (MutableMapping,)
_mutable_sequence_types += (MutableSequence,)
_mutable_set_types += (abc.MutableSet,)
_mutable_mapping_types += (abc.MutableMapping,)
_mutable_sequence_types += (abc.MutableSequence,)
_mutable_spec = (
(_mutable_set_types, frozenset([
'add', 'clear', 'difference_update', 'discard', 'pop', 'remove',
'symmetric_difference_update', 'update'
])),
(_mutable_mapping_types, frozenset([
'clear', 'pop', 'popitem', 'setdefault', 'update'
])),
(_mutable_sequence_types, frozenset([
'append', 'reverse', 'insert', 'sort', 'extend', 'remove'
])),
(deque, frozenset([
'append', 'appendleft', 'clear', 'extend', 'extendleft', 'pop',
'popleft', 'remove', 'rotate'
]))
(
_mutable_set_types,
frozenset(
[
"add",
"clear",
"difference_update",
"discard",
"pop",
"remove",
"symmetric_difference_update",
"update",
]
),
),
(
_mutable_mapping_types,
frozenset(["clear", "pop", "popitem", "setdefault", "update"]),
),
(
_mutable_sequence_types,
frozenset(["append", "reverse", "insert", "sort", "extend", "remove"]),
),
(
deque,
frozenset(
[
"append",
"appendleft",
"clear",
"extend",
"extendleft",
"pop",
"popleft",
"remove",
"rotate",
]
),
),
)
class _MagicFormatMapping(Mapping):
class _MagicFormatMapping(abc.Mapping):
"""This class implements a dummy wrapper to fix a bug in the Python
standard library for string formatting.
@@ -125,7 +136,7 @@ def __init__(self, args, kwargs):
self._last_index = 0
def __getitem__(self, key):
if key == '':
if key == "":
idx = self._last_index
self._last_index += 1
try:
@@ -143,9 +154,9 @@ def __len__(self):
def inspect_format_method(callable):
if not isinstance(callable, (types.MethodType,
types.BuiltinMethodType)) or \
callable.__name__ != 'format':
if not isinstance(
callable, (types.MethodType, types.BuiltinMethodType)
) or callable.__name__ not in ("format", "format_map"):
return None
obj = callable.__self__
if isinstance(obj, string_types):
@@ -156,10 +167,14 @@ def safe_range(*args):
"""A range that can't generate ranges with a length of more than
MAX_RANGE items.
"""
rng = range(*args)
rng = range_type(*args)
if len(rng) > MAX_RANGE:
raise OverflowError('range too big, maximum size for range is %d' %
MAX_RANGE)
raise OverflowError(
"Range too big. The sandbox blocks ranges larger than"
" MAX_RANGE (%d)." % MAX_RANGE
)
return rng
@@ -192,24 +207,25 @@ def is_internal_attribute(obj, attr):
if attr in UNSAFE_FUNCTION_ATTRIBUTES:
return True
elif isinstance(obj, types.MethodType):
if attr in UNSAFE_FUNCTION_ATTRIBUTES or \
attr in UNSAFE_METHOD_ATTRIBUTES:
if attr in UNSAFE_FUNCTION_ATTRIBUTES or attr in UNSAFE_METHOD_ATTRIBUTES:
return True
elif isinstance(obj, type):
if attr == 'mro':
if attr == "mro":
return True
elif isinstance(obj, (types.CodeType, types.TracebackType, types.FrameType)):
return True
elif isinstance(obj, types.GeneratorType):
if attr in UNSAFE_GENERATOR_ATTRIBUTES:
return True
elif hasattr(types, 'CoroutineType') and isinstance(obj, types.CoroutineType):
elif hasattr(types, "CoroutineType") and isinstance(obj, types.CoroutineType):
if attr in UNSAFE_COROUTINE_ATTRIBUTES:
return True
elif hasattr(types, 'AsyncGeneratorType') and isinstance(obj, types.AsyncGeneratorType):
elif hasattr(types, "AsyncGeneratorType") and isinstance(
obj, types.AsyncGeneratorType
):
if attr in UNSAFE_ASYNC_GENERATOR_ATTRIBUTES:
return True
return attr.startswith('__')
return attr.startswith("__")
def modifies_known_mutable(obj, attr):
@@ -250,28 +266,26 @@ class SandboxedEnvironment(Environment):
raised. However also other exceptions may occur during the rendering so
the caller has to ensure that all exceptions are caught.
"""
sandboxed = True
#: default callback table for the binary operators. A copy of this is
#: available on each instance of a sandboxed environment as
#: :attr:`binop_table`
default_binop_table = {
'+': operator.add,
'-': operator.sub,
'*': operator.mul,
'/': operator.truediv,
'//': operator.floordiv,
'**': operator.pow,
'%': operator.mod
"+": operator.add,
"-": operator.sub,
"*": operator.mul,
"/": operator.truediv,
"//": operator.floordiv,
"**": operator.pow,
"%": operator.mod,
}
#: default callback table for the unary operators. A copy of this is
#: available on each instance of a sandboxed environment as
#: :attr:`unop_table`
default_unop_table = {
'+': operator.pos,
'-': operator.neg
}
default_unop_table = {"+": operator.pos, "-": operator.neg}
#: a set of binary operators that should be intercepted. Each operator
#: that is added to this set (empty by default) is delegated to the
@@ -307,7 +321,7 @@ class SandboxedEnvironment(Environment):
def intercept_unop(self, operator):
"""Called during template compilation with the name of a unary
operator to check if it should be intercepted at runtime. If this
method returns `True`, :meth:`call_unop` is excuted for this unary
method returns `True`, :meth:`call_unop` is executed for this unary
operator. The default implementation of :meth:`call_unop` will use
the :attr:`unop_table` dictionary to perform the operator with the
same logic as the builtin one.
@@ -321,10 +335,9 @@ def intercept_unop(self, operator):
"""
return False
def __init__(self, *args, **kwargs):
Environment.__init__(self, *args, **kwargs)
self.globals['range'] = safe_range
self.globals["range"] = safe_range
self.binop_table = self.default_binop_table.copy()
self.unop_table = self.default_unop_table.copy()
@@ -335,7 +348,7 @@ def is_safe_attribute(self, obj, attr, value):
special attributes of internal python objects as returned by the
:func:`is_internal_attribute` function.
"""
return not (attr.startswith('_') or is_internal_attribute(obj, attr))
return not (attr.startswith("_") or is_internal_attribute(obj, attr))
def is_safe_callable(self, obj):
"""Check if an object is safely callable. Per default a function is
@@ -343,8 +356,9 @@ def is_safe_callable(self, obj):
True. Override this method to alter the behavior, but this won't
affect the `unsafe` decorator from this module.
"""
return not (getattr(obj, 'unsafe_callable', False) or
getattr(obj, 'alters_data', False))
return not (
getattr(obj, "unsafe_callable", False) or getattr(obj, "alters_data", False)
)
def call_binop(self, context, operator, left, right):
"""For intercepted binary operator calls (:meth:`intercepted_binops`)
@@ -404,13 +418,15 @@ def getattr(self, obj, attribute):
def unsafe_undefined(self, obj, attribute):
"""Return an undefined object for unsafe attributes."""
return self.undefined('access to attribute %r of %r '
'object is unsafe.' % (
attribute,
obj.__class__.__name__
), name=attribute, obj=obj, exc=SecurityError)
return self.undefined(
"access to attribute %r of %r "
"object is unsafe." % (attribute, obj.__class__.__name__),
name=attribute,
obj=obj,
exc=SecurityError,
)
def format_string(self, s, args, kwargs):
def format_string(self, s, args, kwargs, format_func=None):
"""If a format call is detected, then this is routed through this
method so that our safety sandbox can be used for it.
"""
@@ -418,20 +434,31 @@ def format_string(self, s, args, kwargs):
formatter = SandboxedEscapeFormatter(self, s.escape)
else:
formatter = SandboxedFormatter(self)
if format_func is not None and format_func.__name__ == "format_map":
if len(args) != 1 or kwargs:
raise TypeError(
"format_map() takes exactly one argument %d given"
% (len(args) + (kwargs is not None))
)
kwargs = args[0]
args = None
kwargs = _MagicFormatMapping(args, kwargs)
rv = formatter.vformat(s, args, kwargs)
return type(s)(rv)
def call(__self, __context, __obj, *args, **kwargs):
def call(__self, __context, __obj, *args, **kwargs): # noqa: B902
"""Call an object from sandboxed code."""
fmt = inspect_format_method(__obj)
if fmt is not None:
return __self.format_string(fmt, args, kwargs)
return __self.format_string(fmt, args, kwargs, __obj)
# the double prefixes are to avoid double keyword argument
# errors when proxying the call.
if not __self.is_safe_callable(__obj):
raise SecurityError('%r is not safely callable' % (__obj,))
raise SecurityError("%r is not safely callable" % (__obj,))
return __context.call(__obj, *args, **kwargs)
@@ -447,16 +474,16 @@ def is_safe_attribute(self, obj, attr, value):
return not modifies_known_mutable(obj, attr)
# This really is not a public API apparenlty.
# This really is not a public API apparently.
try:
from _string import formatter_field_name_split
except ImportError:
def formatter_field_name_split(field_name):
return field_name._formatter_field_name_split()
class SandboxedFormatterMixin(object):
def __init__(self, env):
self._env = env
@@ -470,14 +497,14 @@ def get_field(self, field_name, args, kwargs):
obj = self._env.getitem(obj, i)
return obj, first
class SandboxedFormatter(SandboxedFormatterMixin, Formatter):
class SandboxedFormatter(SandboxedFormatterMixin, Formatter):
def __init__(self, env):
SandboxedFormatterMixin.__init__(self, env)
Formatter.__init__(self)
class SandboxedEscapeFormatter(SandboxedFormatterMixin, EscapeFormatter):
class SandboxedEscapeFormatter(SandboxedFormatterMixin, EscapeFormatter):
def __init__(self, env, escape):
SandboxedFormatterMixin.__init__(self, env)
EscapeFormatter.__init__(self, escape)
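
An illustrative check (not part of the diff) of the sandbox behaviour referenced above: the range global is replaced with safe_range, which refuses ranges longer than MAX_RANGE (100000 items).

from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()

# well within MAX_RANGE: renders normally
print(env.from_string("{{ range(3) | list }}").render())  # [0, 1, 2]

# larger than MAX_RANGE: safe_range raises OverflowError during rendering
try:
    env.from_string("{{ range(1000000) | list }}").render()
except OverflowError as exc:
    print(exc)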

View File

@@ -1,29 +1,17 @@
# -*- coding: utf-8 -*-
"""
jinja2.tests
~~~~~~~~~~~~
Jinja test functions. Used with the "is" operator.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
"""Built-in template tests used with the ``is`` operator."""
import decimal
import operator
import re
import sys
from jinja2.runtime import Undefined
from jinja2._compat import text_type, string_types, integer_types
import decimal
if sys.version_info >= (3, 3):
from collections.abc import Mapping
else:
from collections import Mapping
from ._compat import abc
from ._compat import integer_types
from ._compat import string_types
from ._compat import text_type
from .runtime import Undefined
number_re = re.compile(r'^-?\d+(\.\d+)?$')
number_re = re.compile(r"^-?\d+(\.\d+)?$")
regex_type = type(number_re)
test_callable = callable
@@ -69,6 +57,48 @@ def test_none(value):
return value is None
def test_boolean(value):
"""Return true if the object is a boolean value.
.. versionadded:: 2.11
"""
return value is True or value is False
def test_false(value):
"""Return true if the object is False.
.. versionadded:: 2.11
"""
return value is False
def test_true(value):
"""Return true if the object is True.
.. versionadded:: 2.11
"""
return value is True
# NOTE: The existing 'number' test matches booleans and floats
def test_integer(value):
"""Return true if the object is an integer.
.. versionadded:: 2.11
"""
return isinstance(value, integer_types) and value is not True and value is not False
# NOTE: The existing 'number' test matches booleans and integers
def test_float(value):
"""Return true if the object is a float.
.. versionadded:: 2.11
"""
return isinstance(value, float)
def test_lower(value):
"""Return true if the variable is lowercased."""
return text_type(value).islower()
@@ -89,7 +119,7 @@ def test_mapping(value):
.. versionadded:: 2.6
"""
return isinstance(value, Mapping)
return isinstance(value, abc.Mapping)
def test_number(value):
@@ -104,7 +134,7 @@ def test_sequence(value):
try:
len(value)
value.__getitem__
except:
except Exception:
return False
return True
@@ -133,7 +163,7 @@ def test_iterable(value):
def test_escaped(value):
"""Check if the value is escaped."""
return hasattr(value, '__html__')
return hasattr(value, "__html__")
def test_in(value, seq):
@@ -145,36 +175,41 @@ def test_in(value, seq):
TESTS = {
'odd': test_odd,
'even': test_even,
'divisibleby': test_divisibleby,
'defined': test_defined,
'undefined': test_undefined,
'none': test_none,
'lower': test_lower,
'upper': test_upper,
'string': test_string,
'mapping': test_mapping,
'number': test_number,
'sequence': test_sequence,
'iterable': test_iterable,
'callable': test_callable,
'sameas': test_sameas,
'escaped': test_escaped,
'in': test_in,
'==': operator.eq,
'eq': operator.eq,
'equalto': operator.eq,
'!=': operator.ne,
'ne': operator.ne,
'>': operator.gt,
'gt': operator.gt,
'greaterthan': operator.gt,
'ge': operator.ge,
'>=': operator.ge,
'<': operator.lt,
'lt': operator.lt,
'lessthan': operator.lt,
'<=': operator.le,
'le': operator.le,
"odd": test_odd,
"even": test_even,
"divisibleby": test_divisibleby,
"defined": test_defined,
"undefined": test_undefined,
"none": test_none,
"boolean": test_boolean,
"false": test_false,
"true": test_true,
"integer": test_integer,
"float": test_float,
"lower": test_lower,
"upper": test_upper,
"string": test_string,
"mapping": test_mapping,
"number": test_number,
"sequence": test_sequence,
"iterable": test_iterable,
"callable": test_callable,
"sameas": test_sameas,
"escaped": test_escaped,
"in": test_in,
"==": operator.eq,
"eq": operator.eq,
"equalto": operator.eq,
"!=": operator.ne,
"ne": operator.ne,
">": operator.gt,
"gt": operator.gt,
"greaterthan": operator.gt,
"ge": operator.ge,
">=": operator.ge,
"<": operator.lt,
"lt": operator.lt,
"lessthan": operator.lt,
"<=": operator.le,
"le": operator.le,
}
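
A quick illustration (not part of the diff) of the tests added above in 2.11; the expressions are made up.

from jinja2 import Environment

env = Environment()
print(env.from_string("{{ 42 is integer }}").render())    # True
print(env.from_string("{{ 42 is float }}").render())      # False
print(env.from_string("{{ 4.2 is float }}").render())     # True
print(env.from_string("{{ true is boolean }}").render())  # True
print(env.from_string("{{ 0 is false }}").render())       # False (0 is not the False singleton)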

View File

@@ -1,44 +1,32 @@
# -*- coding: utf-8 -*-
"""
jinja2.utils
~~~~~~~~~~~~
Utility functions.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
import re
import json
import errno
import os
import re
import warnings
from collections import deque
from random import choice
from random import randrange
from string import ascii_letters as _letters
from string import digits as _digits
from threading import Lock
from jinja2._compat import text_type, string_types, implements_iterator, \
url_quote
from markupsafe import escape
from markupsafe import Markup
_word_split_re = re.compile(r'(\s+)')
_punctuation_re = re.compile(
'^(?P<lead>(?:%s)*)(?P<middle>.*?)(?P<trail>(?:%s)*)$' % (
'|'.join(map(re.escape, ('(', '<', '&lt;'))),
'|'.join(map(re.escape, ('.', ',', ')', '>', '\n', '&gt;')))
)
)
_simple_email_re = re.compile(r'^\S+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9._-]+$')
_striptags_re = re.compile(r'(<!--.*?-->|<[^>]*>)')
_entity_re = re.compile(r'&([^;]+);')
_letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
_digits = '0123456789'
from ._compat import abc
from ._compat import string_types
from ._compat import text_type
from ._compat import url_quote
# special singleton representing missing values for the runtime
missing = type('MissingType', (), {'__repr__': lambda x: 'missing'})()
missing = type("MissingType", (), {"__repr__": lambda x: "missing"})()
# internal code
internal_code = set()
concat = u''.join
concat = u"".join
_slash_escape = '\\/' not in json.dumps('/')
_slash_escape = "\\/" not in json.dumps("/")
def contextfunction(f):
@@ -98,24 +86,26 @@ def default(var, default=''):
return default
return var
"""
from jinja2.runtime import Undefined
from .runtime import Undefined
return isinstance(obj, Undefined)
def consume(iterable):
"""Consumes an iterable without doing anything with it."""
for event in iterable:
for _ in iterable:
pass
def clear_caches():
"""Jinja2 keeps internal caches for environments and lexers. These are
used so that Jinja2 doesn't have to recreate environments and lexers all
"""Jinja keeps internal caches for environments and lexers. These are
used so that Jinja doesn't have to recreate environments and lexers all
the time. Normally you don't have to care about that but if you are
measuring memory consumption you may want to clean the caches.
"""
from jinja2.environment import _spontaneous_environments
from jinja2.lexer import _lexer_cache
from .environment import _spontaneous_environments
from .lexer import _lexer_cache
_spontaneous_environments.clear()
_lexer_cache.clear()
@@ -132,12 +122,10 @@ def import_string(import_name, silent=False):
:return: imported object
"""
try:
if ':' in import_name:
module, obj = import_name.split(':', 1)
elif '.' in import_name:
items = import_name.split('.')
module = '.'.join(items[:-1])
obj = items[-1]
if ":" in import_name:
module, obj = import_name.split(":", 1)
elif "." in import_name:
module, _, obj = import_name.rpartition(".")
else:
return __import__(import_name)
return getattr(__import__(module, None, None, [obj]), obj)
@@ -146,15 +134,14 @@ def import_string(import_name, silent=False):
raise
def open_if_exists(filename, mode='rb'):
def open_if_exists(filename, mode="rb"):
"""Returns a file descriptor for the filename if that file exists,
otherwise `None`.
otherwise ``None``.
"""
try:
return open(filename, mode)
except IOError as e:
if e.errno not in (errno.ENOENT, errno.EISDIR, errno.EINVAL):
raise
if not os.path.isfile(filename):
return None
return open(filename, mode)
def object_type_repr(obj):
@@ -163,15 +150,19 @@ def object_type_repr(obj):
example for `None` and `Ellipsis`).
"""
if obj is None:
return 'None'
return "None"
elif obj is Ellipsis:
return 'Ellipsis'
return "Ellipsis"
cls = type(obj)
# __builtin__ in 2.x, builtins in 3.x
if obj.__class__.__module__ in ('__builtin__', 'builtins'):
name = obj.__class__.__name__
if cls.__module__ in ("__builtin__", "builtins"):
name = cls.__name__
else:
name = obj.__class__.__module__ + '.' + obj.__class__.__name__
return '%s object' % name
name = cls.__module__ + "." + cls.__name__
return "%s object" % name
def pformat(obj, verbose=False):
@@ -180,9 +171,11 @@ def pformat(obj, verbose=False):
"""
try:
from pretty import pretty
return pretty(obj, verbose=verbose)
except ImportError:
from pprint import pformat
return pformat(obj)
@@ -200,45 +193,77 @@ def urlize(text, trim_url_limit=None, rel=None, target=None):
If target is not None, a target attribute will be added to the link.
"""
trim_url = lambda x, limit=trim_url_limit: limit is not None \
and (x[:limit] + (len(x) >=limit and '...'
or '')) or x
words = _word_split_re.split(text_type(escape(text)))
rel_attr = rel and ' rel="%s"' % text_type(escape(rel)) or ''
target_attr = target and ' target="%s"' % escape(target) or ''
trim_url = (
lambda x, limit=trim_url_limit: limit is not None
and (x[:limit] + (len(x) >= limit and "..." or ""))
or x
)
words = re.split(r"(\s+)", text_type(escape(text)))
rel_attr = rel and ' rel="%s"' % text_type(escape(rel)) or ""
target_attr = target and ' target="%s"' % escape(target) or ""
for i, word in enumerate(words):
match = _punctuation_re.match(word)
head, middle, tail = "", word, ""
match = re.match(r"^([(<]|&lt;)+", middle)
if match:
lead, middle, trail = match.groups()
if middle.startswith('www.') or (
'@' not in middle and
not middle.startswith('http://') and
not middle.startswith('https://') and
len(middle) > 0 and
middle[0] in _letters + _digits and (
middle.endswith('.org') or
middle.endswith('.net') or
middle.endswith('.com')
)):
middle = '<a href="http://%s"%s%s>%s</a>' % (middle,
rel_attr, target_attr, trim_url(middle))
if middle.startswith('http://') or \
middle.startswith('https://'):
middle = '<a href="%s"%s%s>%s</a>' % (middle,
rel_attr, target_attr, trim_url(middle))
if '@' in middle and not middle.startswith('www.') and \
not ':' in middle and _simple_email_re.match(middle):
middle = '<a href="mailto:%s">%s</a>' % (middle, middle)
if lead + middle + trail != word:
words[i] = lead + middle + trail
return u''.join(words)
head = match.group()
middle = middle[match.end() :]
# Unlike lead, which is anchored to the start of the string,
# need to check that the string ends with any of the characters
# before trying to match all of them, to avoid backtracking.
if middle.endswith((")", ">", ".", ",", "\n", "&gt;")):
match = re.search(r"([)>.,\n]|&gt;)+$", middle)
if match:
tail = match.group()
middle = middle[: match.start()]
if middle.startswith("www.") or (
"@" not in middle
and not middle.startswith("http://")
and not middle.startswith("https://")
and len(middle) > 0
and middle[0] in _letters + _digits
and (
middle.endswith(".org")
or middle.endswith(".net")
or middle.endswith(".com")
)
):
middle = '<a href="http://%s"%s%s>%s</a>' % (
middle,
rel_attr,
target_attr,
trim_url(middle),
)
if middle.startswith("http://") or middle.startswith("https://"):
middle = '<a href="%s"%s%s>%s</a>' % (
middle,
rel_attr,
target_attr,
trim_url(middle),
)
if (
"@" in middle
and not middle.startswith("www.")
and ":" not in middle
and re.match(r"^\S+@\w[\w.-]*\.\w+$", middle)
):
middle = '<a href="mailto:%s">%s</a>' % (middle, middle)
words[i] = head + middle + tail
return u"".join(words)
def generate_lorem_ipsum(n=5, html=True, min=20, max=100):
"""Generate some lorem ipsum for the template."""
from jinja2.constants import LOREM_IPSUM_WORDS
from random import choice, randrange
from .constants import LOREM_IPSUM_WORDS
words = LOREM_IPSUM_WORDS.split()
result = []
@@ -263,43 +288,53 @@ def generate_lorem_ipsum(n=5, html=True, min=20, max=100):
if idx - randrange(3, 8) > last_comma:
last_comma = idx
last_fullstop += 2
word += ','
word += ","
# add end of sentences
if idx - randrange(10, 20) > last_fullstop:
last_comma = last_fullstop = idx
word += '.'
word += "."
next_capitalized = True
p.append(word)
# ensure that the paragraph ends with a dot.
p = u' '.join(p)
if p.endswith(','):
p = p[:-1] + '.'
elif not p.endswith('.'):
p += '.'
p = u" ".join(p)
if p.endswith(","):
p = p[:-1] + "."
elif not p.endswith("."):
p += "."
result.append(p)
if not html:
return u'\n\n'.join(result)
return Markup(u'\n'.join(u'<p>%s</p>' % escape(x) for x in result))
return u"\n\n".join(result)
return Markup(u"\n".join(u"<p>%s</p>" % escape(x) for x in result))
def unicode_urlencode(obj, charset='utf-8', for_qs=False):
"""URL escapes a single bytestring or unicode string with the
given charset if applicable to URL safe quoting under all rules
that need to be considered under all supported Python versions.
def unicode_urlencode(obj, charset="utf-8", for_qs=False):
"""Quote a string for use in a URL using the given charset.
If non strings are provided they are converted to their unicode
representation first.
This function is misnamed, it is a wrapper around
:func:`urllib.parse.quote`.
:param obj: String or bytes to quote. Other types are converted to
string then encoded to bytes using the given charset.
:param charset: Encode text to bytes using this charset.
:param for_qs: Quote "/" and use "+" for spaces.
"""
if not isinstance(obj, string_types):
obj = text_type(obj)
if isinstance(obj, text_type):
obj = obj.encode(charset)
safe = not for_qs and b'/' or b''
rv = text_type(url_quote(obj, safe))
safe = b"" if for_qs else b"/"
rv = url_quote(obj, safe)
if not isinstance(rv, text_type):
rv = rv.decode("utf-8")
if for_qs:
rv = rv.replace('%20', '+')
rv = rv.replace("%20", "+")
return rv
@@ -326,9 +361,9 @@ def _postinit(self):
def __getstate__(self):
return {
'capacity': self.capacity,
'_mapping': self._mapping,
'_queue': self._queue
"capacity": self.capacity,
"_mapping": self._mapping,
"_queue": self._queue,
}
def __setstate__(self, d):
@@ -342,7 +377,7 @@ def copy(self):
"""Return a shallow copy of the instance."""
rv = self.__class__(self.capacity)
rv._mapping.update(self._mapping)
rv._queue = deque(self._queue)
rv._queue.extend(self._queue)
return rv
def get(self, key, default=None):
@@ -356,15 +391,11 @@ def setdefault(self, key, default=None):
"""Set `default` if the key is not in the cache otherwise
leave unchanged. Return the value of this key.
"""
self._wlock.acquire()
try:
try:
return self[key]
except KeyError:
self[key] = default
return default
finally:
self._wlock.release()
return self[key]
except KeyError:
self[key] = default
return default
def clear(self):
"""Clear the cache."""
@@ -384,10 +415,7 @@ def __len__(self):
return len(self._mapping)
def __repr__(self):
return '<%s %r>' % (
self.__class__.__name__,
self._mapping
)
return "<%s %r>" % (self.__class__.__name__, self._mapping)
def __getitem__(self, key):
"""Get an item from the cache. Moves the item up so that it has the
@@ -436,7 +464,6 @@ def __delitem__(self, key):
try:
self._remove(key)
except ValueError:
# __getitem__ is not locked, it might happen
pass
finally:
self._wlock.release()
@@ -449,6 +476,12 @@ def items(self):
def iteritems(self):
"""Iterate over all items."""
warnings.warn(
"'iteritems()' will be removed in version 3.0. Use"
" 'iter(cache.items())' instead.",
DeprecationWarning,
stacklevel=2,
)
return iter(self.items())
def values(self):
@@ -457,6 +490,22 @@ def values(self):
def itervalue(self):
"""Iterate over all values."""
warnings.warn(
"'itervalue()' will be removed in version 3.0. Use"
" 'iter(cache.values())' instead.",
DeprecationWarning,
stacklevel=2,
)
return iter(self.values())
def itervalues(self):
"""Iterate over all values."""
warnings.warn(
"'itervalues()' will be removed in version 3.0. Use"
" 'iter(cache.values())' instead.",
DeprecationWarning,
stacklevel=2,
)
return iter(self.values())
def keys(self):
@@ -467,12 +516,19 @@ def iterkeys(self):
"""Iterate over all keys in the cache dict, ordered by
the most recent usage.
"""
warnings.warn(
"'iterkeys()' will be removed in version 3.0. Use"
" 'iter(cache.keys())' instead.",
DeprecationWarning,
stacklevel=2,
)
return iter(self)
def __iter__(self):
return reversed(tuple(self._queue))
__iter__ = iterkeys
def __reversed__(self):
"""Iterate over the values in the cache dict, oldest items
"""Iterate over the keys in the cache dict, oldest items
coming first.
"""
return iter(tuple(self._queue))
@@ -480,22 +536,15 @@ def __reversed__(self):
__copy__ = copy
# register the LRU cache as mutable mapping if possible
try:
from collections.abc import MutableMapping
MutableMapping.register(LRUCache)
except ImportError:
try:
from collections import MutableMapping
MutableMapping.register(LRUCache)
except ImportError:
pass
abc.MutableMapping.register(LRUCache)
def select_autoescape(enabled_extensions=('html', 'htm', 'xml'),
disabled_extensions=(),
default_for_string=True,
default=False):
def select_autoescape(
enabled_extensions=("html", "htm", "xml"),
disabled_extensions=(),
default_for_string=True,
default=False,
):
"""Intelligently sets the initial value of autoescaping based on the
filename of the template. This is the recommended way to configure
autoescaping if you do not want to write a custom function yourself.
@@ -530,10 +579,9 @@ def select_autoescape(enabled_extensions=('html', 'htm', 'xml'),
.. versionadded:: 2.9
"""
enabled_patterns = tuple('.' + x.lstrip('.').lower()
for x in enabled_extensions)
disabled_patterns = tuple('.' + x.lstrip('.').lower()
for x in disabled_extensions)
enabled_patterns = tuple("." + x.lstrip(".").lower() for x in enabled_extensions)
disabled_patterns = tuple("." + x.lstrip(".").lower() for x in disabled_extensions)
def autoescape(template_name):
if template_name is None:
return default_for_string
@@ -543,6 +591,7 @@ def autoescape(template_name):
if template_name.endswith(disabled_patterns):
return False
return default
return autoescape
@@ -566,35 +615,63 @@ def htmlsafe_json_dumps(obj, dumper=None, **kwargs):
"""
if dumper is None:
dumper = json.dumps
rv = dumper(obj, **kwargs) \
.replace(u'<', u'\\u003c') \
.replace(u'>', u'\\u003e') \
.replace(u'&', u'\\u0026') \
.replace(u"'", u'\\u0027')
rv = (
dumper(obj, **kwargs)
.replace(u"<", u"\\u003c")
.replace(u">", u"\\u003e")
.replace(u"&", u"\\u0026")
.replace(u"'", u"\\u0027")
)
return Markup(rv)
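For context, a small sketch of what the function above produces; the escape sequences follow the replace() chain shown:

.. code-block:: python

    # htmlsafe_json_dumps returns JSON that is safe to inline in a <script>
    # block: <, >, & and ' are replaced with \uNNNN escapes.
    from jinja2.utils import htmlsafe_json_dumps

    payload = {"msg": "</script><script>alert(1)</script>"}
    print(htmlsafe_json_dumps(payload))
    # {"msg": "\u003c/script\u003e\u003cscript\u003ealert(1)\u003c/script\u003e"}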
@implements_iterator
class Cycler(object):
"""A cycle helper for templates."""
"""Cycle through values by yield them one at a time, then restarting
once the end is reached. Available as ``cycler`` in templates.
Similar to ``loop.cycle``, but can be used outside loops or across
multiple loops. For example, render a list of folders and files in a
list, alternating giving them "odd" and "even" classes.
.. code-block:: html+jinja
{% set row_class = cycler("odd", "even") %}
<ul class="browser">
{% for folder in folders %}
<li class="folder {{ row_class.next() }}">{{ folder }}
{% endfor %}
{% for file in files %}
<li class="file {{ row_class.next() }}">{{ file }}
{% endfor %}
</ul>
:param items: Each positional argument will be yielded in the order
given for each cycle.
.. versionadded:: 2.1
"""
def __init__(self, *items):
if not items:
raise RuntimeError('at least one item has to be provided')
raise RuntimeError("at least one item has to be provided")
self.items = items
self.reset()
self.pos = 0
def reset(self):
"""Resets the cycle."""
"""Resets the current item to the first item."""
self.pos = 0
@property
def current(self):
"""Returns the current item."""
"""Return the current item. Equivalent to the item that will be
returned next time :meth:`next` is called.
"""
return self.items[self.pos]
def next(self):
"""Goes one item ahead and returns it."""
"""Return the current item, then advance :attr:`current` to the
next item.
"""
rv = self.current
self.pos = (self.pos + 1) % len(self.items)
return rv
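Outside of templates the same helper can be driven directly from Python; a short sketch:

.. code-block:: python

    # Cycler yields its items in order and wraps around; .current is the
    # value the next call to .next() will return.
    from jinja2.utils import Cycler

    row_class = Cycler("odd", "even")
    for name in ("a", "b", "c"):
        print(name, row_class.next())  # odd, even, odd
    print(row_class.current)           # "even"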
@@ -605,27 +682,28 @@ def next(self):
class Joiner(object):
"""A joining helper for templates."""
def __init__(self, sep=u', '):
def __init__(self, sep=u", "):
self.sep = sep
self.used = False
def __call__(self):
if not self.used:
self.used = True
return u''
return u""
return self.sep
class Namespace(object):
"""A namespace object that can hold arbitrary attributes. It may be
initialized from a dictionary or with keyword argments."""
initialized from a dictionary or with keyword arguments."""
def __init__(*args, **kwargs):
def __init__(*args, **kwargs): # noqa: B902
self, args = args[0], args[1:]
self.__attrs = dict(*args, **kwargs)
def __getattribute__(self, name):
if name == '_Namespace__attrs':
# __class__ is needed for the awaitable check in async mode
if name in {"_Namespace__attrs", "__class__"}:
return object.__getattribute__(self, name)
try:
return self.__attrs[name]
@@ -636,16 +714,24 @@ def __setitem__(self, name, value):
self.__attrs[name] = value
def __repr__(self):
return '<Namespace %r>' % self.__attrs
return "<Namespace %r>" % self.__attrs
# does this python version support async for in and async generators?
try:
exec('async def _():\n async for _ in ():\n yield _')
exec("async def _():\n async for _ in ():\n yield _")
have_async_gen = True
except SyntaxError:
have_async_gen = False
# Imported here because that's where it was in the past
from markupsafe import Markup, escape, soft_unicode
def soft_unicode(s):
from markupsafe import soft_unicode
warnings.warn(
"'jinja2.utils.soft_unicode' will be removed in version 3.0."
" Use 'markupsafe.soft_unicode' instead.",
DeprecationWarning,
stacklevel=2,
)
return soft_unicode(s)
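The shim above keeps the old import path working while pointing at markupsafe; a sketch of the recommended import and the warning the alias raises:

.. code-block:: python

    # Preferred import going forward:
    from markupsafe import soft_unicode
    print(soft_unicode(42))  # '42'

    # The jinja2.utils alias still works in this version but warns:
    import warnings
    import jinja2.utils

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        jinja2.utils.soft_unicode(42)
    print(caught[0].category)  # <class 'DeprecationWarning'>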


@@ -1,14 +1,8 @@
# -*- coding: utf-8 -*-
"""API for traversing the AST nodes. Implemented by the compiler and
meta introspection.
"""
jinja2.visitor
~~~~~~~~~~~~~~
This module implements a visitor for the nodes.
:copyright: (c) 2017 by the Jinja Team.
:license: BSD.
"""
from jinja2.nodes import Node
from .nodes import Node
class NodeVisitor(object):
@@ -28,7 +22,7 @@ def get_visitor(self, node):
exists for this node. In that case the generic visit function is
used instead.
"""
method = 'visit_' + node.__class__.__name__
method = "visit_" + node.__class__.__name__
return getattr(self, method, None)
def visit(self, node, *args, **kwargs):


@@ -1,13 +0,0 @@
MarkupSafe is written and maintained by Armin Ronacher and
various contributors:
Development Lead
````````````````
- Armin Ronacher <armin.ronacher@active-4.com>
Patches and Suggestions
```````````````````````
- Georg Brandl
- Mickaël Guérin


@@ -1,33 +0,0 @@
Copyright (c) 2010 by Armin Ronacher and contributors. See AUTHORS
for more details.
Some rights reserved.
Redistribution and use in source and binary forms of the software as well
as documentation, with or without modification, are permitted provided
that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* The names of the contributors may not be used to endorse or
promote products derived from this software without specific
prior written permission.
THIS SOFTWARE AND DOCUMENTATION IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER
OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.


@@ -0,0 +1,28 @@
Copyright 2010 Pallets
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,113 +1,69 @@
MarkupSafe
==========
Implements a unicode subclass that supports HTML strings:
MarkupSafe implements a text object that escapes characters so it is
safe to use in HTML and XML. Characters that have special meanings are
replaced so that they display as the actual characters. This mitigates
injection attacks, meaning untrusted user input can safely be displayed
on a page.
.. code-block:: python
Installing
----------
Install and update using `pip`_:
.. code-block:: text
pip install -U MarkupSafe
.. _pip: https://pip.pypa.io/en/stable/quickstart/
Examples
--------
.. code-block:: pycon
>>> from markupsafe import Markup, escape
>>> escape("<script>alert(document.cookie);</script>")
>>> # escape replaces special characters and wraps in Markup
>>> escape('<script>alert(document.cookie);</script>')
Markup(u'&lt;script&gt;alert(document.cookie);&lt;/script&gt;')
>>> tmpl = Markup("<em>%s</em>")
>>> tmpl % "Peter > Lustig"
Markup(u'<em>Peter &gt; Lustig</em>')
>>> # wrap in Markup to mark text "safe" and prevent escaping
>>> Markup('<strong>Hello</strong>')
Markup('<strong>hello</strong>')
>>> escape(Markup('<strong>Hello</strong>'))
Markup('<strong>hello</strong>')
>>> # Markup is a text subclass (str on Python 3, unicode on Python 2)
>>> # methods and operators escape their arguments
>>> template = Markup("Hello <em>%s</em>")
>>> template % '"World"'
Markup('Hello <em>&#34;World&#34;</em>')
If you want to make an object unicode that is not yet unicode
but don't want to lose the taint information, you can use the
``soft_unicode`` function. (On Python 3 you can also use ``soft_str`` which
is a different name for the same function).
.. code-block:: python
Donate
------
>>> from markupsafe import soft_unicode
>>> soft_unicode(42)
u'42'
>>> soft_unicode(Markup('foo'))
Markup(u'foo')
The Pallets organization develops and supports MarkupSafe and other
libraries that use it. In order to grow the community of contributors
and users, and allow the maintainers to devote more time to the
projects, `please donate today`_.
HTML Representations
--------------------
.. _please donate today: https://palletsprojects.com/donate
Objects can customize their HTML markup equivalent by overriding
the ``__html__`` function:
.. code-block:: python
Links
-----
>>> class Foo(object):
... def __html__(self):
... return '<strong>Nice</strong>'
...
>>> escape(Foo())
Markup(u'<strong>Nice</strong>')
>>> Markup(Foo())
Markup(u'<strong>Nice</strong>')
* Website: https://palletsprojects.com/p/markupsafe/
* Documentation: https://markupsafe.palletsprojects.com/
* License: `BSD-3-Clause <https://github.com/pallets/markupsafe/blob/master/LICENSE.rst>`_
* Releases: https://pypi.org/project/MarkupSafe/
* Code: https://github.com/pallets/markupsafe
* Issue tracker: https://github.com/pallets/markupsafe/issues
* Test status:
Silent Escapes
--------------
* Linux, Mac: https://travis-ci.org/pallets/markupsafe
* Windows: https://ci.appveyor.com/project/pallets/markupsafe
Since MarkupSafe 0.10 there is now also a separate escape function
called ``escape_silent`` that returns an empty string for ``None`` for
consistency with other systems that return empty strings for ``None``
when escaping (for instance Pylons' webhelpers).
If you also want to use this for the escape method of the Markup
object, you can create your own subclass that does that:
.. code-block:: python
from markupsafe import Markup, escape_silent as escape
class SilentMarkup(Markup):
__slots__ = ()
@classmethod
def escape(cls, s):
return cls(escape(s))
New-Style String Formatting
---------------------------
Starting with MarkupSafe 0.21 new style string formats from Python 2.6 and
3.x are now fully supported. Previously the escape behavior of those
functions was spotty at best. The new implementations operates under the
following algorithm:
1. if an object has an ``__html_format__`` method it is called as
replacement for ``__format__`` with the format specifier. It either
has to return a string or markup object.
2. if an object has an ``__html__`` method it is called.
3. otherwise the default format system of Python kicks in and the result
is HTML escaped.
Here is how you can implement your own formatting:
.. code-block:: python
class User(object):
def __init__(self, id, username):
self.id = id
self.username = username
def __html_format__(self, format_spec):
if format_spec == 'link':
return Markup('<a href="/user/{0}">{1}</a>').format(
self.id,
self.__html__(),
)
elif format_spec:
raise ValueError('Invalid format spec')
return self.__html__()
def __html__(self):
return Markup('<span class=user>{0}</span>').format(self.username)
And to format that user:
.. code-block:: python
>>> user = User(1, 'foo')
>>> Markup('<p>User: {0:link}').format(user)
Markup(u'<p>User: <a href="/user/1"><span class=user>foo</span></a>')
Markupsafe supports Python 2.6, 2.7 and Python 3.3 and higher.
* Test coverage: https://codecov.io/gh/pallets/markupsafe


@@ -1,80 +1,74 @@
# -*- coding: utf-8 -*-
"""
markupsafe
~~~~~~~~~~
markupsafe
~~~~~~~~~~
Implements a Markup string.
Implements an escape function and a Markup string to replace HTML
special characters with safe representations.
:copyright: (c) 2010 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
:copyright: 2010 Pallets
:license: BSD-3-Clause
"""
import re
import string
import sys
from markupsafe._compat import text_type, string_types, int_types, \
unichr, iteritems, PY2
if sys.version_info >= (3, 3):
from collections.abc import Mapping
else:
from collections import Mapping
from ._compat import int_types
from ._compat import iteritems
from ._compat import Mapping
from ._compat import PY2
from ._compat import string_types
from ._compat import text_type
from ._compat import unichr
__version__ = "1.0"
__version__ = "1.1.1"
__all__ = ['Markup', 'soft_unicode', 'escape', 'escape_silent']
__all__ = ["Markup", "soft_unicode", "escape", "escape_silent"]
_striptags_re = re.compile(r'(<!--.*?-->|<[^>]*>)')
_entity_re = re.compile(r'&([^& ;]+);')
_striptags_re = re.compile(r"(<!--.*?-->|<[^>]*>)")
_entity_re = re.compile(r"&([^& ;]+);")
class Markup(text_type):
r"""Marks a string as being safe for inclusion in HTML/XML output without
needing to be escaped. This implements the `__html__` interface a couple
of frameworks and web applications use. :class:`Markup` is a direct
subclass of `unicode` and provides all the methods of `unicode` just that
it escapes arguments passed and always returns `Markup`.
"""A string that is ready to be safely inserted into an HTML or XML
document, either because it was escaped or because it was marked
safe.
The `escape` function returns markup objects so that double escaping can't
happen.
Passing an object to the constructor converts it to text and wraps
it to mark it safe without escaping. To escape the text, use the
:meth:`escape` class method instead.
The constructor of the :class:`Markup` class can be used for three
different things: When passed an unicode object it's assumed to be safe,
when passed an object with an HTML representation (has an `__html__`
method) that representation is used, otherwise the object passed is
converted into a unicode string and then assumed to be safe:
>>> Markup('Hello, <em>World</em>!')
Markup('Hello, <em>World</em>!')
>>> Markup(42)
Markup('42')
>>> Markup.escape('Hello, <em>World</em>!')
Markup('Hello &lt;em&gt;World&lt;/em&gt;!')
>>> Markup("Hello <em>World</em>!")
Markup(u'Hello <em>World</em>!')
>>> class Foo(object):
... def __html__(self):
... return '<a href="#">foo</a>'
This implements the ``__html__()`` interface that some frameworks
use. Passing an object that implements ``__html__()`` will wrap the
output of that method, marking it safe.
>>> class Foo:
... def __html__(self):
... return '<a href="/foo">foo</a>'
...
>>> Markup(Foo())
Markup(u'<a href="#">foo</a>')
Markup('<a href="/foo">foo</a>')
If you want object passed being always treated as unsafe you can use the
:meth:`escape` classmethod to create a :class:`Markup` object:
This is a subclass of the text type (``str`` in Python 3,
``unicode`` in Python 2). It has the same methods as that type, but
all methods escape their arguments and return a ``Markup`` instance.
>>> Markup.escape("Hello <em>World</em>!")
Markup(u'Hello &lt;em&gt;World&lt;/em&gt;!')
Operations on a markup string are markup aware which means that all
arguments are passed through the :func:`escape` function:
>>> em = Markup("<em>%s</em>")
>>> em % "foo & bar"
Markup(u'<em>foo &amp; bar</em>')
>>> strong = Markup("<strong>%(text)s</strong>")
>>> strong % {'text': '<blink>hacker here</blink>'}
Markup(u'<strong>&lt;blink&gt;hacker here&lt;/blink&gt;</strong>')
>>> Markup("<em>Hello</em> ") + "<foo>"
Markup(u'<em>Hello</em> &lt;foo&gt;')
>>> Markup('<em>%s</em>') % 'foo & bar'
Markup('<em>foo &amp; bar</em>')
>>> Markup('<em>Hello</em> ') + '<foo>'
Markup('<em>Hello</em> &lt;foo&gt;')
"""
__slots__ = ()
def __new__(cls, base=u'', encoding=None, errors='strict'):
if hasattr(base, '__html__'):
def __new__(cls, base=u"", encoding=None, errors="strict"):
if hasattr(base, "__html__"):
base = base.__html__()
if encoding is None:
return text_type.__new__(cls, base)
@@ -84,12 +78,12 @@ def __html__(self):
return self
def __add__(self, other):
if isinstance(other, string_types) or hasattr(other, '__html__'):
if isinstance(other, string_types) or hasattr(other, "__html__"):
return self.__class__(super(Markup, self).__add__(self.escape(other)))
return NotImplemented
def __radd__(self, other):
if hasattr(other, '__html__') or isinstance(other, string_types):
if hasattr(other, "__html__") or isinstance(other, string_types):
return self.escape(other).__add__(self)
return NotImplemented
@@ -97,6 +91,7 @@ def __mul__(self, num):
if isinstance(num, int_types):
return self.__class__(text_type.__mul__(self, num))
return NotImplemented
__rmul__ = __mul__
def __mod__(self, arg):
@@ -107,115 +102,124 @@ def __mod__(self, arg):
return self.__class__(text_type.__mod__(self, arg))
def __repr__(self):
return '%s(%s)' % (
self.__class__.__name__,
text_type.__repr__(self)
)
return "%s(%s)" % (self.__class__.__name__, text_type.__repr__(self))
def join(self, seq):
return self.__class__(text_type.join(self, map(self.escape, seq)))
join.__doc__ = text_type.join.__doc__
def split(self, *args, **kwargs):
return list(map(self.__class__, text_type.split(self, *args, **kwargs)))
split.__doc__ = text_type.split.__doc__
def rsplit(self, *args, **kwargs):
return list(map(self.__class__, text_type.rsplit(self, *args, **kwargs)))
rsplit.__doc__ = text_type.rsplit.__doc__
def splitlines(self, *args, **kwargs):
return list(map(self.__class__, text_type.splitlines(
self, *args, **kwargs)))
return list(map(self.__class__, text_type.splitlines(self, *args, **kwargs)))
splitlines.__doc__ = text_type.splitlines.__doc__
def unescape(self):
r"""Unescape markup again into an text_type string. This also resolves
known HTML4 and XHTML entities:
"""Convert escaped markup back into a text string. This replaces
HTML entities with the characters they represent.
>>> Markup("Main &raquo; <em>About</em>").unescape()
u'Main \xbb <em>About</em>'
>>> Markup('Main &raquo; <em>About</em>').unescape()
'Main » <em>About</em>'
"""
from markupsafe._constants import HTML_ENTITIES
from ._constants import HTML_ENTITIES
def handle_match(m):
name = m.group(1)
if name in HTML_ENTITIES:
return unichr(HTML_ENTITIES[name])
try:
if name[:2] in ('#x', '#X'):
if name[:2] in ("#x", "#X"):
return unichr(int(name[2:], 16))
elif name.startswith('#'):
elif name.startswith("#"):
return unichr(int(name[1:]))
except ValueError:
pass
# Don't modify unexpected input.
return m.group()
return _entity_re.sub(handle_match, text_type(self))
def striptags(self):
r"""Unescape markup into an text_type string and strip all tags. This
also resolves known HTML4 and XHTML entities. Whitespace is
normalized to one:
""":meth:`unescape` the markup, remove tags, and normalize
whitespace to single spaces.
>>> Markup("Main &raquo; <em>About</em>").striptags()
u'Main \xbb About'
>>> Markup('Main &raquo;\t<em>About</em>').striptags()
'Main » About'
"""
stripped = u' '.join(_striptags_re.sub('', self).split())
stripped = u" ".join(_striptags_re.sub("", self).split())
return Markup(stripped).unescape()
@classmethod
def escape(cls, s):
"""Escape the string. Works like :func:`escape` with the difference
that for subclasses of :class:`Markup` this function would return the
correct subclass.
"""Escape a string. Calls :func:`escape` and ensures that for
subclasses the correct type is returned.
"""
rv = escape(s)
if rv.__class__ is not cls:
return cls(rv)
return rv
def make_simple_escaping_wrapper(name):
def make_simple_escaping_wrapper(name): # noqa: B902
orig = getattr(text_type, name)
def func(self, *args, **kwargs):
args = _escape_argspec(list(args), enumerate(args), self.escape)
_escape_argspec(kwargs, iteritems(kwargs), self.escape)
return self.__class__(orig(self, *args, **kwargs))
func.__name__ = orig.__name__
func.__doc__ = orig.__doc__
return func
for method in '__getitem__', 'capitalize', \
'title', 'lower', 'upper', 'replace', 'ljust', \
'rjust', 'lstrip', 'rstrip', 'center', 'strip', \
'translate', 'expandtabs', 'swapcase', 'zfill':
for method in (
"__getitem__",
"capitalize",
"title",
"lower",
"upper",
"replace",
"ljust",
"rjust",
"lstrip",
"rstrip",
"center",
"strip",
"translate",
"expandtabs",
"swapcase",
"zfill",
):
locals()[method] = make_simple_escaping_wrapper(method)
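The net effect of these wrappers is that ordinary string methods on a Markup object escape their arguments; a minimal sketch:

.. code-block:: python

    from markupsafe import Markup

    tmpl = Markup("<em>placeholder</em>")
    # replace() is one of the wrapped methods, so its arguments are escaped:
    print(tmpl.replace("placeholder", "<script>x</script>"))
    # <em>&lt;script&gt;x&lt;/script&gt;</em>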
# new in python 2.5
if hasattr(text_type, 'partition'):
def partition(self, sep):
return tuple(map(self.__class__,
text_type.partition(self, self.escape(sep))))
def rpartition(self, sep):
return tuple(map(self.__class__,
text_type.rpartition(self, self.escape(sep))))
def partition(self, sep):
return tuple(map(self.__class__, text_type.partition(self, self.escape(sep))))
# new in python 2.6
if hasattr(text_type, 'format'):
def format(*args, **kwargs):
self, args = args[0], args[1:]
formatter = EscapeFormatter(self.escape)
kwargs = _MagicFormatMapping(args, kwargs)
return self.__class__(formatter.vformat(self, args, kwargs))
def rpartition(self, sep):
return tuple(map(self.__class__, text_type.rpartition(self, self.escape(sep))))
def __html_format__(self, format_spec):
if format_spec:
raise ValueError('Unsupported format specification '
'for Markup.')
return self
def format(self, *args, **kwargs):
formatter = EscapeFormatter(self.escape)
kwargs = _MagicFormatMapping(args, kwargs)
return self.__class__(formatter.vformat(self, args, kwargs))
def __html_format__(self, format_spec):
if format_spec:
raise ValueError("Unsupported format specification " "for Markup.")
return self
# not in python 3
if hasattr(text_type, '__getslice__'):
__getslice__ = make_simple_escaping_wrapper('__getslice__')
if hasattr(text_type, "__getslice__"):
__getslice__ = make_simple_escaping_wrapper("__getslice__")
del method, make_simple_escaping_wrapper
@@ -234,7 +238,7 @@ def __init__(self, args, kwargs):
self._last_index = 0
def __getitem__(self, key):
if key == '':
if key == "":
idx = self._last_index
self._last_index += 1
try:
@@ -251,35 +255,37 @@ def __len__(self):
return len(self._kwargs)
if hasattr(text_type, 'format'):
class EscapeFormatter(string.Formatter):
if hasattr(text_type, "format"):
class EscapeFormatter(string.Formatter):
def __init__(self, escape):
self.escape = escape
def format_field(self, value, format_spec):
if hasattr(value, '__html_format__'):
if hasattr(value, "__html_format__"):
rv = value.__html_format__(format_spec)
elif hasattr(value, '__html__'):
elif hasattr(value, "__html__"):
if format_spec:
raise ValueError('No format specification allowed '
'when formatting an object with '
'its __html__ method.')
raise ValueError(
"Format specifier {0} given, but {1} does not"
" define __html_format__. A class that defines"
" __html__ must define __html_format__ to work"
" with format specifiers.".format(format_spec, type(value))
)
rv = value.__html__()
else:
# We need to make sure the format spec is unicode here as
# otherwise the wrong callback methods are invoked. For
# instance a byte string there would invoke __str__ and
# not __unicode__.
rv = string.Formatter.format_field(
self, value, text_type(format_spec))
rv = string.Formatter.format_field(self, value, text_type(format_spec))
return text_type(self.escape(rv))
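A short sketch of the dispatch implemented in format_field above: an object that defines __html_format__ receives the format spec, while format specs on objects that only define __html__ are rejected. The Link class here is illustrative, not part of markupsafe:

.. code-block:: python

    from markupsafe import Markup

    class Link(object):  # illustrative example class
        def __init__(self, href, text):
            self.href, self.text = href, text

        def __html__(self):
            return Markup('<a href="{0}">{1}</a>').format(self.href, self.text)

        def __html_format__(self, spec):
            if spec == "short":
                return Markup('<a href="{0}">link</a>').format(self.href)
            return self.__html__()

    print(Markup("{0:short}").format(Link("/about", "About this site")))
    # <a href="/about">link</a>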
def _escape_argspec(obj, iterable, escape):
"""Helper for various string-wrapped functions."""
for key, value in iterable:
if hasattr(value, '__html__') or isinstance(value, string_types):
if hasattr(value, "__html__") or isinstance(value, string_types):
obj[key] = escape(value)
return obj
@@ -291,20 +297,31 @@ def __init__(self, obj, escape):
self.obj = obj
self.escape = escape
__getitem__ = lambda s, x: _MarkupEscapeHelper(s.obj[x], s.escape)
__unicode__ = __str__ = lambda s: text_type(s.escape(s.obj))
__repr__ = lambda s: str(s.escape(repr(s.obj)))
__int__ = lambda s: int(s.obj)
__float__ = lambda s: float(s.obj)
def __getitem__(self, item):
return _MarkupEscapeHelper(self.obj[item], self.escape)
def __str__(self):
return text_type(self.escape(self.obj))
__unicode__ = __str__
def __repr__(self):
return str(self.escape(repr(self.obj)))
def __int__(self):
return int(self.obj)
def __float__(self):
return float(self.obj)
# we have to import it down here as the speedups and native
# modules imports the markup type which is define above.
try:
from markupsafe._speedups import escape, escape_silent, soft_unicode
from ._speedups import escape, escape_silent, soft_unicode
except ImportError:
from markupsafe._native import escape, escape_silent, soft_unicode
from ._native import escape, escape_silent, soft_unicode
if not PY2:
soft_str = soft_unicode
__all__.append('soft_str')
__all__.append("soft_str")


@@ -1,12 +1,10 @@
# -*- coding: utf-8 -*-
"""
markupsafe._compat
~~~~~~~~~~~~~~~~~~
markupsafe._compat
~~~~~~~~~~~~~~~~~~
Compatibility module for different Python versions.
:copyright: (c) 2013 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
:copyright: 2010 Pallets
:license: BSD-3-Clause
"""
import sys
@@ -17,10 +15,19 @@
string_types = (str,)
unichr = chr
int_types = (int,)
iteritems = lambda x: iter(x.items())
def iteritems(x):
return iter(x.items())
from collections.abc import Mapping
else:
text_type = unicode
string_types = (str, unicode)
unichr = unichr
int_types = (int, long)
iteritems = lambda x: x.iteritems()
def iteritems(x):
return x.iteritems()
from collections import Mapping


@@ -1,267 +1,264 @@
# -*- coding: utf-8 -*-
"""
markupsafe._constants
~~~~~~~~~~~~~~~~~~~~~
markupsafe._constants
~~~~~~~~~~~~~~~~~~~~~
Highlevel implementation of the Markup string.
:copyright: (c) 2010 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
:copyright: 2010 Pallets
:license: BSD-3-Clause
"""
HTML_ENTITIES = {
'AElig': 198,
'Aacute': 193,
'Acirc': 194,
'Agrave': 192,
'Alpha': 913,
'Aring': 197,
'Atilde': 195,
'Auml': 196,
'Beta': 914,
'Ccedil': 199,
'Chi': 935,
'Dagger': 8225,
'Delta': 916,
'ETH': 208,
'Eacute': 201,
'Ecirc': 202,
'Egrave': 200,
'Epsilon': 917,
'Eta': 919,
'Euml': 203,
'Gamma': 915,
'Iacute': 205,
'Icirc': 206,
'Igrave': 204,
'Iota': 921,
'Iuml': 207,
'Kappa': 922,
'Lambda': 923,
'Mu': 924,
'Ntilde': 209,
'Nu': 925,
'OElig': 338,
'Oacute': 211,
'Ocirc': 212,
'Ograve': 210,
'Omega': 937,
'Omicron': 927,
'Oslash': 216,
'Otilde': 213,
'Ouml': 214,
'Phi': 934,
'Pi': 928,
'Prime': 8243,
'Psi': 936,
'Rho': 929,
'Scaron': 352,
'Sigma': 931,
'THORN': 222,
'Tau': 932,
'Theta': 920,
'Uacute': 218,
'Ucirc': 219,
'Ugrave': 217,
'Upsilon': 933,
'Uuml': 220,
'Xi': 926,
'Yacute': 221,
'Yuml': 376,
'Zeta': 918,
'aacute': 225,
'acirc': 226,
'acute': 180,
'aelig': 230,
'agrave': 224,
'alefsym': 8501,
'alpha': 945,
'amp': 38,
'and': 8743,
'ang': 8736,
'apos': 39,
'aring': 229,
'asymp': 8776,
'atilde': 227,
'auml': 228,
'bdquo': 8222,
'beta': 946,
'brvbar': 166,
'bull': 8226,
'cap': 8745,
'ccedil': 231,
'cedil': 184,
'cent': 162,
'chi': 967,
'circ': 710,
'clubs': 9827,
'cong': 8773,
'copy': 169,
'crarr': 8629,
'cup': 8746,
'curren': 164,
'dArr': 8659,
'dagger': 8224,
'darr': 8595,
'deg': 176,
'delta': 948,
'diams': 9830,
'divide': 247,
'eacute': 233,
'ecirc': 234,
'egrave': 232,
'empty': 8709,
'emsp': 8195,
'ensp': 8194,
'epsilon': 949,
'equiv': 8801,
'eta': 951,
'eth': 240,
'euml': 235,
'euro': 8364,
'exist': 8707,
'fnof': 402,
'forall': 8704,
'frac12': 189,
'frac14': 188,
'frac34': 190,
'frasl': 8260,
'gamma': 947,
'ge': 8805,
'gt': 62,
'hArr': 8660,
'harr': 8596,
'hearts': 9829,
'hellip': 8230,
'iacute': 237,
'icirc': 238,
'iexcl': 161,
'igrave': 236,
'image': 8465,
'infin': 8734,
'int': 8747,
'iota': 953,
'iquest': 191,
'isin': 8712,
'iuml': 239,
'kappa': 954,
'lArr': 8656,
'lambda': 955,
'lang': 9001,
'laquo': 171,
'larr': 8592,
'lceil': 8968,
'ldquo': 8220,
'le': 8804,
'lfloor': 8970,
'lowast': 8727,
'loz': 9674,
'lrm': 8206,
'lsaquo': 8249,
'lsquo': 8216,
'lt': 60,
'macr': 175,
'mdash': 8212,
'micro': 181,
'middot': 183,
'minus': 8722,
'mu': 956,
'nabla': 8711,
'nbsp': 160,
'ndash': 8211,
'ne': 8800,
'ni': 8715,
'not': 172,
'notin': 8713,
'nsub': 8836,
'ntilde': 241,
'nu': 957,
'oacute': 243,
'ocirc': 244,
'oelig': 339,
'ograve': 242,
'oline': 8254,
'omega': 969,
'omicron': 959,
'oplus': 8853,
'or': 8744,
'ordf': 170,
'ordm': 186,
'oslash': 248,
'otilde': 245,
'otimes': 8855,
'ouml': 246,
'para': 182,
'part': 8706,
'permil': 8240,
'perp': 8869,
'phi': 966,
'pi': 960,
'piv': 982,
'plusmn': 177,
'pound': 163,
'prime': 8242,
'prod': 8719,
'prop': 8733,
'psi': 968,
'quot': 34,
'rArr': 8658,
'radic': 8730,
'rang': 9002,
'raquo': 187,
'rarr': 8594,
'rceil': 8969,
'rdquo': 8221,
'real': 8476,
'reg': 174,
'rfloor': 8971,
'rho': 961,
'rlm': 8207,
'rsaquo': 8250,
'rsquo': 8217,
'sbquo': 8218,
'scaron': 353,
'sdot': 8901,
'sect': 167,
'shy': 173,
'sigma': 963,
'sigmaf': 962,
'sim': 8764,
'spades': 9824,
'sub': 8834,
'sube': 8838,
'sum': 8721,
'sup': 8835,
'sup1': 185,
'sup2': 178,
'sup3': 179,
'supe': 8839,
'szlig': 223,
'tau': 964,
'there4': 8756,
'theta': 952,
'thetasym': 977,
'thinsp': 8201,
'thorn': 254,
'tilde': 732,
'times': 215,
'trade': 8482,
'uArr': 8657,
'uacute': 250,
'uarr': 8593,
'ucirc': 251,
'ugrave': 249,
'uml': 168,
'upsih': 978,
'upsilon': 965,
'uuml': 252,
'weierp': 8472,
'xi': 958,
'yacute': 253,
'yen': 165,
'yuml': 255,
'zeta': 950,
'zwj': 8205,
'zwnj': 8204
"AElig": 198,
"Aacute": 193,
"Acirc": 194,
"Agrave": 192,
"Alpha": 913,
"Aring": 197,
"Atilde": 195,
"Auml": 196,
"Beta": 914,
"Ccedil": 199,
"Chi": 935,
"Dagger": 8225,
"Delta": 916,
"ETH": 208,
"Eacute": 201,
"Ecirc": 202,
"Egrave": 200,
"Epsilon": 917,
"Eta": 919,
"Euml": 203,
"Gamma": 915,
"Iacute": 205,
"Icirc": 206,
"Igrave": 204,
"Iota": 921,
"Iuml": 207,
"Kappa": 922,
"Lambda": 923,
"Mu": 924,
"Ntilde": 209,
"Nu": 925,
"OElig": 338,
"Oacute": 211,
"Ocirc": 212,
"Ograve": 210,
"Omega": 937,
"Omicron": 927,
"Oslash": 216,
"Otilde": 213,
"Ouml": 214,
"Phi": 934,
"Pi": 928,
"Prime": 8243,
"Psi": 936,
"Rho": 929,
"Scaron": 352,
"Sigma": 931,
"THORN": 222,
"Tau": 932,
"Theta": 920,
"Uacute": 218,
"Ucirc": 219,
"Ugrave": 217,
"Upsilon": 933,
"Uuml": 220,
"Xi": 926,
"Yacute": 221,
"Yuml": 376,
"Zeta": 918,
"aacute": 225,
"acirc": 226,
"acute": 180,
"aelig": 230,
"agrave": 224,
"alefsym": 8501,
"alpha": 945,
"amp": 38,
"and": 8743,
"ang": 8736,
"apos": 39,
"aring": 229,
"asymp": 8776,
"atilde": 227,
"auml": 228,
"bdquo": 8222,
"beta": 946,
"brvbar": 166,
"bull": 8226,
"cap": 8745,
"ccedil": 231,
"cedil": 184,
"cent": 162,
"chi": 967,
"circ": 710,
"clubs": 9827,
"cong": 8773,
"copy": 169,
"crarr": 8629,
"cup": 8746,
"curren": 164,
"dArr": 8659,
"dagger": 8224,
"darr": 8595,
"deg": 176,
"delta": 948,
"diams": 9830,
"divide": 247,
"eacute": 233,
"ecirc": 234,
"egrave": 232,
"empty": 8709,
"emsp": 8195,
"ensp": 8194,
"epsilon": 949,
"equiv": 8801,
"eta": 951,
"eth": 240,
"euml": 235,
"euro": 8364,
"exist": 8707,
"fnof": 402,
"forall": 8704,
"frac12": 189,
"frac14": 188,
"frac34": 190,
"frasl": 8260,
"gamma": 947,
"ge": 8805,
"gt": 62,
"hArr": 8660,
"harr": 8596,
"hearts": 9829,
"hellip": 8230,
"iacute": 237,
"icirc": 238,
"iexcl": 161,
"igrave": 236,
"image": 8465,
"infin": 8734,
"int": 8747,
"iota": 953,
"iquest": 191,
"isin": 8712,
"iuml": 239,
"kappa": 954,
"lArr": 8656,
"lambda": 955,
"lang": 9001,
"laquo": 171,
"larr": 8592,
"lceil": 8968,
"ldquo": 8220,
"le": 8804,
"lfloor": 8970,
"lowast": 8727,
"loz": 9674,
"lrm": 8206,
"lsaquo": 8249,
"lsquo": 8216,
"lt": 60,
"macr": 175,
"mdash": 8212,
"micro": 181,
"middot": 183,
"minus": 8722,
"mu": 956,
"nabla": 8711,
"nbsp": 160,
"ndash": 8211,
"ne": 8800,
"ni": 8715,
"not": 172,
"notin": 8713,
"nsub": 8836,
"ntilde": 241,
"nu": 957,
"oacute": 243,
"ocirc": 244,
"oelig": 339,
"ograve": 242,
"oline": 8254,
"omega": 969,
"omicron": 959,
"oplus": 8853,
"or": 8744,
"ordf": 170,
"ordm": 186,
"oslash": 248,
"otilde": 245,
"otimes": 8855,
"ouml": 246,
"para": 182,
"part": 8706,
"permil": 8240,
"perp": 8869,
"phi": 966,
"pi": 960,
"piv": 982,
"plusmn": 177,
"pound": 163,
"prime": 8242,
"prod": 8719,
"prop": 8733,
"psi": 968,
"quot": 34,
"rArr": 8658,
"radic": 8730,
"rang": 9002,
"raquo": 187,
"rarr": 8594,
"rceil": 8969,
"rdquo": 8221,
"real": 8476,
"reg": 174,
"rfloor": 8971,
"rho": 961,
"rlm": 8207,
"rsaquo": 8250,
"rsquo": 8217,
"sbquo": 8218,
"scaron": 353,
"sdot": 8901,
"sect": 167,
"shy": 173,
"sigma": 963,
"sigmaf": 962,
"sim": 8764,
"spades": 9824,
"sub": 8834,
"sube": 8838,
"sum": 8721,
"sup": 8835,
"sup1": 185,
"sup2": 178,
"sup3": 179,
"supe": 8839,
"szlig": 223,
"tau": 964,
"there4": 8756,
"theta": 952,
"thetasym": 977,
"thinsp": 8201,
"thorn": 254,
"tilde": 732,
"times": 215,
"trade": 8482,
"uArr": 8657,
"uacute": 250,
"uarr": 8593,
"ucirc": 251,
"ugrave": 249,
"uml": 168,
"upsih": 978,
"upsilon": 965,
"uuml": 252,
"weierp": 8472,
"xi": 958,
"yacute": 253,
"yen": 165,
"yuml": 255,
"zeta": 950,
"zwj": 8205,
"zwnj": 8204,
}


@@ -1,36 +1,49 @@
# -*- coding: utf-8 -*-
"""
markupsafe._native
~~~~~~~~~~~~~~~~~~
markupsafe._native
~~~~~~~~~~~~~~~~~~
Native Python implementation the C module is not compiled.
Native Python implementation used when the C module is not compiled.
:copyright: (c) 2010 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
:copyright: 2010 Pallets
:license: BSD-3-Clause
"""
from markupsafe import Markup
from markupsafe._compat import text_type
from . import Markup
from ._compat import text_type
def escape(s):
"""Convert the characters &, <, >, ' and " in string s to HTML-safe
sequences. Use this if you need to display text that might contain
such characters in HTML. Marks return value as markup string.
"""Replace the characters ``&``, ``<``, ``>``, ``'``, and ``"`` in
the string with HTML-safe sequences. Use this if you need to display
text that might contain such characters in HTML.
If the object has an ``__html__`` method, it is called and the
return value is assumed to already be safe for HTML.
:param s: An object to be converted to a string and escaped.
:return: A :class:`Markup` string with the escaped text.
"""
if hasattr(s, '__html__'):
return s.__html__()
return Markup(text_type(s)
.replace('&', '&amp;')
.replace('>', '&gt;')
.replace('<', '&lt;')
.replace("'", '&#39;')
.replace('"', '&#34;')
if hasattr(s, "__html__"):
return Markup(s.__html__())
return Markup(
text_type(s)
.replace("&", "&amp;")
.replace(">", "&gt;")
.replace("<", "&lt;")
.replace("'", "&#39;")
.replace('"', "&#34;")
)
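A couple of concrete calls matching the docstring above; the Widget class is illustrative:

.. code-block:: python

    from markupsafe import escape

    print(escape("<script>alert(1)</script>"))
    # &lt;script&gt;alert(1)&lt;/script&gt;

    class Widget(object):  # illustrative; anything with __html__ is trusted
        def __html__(self):
            return "<b>already safe</b>"

    print(escape(Widget()))  # <b>already safe</b>, wrapped in Markup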
def escape_silent(s):
"""Like :func:`escape` but converts `None` into an empty
markup string.
"""Like :func:`escape` but treats ``None`` as the empty string.
Useful with optional values, as otherwise you get the string
``'None'`` when the value is ``None``.
>>> escape(None)
Markup('None')
>>> escape_silent(None)
Markup('')
"""
if s is None:
return Markup()
@@ -38,8 +51,18 @@ def escape_silent(s):
def soft_unicode(s):
"""Make a string unicode if it isn't already. That way a markup
string is not converted back to unicode.
"""Convert an object to a string if it isn't already. This preserves
a :class:`Markup` string rather than converting it back to a basic
string, so it will still be marked as safe and won't be escaped
again.
>>> value = escape('<User 1>')
>>> value
Markup('&lt;User 1&gt;')
>>> escape(str(value))
Markup('&amp;lt;User 1&amp;gt;')
>>> escape(soft_unicode(value))
Markup('&lt;User 1&gt;')
"""
if not isinstance(s, text_type):
s = text_type(s)


@@ -1,22 +0,0 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""This file dispatches to the correct implementation of OrderedDict."""
# TODO: this file, along with py26/ordereddict.py, can be removed when
# TODO: support for python 2.6 will be dropped
# Removing this import will make python 2.6
# fail on import of ordereddict
from __future__ import absolute_import
import sys
if sys.version_info[:2] == (2, 6):
import ordereddict
OrderedDict = ordereddict.OrderedDict
else:
import collections
OrderedDict = collections.OrderedDict


@@ -1,127 +0,0 @@
# Copyright (c) 2009 Raymond Hettinger
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
from UserDict import DictMixin
class OrderedDict(dict, DictMixin):
def __init__(self, *args, **kwds):
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__end
except AttributeError:
self.clear()
self.update(*args, **kwds)
def clear(self):
self.__end = end = []
end += [None, end, end] # sentinel node for doubly linked list
self.__map = {} # key --> [key, prev, next]
dict.clear(self)
def __setitem__(self, key, value):
if key not in self:
end = self.__end
curr = end[1]
curr[2] = end[1] = self.__map[key] = [key, curr, end]
dict.__setitem__(self, key, value)
def __delitem__(self, key):
dict.__delitem__(self, key)
key, prev, next = self.__map.pop(key)
prev[2] = next
next[1] = prev
def __iter__(self):
end = self.__end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.__end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def popitem(self, last=True):
if not self:
raise KeyError('dictionary is empty')
if last:
key = reversed(self).next()
else:
key = iter(self).next()
value = self.pop(key)
return key, value
def __reduce__(self):
items = [[k, self[k]] for k in self]
tmp = self.__map, self.__end
del self.__map, self.__end
inst_dict = vars(self).copy()
self.__map, self.__end = tmp
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def keys(self):
return list(self)
setdefault = DictMixin.setdefault
update = DictMixin.update
pop = DictMixin.pop
values = DictMixin.values
items = DictMixin.items
iterkeys = DictMixin.iterkeys
itervalues = DictMixin.itervalues
iteritems = DictMixin.iteritems
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
def copy(self):
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
for key in iterable:
d[key] = value
return d
def __eq__(self, other):
if isinstance(other, OrderedDict):
if len(self) != len(other):
return False
for p, q in zip(self.items(), other.items()):
if p != q:
return False
return True
return dict.__eq__(self, other)
def __ne__(self, other):
return not self == other
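For reference, the behaviour this removed backport provided is the same as the standard library class that replaces it; a short sketch using collections.OrderedDict:

.. code-block:: python

    from collections import OrderedDict

    d = OrderedDict()
    d["b"] = 2
    d["a"] = 1
    d["c"] = 3
    print(list(d))               # ['b', 'a', 'c'] -- insertion order is kept
    print(d.popitem(last=True))  # ('c', 3) -- most recently added pair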


@@ -1,5 +1,6 @@
from __future__ import absolute_import, division, print_function
import collections
import inspect
import sys
import warnings
@@ -21,9 +22,6 @@
from _pytest.outcomes import fail, TEST_OUTCOME
from ordereddict_backport import OrderedDict
def pytest_sessionstart(session):
import _pytest.python
scopename2class.update({
@@ -165,7 +163,7 @@ def reorder_items(items):
for scopenum in range(0, scopenum_function):
argkeys_cache[scopenum] = d = {}
for item in items:
keys = OrderedDict.fromkeys(get_parametrized_fixture_keys(item, scopenum))
keys = collections.OrderedDict.fromkeys(get_parametrized_fixture_keys(item, scopenum))
if keys:
d[item] = keys
return reorder_items_atscope(items, set(), argkeys_cache, 0)
@@ -200,7 +198,7 @@ def slice_items(items, ignore, scoped_argkeys_cache):
for i, item in enumerate(it):
argkeys = scoped_argkeys_cache.get(item)
if argkeys is not None:
newargkeys = OrderedDict.fromkeys(k for k in argkeys if k not in ignore)
newargkeys = collections.OrderedDict.fromkeys(k for k in argkeys if k not in ignore)
if newargkeys: # found a slicing key
slicing_argkey, _ = newargkeys.popitem()
items_before = items[:i]
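The fromkeys call used in the changes above acts as an order-preserving de-duplication; a tiny sketch:

.. code-block:: python

    import collections

    keys = collections.OrderedDict.fromkeys(["s1", "s2", "s1", "s3"])
    print(list(keys))  # ['s1', 's2', 's3']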

Some files were not shown because too many files have changed in this diff.