
Microbenchmarks comparing MLX to PyTorch

These scripts implement the same microbenchmarks in MLX and PyTorch so their timings can be compared directly, making it easy to catalogue the largest performance improvements and/or regressions.

For instance, run `python bench_mlx.py sum_axis --size 8x1024x128 --axis 2 --cpu` to measure the time it takes to sum an 8x1024x128 tensor across its 3rd axis on the CPU.
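The core of such a microbenchmark is a simple wall-clock timing loop. The sketch below is a minimal, hypothetical version of that idea (the actual bench scripts may warm up, repeat, and aggregate differently), using NumPy as a stand-in for the MLX/PyTorch ops being measured:

```python
import time
import numpy as np

def measure_runtime_ms(fn, repeats=5):
    """Hypothetical timing helper: call `fn` several times and return the
    median wall-clock time in milliseconds."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1e3)
    return sorted(times)[len(times) // 2]

# Stand-in for the sum_axis benchmark above: sum an 8x1024x128 tensor
# across its 3rd axis (axis=2).
x = np.random.rand(8, 1024, 128)
ms = measure_runtime_ms(lambda: x.sum(axis=2))
```

Taking the median over several repeats makes the measurement less sensitive to one-off scheduling noise than a single timed run.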

`compare.py` runs several benchmarks and reports the speed-up, or lack thereof, of MLX relative to PyTorch.
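The comparison boils down to a ratio of runtimes. As a hypothetical sketch (the real `compare.py` may format and threshold its output differently), the `speedup` helper below is assumed, not taken from the source:

```python
def speedup(torch_ms: float, mlx_ms: float) -> float:
    """Ratio of PyTorch runtime to MLX runtime for the same benchmark.
    Values above 1.0 indicate MLX is faster; below 1.0, a regression."""
    return torch_ms / mlx_ms

# If PyTorch takes 2.0 ms and MLX takes 1.0 ms, MLX shows a 2x speed-up.
ratio = speedup(2.0, 1.0)
```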

Each bench script accepts `--print-pid` to print its PID and wait for a keypress, which makes it easier to attach a debugger before the benchmark starts.
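The pattern behind `--print-pid` is straightforward: report the process ID, then block until the user has attached their debugger. A minimal sketch of that idea (the helper name and exact prompt are assumptions, not the scripts' actual code):

```python
import os

def maybe_wait_for_debugger(print_pid: bool):
    """Hypothetical helper mirroring the --print-pid behavior: print the
    PID so a debugger (e.g. lldb -p <pid>) can attach, then block on
    stdin until a key is pressed. Returns the PID, or None if disabled."""
    if not print_pid:
        return None
    pid = os.getpid()
    print(f"PID: {pid}")
    input("Press enter to continue...")  # blocks until a keypress
    return pid
```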