# CIFAR and ResNets
An example of training a ResNet on CIFAR-10 with MLX. Several ResNet
configurations in accordance with the original
[paper](https://arxiv.org/abs/1512.03385) are available. The example also
illustrates how to use [MLX Data](https://github.com/ml-explore/mlx-data) to
load the dataset.
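
For a concrete picture of what that looks like, here is a minimal sketch of
loading CIFAR-10 with MLX Data. It assumes the `mlx.data.datasets.load_cifar10`
helper and the buffer/stream API; the example's own `dataset.py` may differ in
detail (for instance, it can add data augmentation):

```
# Sketch only: load CIFAR-10 with MLX Data and build batched iterators.
# Assumes mlx.data.datasets.load_cifar10; the example's own data
# pipeline may differ in detail.
from mlx.data.datasets import load_cifar10


def cifar_iters(batch_size=256):
    train = load_cifar10(train=True)  # buffer of {"image", "label"} samples
    test = load_cifar10(train=False)

    def normalize(x):
        # Scale uint8 pixels to floats in [0, 1]
        return x.astype("float32") / 255.0

    tr_iter = (
        train.shuffle()  # reshuffle the buffer
        .to_stream()
        .key_transform("image", normalize)
        .batch(batch_size)
        .prefetch(4, 4)  # overlap data loading with compute
    )
    test_iter = (
        test.to_stream().key_transform("image", normalize).batch(batch_size)
    )
    return tr_iter, test_iter
```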
## Pre-requisites
Install the dependencies:
```
pip install -r requirements.txt
```
## Running the example
Run the example with:
```
python main.py
```

For all available options, run:

```
python main.py --help
```
## Throughput
On the tested device (M1 MacBook Pro, 16GB RAM), I get the following throughput with `batch_size=256`:
```
Epoch: 0 | avg. tr_loss 2.074 | avg. tr_acc 0.216 | Train Throughput: 415.39 images/sec
```
When training on just the CPU (with the `--cpu` argument), the throughput is significantly lower (roughly 30x):
```
Epoch: 0 | avg. tr_loss 2.074 | avg. tr_acc 0.216 | Train Throughput: 13.5 images/sec
```
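
The `--cpu` flag presumably just changes MLX's default device. A minimal
sketch of that mechanism, using the public `mlx.core.set_default_device` API:

```
# Sketch: route all computation to the CPU instead of the GPU.
# Presumably what the example's --cpu flag does internally.
import mlx.core as mx

mx.set_default_device(mx.cpu)  # the default on Apple silicon is mx.gpu
```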
## Results
After training with the default `resnet20` architecture for 100 epochs, you
should see the following results:
```
Epoch: 99 | avg. Train loss 0.320 | avg. Train acc 0.888 | Throughput: 416.77 images/sec
Epoch: 99 | Test acc 0.807
```
Note this was run on an M1 MacBook Pro with 16GB RAM.

At the time of writing, `mlx` has neither built-in learning rate schedules
nor a `BatchNorm` layer. We intend to update this example once these features
are added.
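
In the meantime, a schedule can be approximated by hand. A minimal sketch,
assuming an `mlx.optimizers` optimizer whose `learning_rate` attribute can be
reassigned between epochs (`train_epoch` is a hypothetical stand-in for the
example's training loop):

```
# Sketch of a manual step-decay learning rate schedule, pending
# built-in schedulers. Assumes optimizer.learning_rate is mutable;
# train_epoch is a hypothetical stand-in for the training loop.
import mlx.optimizers as optim

optimizer = optim.SGD(learning_rate=0.1)


def step_decay(epoch, base_lr=0.1, drop=0.1, every=50):
    # Multiply the base rate by `drop` once every `every` epochs
    return base_lr * (drop ** (epoch // every))


for epoch in range(100):
    optimizer.learning_rate = step_decay(epoch)
    # train_epoch(model, optimizer, tr_iter)  # hypothetical
```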