1 min read · May 23, 2019
If you run out of GPU memory, reduce the batch size by half until everything fits. Usually, the bigger the batch size, the faster training goes, but the more memory it takes. In a separate terminal you can run nvidia-smi to watch GPU memory usage while training.
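The halving strategy can be sketched in plain Python. This is just an illustration: the `fits` check, the per-sample cost, and the memory budget below are made-up stand-ins for an actual training step that either succeeds or raises an out-of-memory error.

```python
def shrink_batch_size(batch_size, fits):
    """Halve batch_size until fits(batch_size) returns True (or we hit 1)."""
    while batch_size > 1 and not fits(batch_size):
        batch_size //= 2
    return batch_size

# Hypothetical numbers: ~50 MB of GPU memory per sample, ~8000 MB free.
fits = lambda bs: bs * 50 <= 8000

print(shrink_batch_size(256, fits))  # 256 -> 128, which fits the budget
```

In a real training loop the `fits` check would be replaced by attempting a forward/backward pass and catching the framework's out-of-memory error.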