Chris Fotache
1 min read · May 23, 2019


If you run out of GPU memory, halve the batch size until the model fits. Usually, the bigger the batch size, the faster training runs, but the more memory it consumes. In a separate terminal you can run nvidia-smi to watch the GPU memory usage.
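The halving strategy can be sketched as a small retry loop. This is a minimal sketch, not the article's code: `train_step` and the simulated 16-sample memory limit are hypothetical stand-ins; in a real PyTorch run you would catch `torch.cuda.OutOfMemoryError` instead of `MemoryError`.

```python
def shrink_until_fits(train_step, batch_size):
    """Halve batch_size until train_step succeeds.

    train_step is assumed to raise MemoryError when the batch
    does not fit on the GPU (torch.cuda.OutOfMemoryError in a
    real PyTorch training loop).
    """
    while batch_size >= 1:
        try:
            train_step(batch_size)   # try one training step at this size
            return batch_size        # it fit; keep this batch size
        except MemoryError:
            batch_size //= 2         # did not fit; halve and retry
    raise RuntimeError("even batch size 1 does not fit in memory")

# Simulated check: pretend anything above 16 samples runs out of memory.
def fake_step(bs):
    if bs > 16:
        raise MemoryError

fits = shrink_until_fits(fake_step, 128)  # tries 128 -> 64 -> 32 -> 16
```

With the simulated limit above, the loop settles on a batch size of 16; on real hardware the stopping point depends on the model and the GPU.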

Written by Chris Fotache

AI researcher at CYNET.ai, writing about artificial intelligence, Python programming, machine learning, computer vision, robotics, natural language processing
