Clearing CUDA memory on Kaggle

Sometimes when running a PyTorch model on a GPU on Kaggle, we get the error "RuntimeError: CUDA out of memory. Tried to allocate …"

Clear the cached memory with this command:

torch.cuda.empty_cache()
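Note that `empty_cache()` only releases cached blocks that no live tensor still references, so any Python references to the tensors must be dropped first. A minimal sketch (the `model` name is hypothetical, standing in for whatever object holds GPU tensors in your notebook):

```python
import gc

import torch

# Hypothetical placeholder: `model` stands for whatever object still
# references GPU tensors in your notebook session.
model = torch.nn.Linear(4, 4)

# 1. Drop every Python reference to the GPU tensors ...
del model
# 2. ... let the garbage collector reclaim the objects ...
gc.collect()
# 3. ... then return the now-unused cached blocks to the driver.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```

If memory is still not freed, restarting the notebook kernel is the reliable fallback.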

Check CUDA memory usage:

!pip install GPUtil
from GPUtil import showUtilization as gpu_usage
gpu_usage()

Output:

| ID | GPU | MEM |
|----|-----|-----|
| 0  | 0%  | 0%  |
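PyTorch also ships its own memory counters, so you can check usage without installing GPUtil. A small sketch using `torch.cuda.memory_allocated()` and `torch.cuda.memory_reserved()`:

```python
import torch

if torch.cuda.is_available():
    # Bytes held by live tensors vs. bytes cached by PyTorch's allocator.
    allocated = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    print(f"allocated: {allocated:.1f} MiB, reserved: {reserved:.1f} MiB")
else:
    print("CUDA not available")
```

A large gap between reserved and allocated is exactly what `torch.cuda.empty_cache()` gives back to the driver.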
