|
|
|
|
|
## Limiting GPU Memory in PyTorch
|
|
|
|
|
|
Unlike TensorFlow, PyTorch does not currently offer an easy way to limit the GPU memory of your scripts from Python. When training a deep neural network, the easiest practical way to reduce memory usage is to **reduce the batch size**.
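As a minimal sketch of this idea (the dataset sizes and batch size below are purely illustrative), halving the `batch_size` passed to a `DataLoader` roughly halves the activation memory used per training step:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# A toy dataset of 1000 samples with 16 features each (illustrative sizes).
dataset = TensorDataset(torch.randn(1000, 16), torch.randint(0, 2, (1000,)))

# If batch_size=64 runs out of GPU memory, try 32, then 16, and so on.
loader = DataLoader(dataset, batch_size=32, shuffle=True)

features, labels = next(iter(loader))
print(features.shape)  # torch.Size([32, 16])
```

Smaller batches trade training speed (and sometimes convergence behavior) for a lower peak memory footprint, so reduce the batch size only as far as needed.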
|
|
|
As with TensorFlow, PyTorch GPU memory should be controlled to avoid crashing other users' scripts. To do so, the following function should be called at the beginning of the script:
|
|
|
|
|
|
```python
torch.cuda.set_per_process_memory_fraction(fraction, device=None)
```

where `fraction` is the fraction (a value between 0 and 1) of the target device's total memory that the script is allowed to use. More information can be found [here](https://pytorch.org/docs/stable/generated/torch.cuda.set_per_process_memory_fraction.html#torch.cuda.set_per_process_memory_fraction).

However, it is not trivial to estimate in advance the total memory occupied by your model, so be very careful in your initial tests to avoid crashing other users' scripts. Always remember to monitor your GPU usage as explained in the ["Monitoring Resources"](/monitoring) page.
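A minimal sketch of calling this at the start of a script, assuming a single CUDA device and an illustrative fraction of 0.5 (pick a value that fits your model):

```python
import torch

# Illustrative value: allow this process at most half of the GPU's memory.
fraction = 0.5

if torch.cuda.is_available():
    # Call this before any large allocation on the device; attempts to
    # allocate beyond the cap will raise an out-of-memory error.
    torch.cuda.set_per_process_memory_fraction(fraction, device=0)
```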
|
|
|
|
|
|
**N.B.**: when using multiple GPUs, the above function must be called for each device.
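Since the limit is per device, a multi-GPU script can set it in a loop over all visible devices; a sketch with an illustrative fraction of 0.25:

```python
import torch

# Illustrative value: cap every visible GPU at a quarter of its memory.
fraction = 0.25

# The limit applies per device, so it must be set on each GPU separately.
for device_id in range(torch.cuda.device_count()):
    torch.cuda.set_per_process_memory_fraction(fraction, device=device_id)
```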
|
|
|
|
|
|
Thanks to Francesco Daghero for this guide.
|
|
|
|
|
|
## Other Tools and Libraries
|
|
|
|
|
|
If you are currently working with another program or library that uses the GPU and you can share some code or some tips on how to limit the GPU memory usage, please contact the sysadmins. We will be glad to add your suggestions to this page.
|
|
|
|
|