import tensorflow as tf

lim = 2048  # memory limit in MB (2 GB)
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    for gpu in gpus:
        tf.config.experimental.set_virtual_device_configuration(
            gpu,
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=lim)])
```
|
|
|
|
|
|
|
|
|
This code first checks whether your script can see any GPUs. If so, it limits the GPU memory requested on each of them to `lim` MB (2 GB in the example). If you are using a single GPU, you can simplify this by removing the for loop and setting the limit just for `gpus[0]`.
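For instance, the single-GPU variant could look like this (a sketch, with `lim` set to 2048 MB as in the example above):

```python
import tensorflow as tf

lim = 2048  # memory limit in MB (2 GB), as in the example above
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:  # on a CPU-only machine this list is empty and nothing happens
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=lim)])
```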
|
|
|
|
|
|
## Limiting GPU Memory in PyTorch
|
|
|
|
|
|
Unlike TensorFlow, PyTorch does not currently offer an easy way to limit the GPU memory of your scripts from Python. In the case of deep neural network training, the easiest practical way to obtain this reduction is to **reduce the batch size**.
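As a sketch of what this looks like in practice (the dataset and tensor shapes here are made up for illustration), the batch size is just the `batch_size` argument of the `DataLoader`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for your real data (shapes are illustrative only).
dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 10, (1024,)))

# batch_size is the main knob: smaller batches mean smaller activation
# tensors on the GPU, at the cost of more iterations per epoch.
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for inputs, targets in loader:
    # inputs has shape (16, 32); halving batch_size roughly halves
    # the per-step activation memory of the forward/backward pass.
    break
```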
|
|
|
|
|
|
However, it is not trivial to estimate in advance the total memory occupied by your model, so you have to be very careful in your initial tests to avoid crashing other users' scripts. Always remember to monitor your GPU usage as explained in the ["Monitoring Resources"](/monitoring) page.
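From inside the script itself, a quick way to check how much memory your process has actually allocated is PyTorch's own memory counters; `log_gpu_memory` below is a hypothetical helper name, not part of any library:

```python
import torch

def log_gpu_memory(tag=""):
    """Print current and peak GPU memory allocated by this process.

    Hypothetical helper for illustration; it relies only on the standard
    torch.cuda memory counters.
    """
    if torch.cuda.is_available():
        cur = torch.cuda.memory_allocated() / 1024**2
        peak = torch.cuda.max_memory_allocated() / 1024**2
        print(f"[{tag}] allocated: {cur:.1f} MB, peak: {peak:.1f} MB")
    else:
        print(f"[{tag}] no GPU visible from this process")
```

Note that these counters only track memory allocated through PyTorch's caching allocator, so `nvidia-smi` may report a higher total for your process.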
|
|
|
|
|
|
## Other Tools and Libraries
|
|
|
|
|
|
If you are currently working with another program or library that uses the GPU and you can share some code or some tips on how to limit the GPU memory usage, please contact the sysadmins. We will be glad to add your suggestions to this page.
|
|
|
|