```python
import tensorflow as tf

lim = 2048  # memory limit in MB (2 GB)

# Limit the memory that TensorFlow may allocate on each visible GPU.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    for gpu in gpus:
        tf.config.experimental.set_virtual_device_configuration(
            gpu,
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=lim)])
```
|
|
|
|
|
|
This code first checks whether your script can see any GPUs. If so, it limits the GPU memory requested on each of them to `lim` MB (2 GB in the example). If you are using a single GPU, you can simplify this by removing the for loop and setting the limit just for `gpus[0]`, as in the sketch below.
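A minimal sketch of that single-GPU variant (assuming the same 2 GB limit as above):

```python
# Single-GPU variant: apply the limit only to the first visible GPU.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048)])
```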
|
|
|
|
|
|
## Limiting GPU Memory in PyTorch
|
|
|
|
|
|
Unlike TensorFlow, PyTorch does not currently offer an easy way to limit the GPU memory used by your scripts from Python. If you are training a deep neural network, the easiest practical way to reduce GPU memory usage is to **reduce the batch size**.
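For instance, with a standard `DataLoader` the batch size is just a constructor argument (the dataset below is a random placeholder, not from this guide):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: 1000 random RGB images with integer labels.
dataset = TensorDataset(torch.randn(1000, 3, 32, 32),
                        torch.randint(0, 10, (1000,)))

# A smaller batch_size directly lowers the peak GPU memory used per step.
loader = DataLoader(dataset, batch_size=16, shuffle=True)
```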
|
|
As with TensorFlow, PyTorch GPU memory usage should be kept under control to avoid crashing other users' scripts. To do so, call the following function at the beginning of your script:
|
|
|
|
|
|
|
|
```python
torch.cuda.set_per_process_memory_fraction(fraction, device=None)
```
|
|
|
|
|
|
|
|
where `fraction` is a value between 0 and 1 specifying the fraction of the target device's memory that the process is allowed to use. More information can be found [here](https://pytorch.org/docs/stable/generated/torch.cuda.set_per_process_memory_fraction.html#torch.cuda.set_per_process_memory_fraction).
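For example, a sketch that caps the process at half of GPU 0's memory (the 0.5 value is just an illustration, not a recommended setting):

```python
import torch

# Allow this process to allocate at most 50% of GPU 0's memory.
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)
```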
|
|
|
|
|
|
|
|
**N.B.**: when using multiple GPUs, the above function must be called once for each device, as in the sketch below.
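A minimal sketch, assuming you want the same cap on every visible GPU:

```python
import torch

# Apply the same memory fraction to every visible GPU.
for dev in range(torch.cuda.device_count()):
    torch.cuda.set_per_process_memory_fraction(0.5, device=dev)
```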
|
|
|
|
|
|
|
|
However, it is not trivial to estimate in advance the total memory occupied by your model, so be very careful during your initial tests to avoid crashing other users' scripts. Always remember to monitor your GPU usage as explained in the ["Monitoring Resources"](/monitoring) page.

Thanks to Francesco Daghero for this guide.
|
|
|
|
|
|
|
|
## Other Tools and Libraries
|
|
|
|
|
|
|
|
If you are currently working with another program or library that uses the GPU and you can share some code or tips on how to limit its GPU memory usage, please contact the sysadmins. We will be glad to add your suggestions to this page.
|
|
|