> :warning: Thesis students only have access to `philae.polito.it`, which does not have a GPU installed (see ["Servers Information"](/servers)). This page is therefore relevant only to other group members.
As explained in the ["Monitoring Resources"](/monitoring) page, GPU memory is a critical resource, since most programs simply crash when the GPU runs out of memory. It is your responsibility to ensure that you do not monopolize the entire memory of a GPU for long periods.
This page lists a set of tricks to limit your GPU memory usage in commonly used applications and libraries.
## Limiting GPU Memory in TensorFlow 2.0
If you use TensorFlow 2.0 (or Keras with a TensorFlow backend), you can limit the total GPU memory requested by your script directly from the Python code. The basic code snippet to insert in your script is the following:
```python
import tensorflow as tf

# Amount of GPU memory (in MB) to reserve for this process;
# adjust it to what your job actually needs.
megabytes = 2048

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=megabytes)]
    )
```
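In more recent TensorFlow 2.x releases, the same limit can be set through the stable (non-`experimental`) API. A minimal sketch, assuming TensorFlow ≥ 2.4 (the 2048 MB limit is an arbitrary example value):

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Create a single logical device on the first GPU, capped at 2048 MB
    # (example value; choose what your job actually needs).
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=2048)]
    )
```

Note that this must run before the GPU is first used by the script; TensorFlow raises an error if you try to reconfigure a device after initialization.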
## Limiting GPU Memory in PyTorch
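PyTorch does not expose a hard per-process cap in megabytes, but recent versions (≥ 1.8) provide `torch.cuda.set_per_process_memory_fraction`, which limits the fraction of a GPU's total memory that the caching allocator may use. A minimal sketch (the 0.25 fraction is an arbitrary example value):

```python
import torch

if torch.cuda.is_available():
    # Cap this process at 25% of GPU 0's total memory (example value);
    # allocations beyond the cap raise an out-of-memory error.
    torch.cuda.set_per_process_memory_fraction(0.25, device=0)
```

Also keep in mind that PyTorch allocates GPU memory lazily and caches freed blocks, so `nvidia-smi` may report more usage than your tensors actually need; `torch.cuda.empty_cache()` releases unused cached memory back to the driver.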