gpu-limit.md
If you use TensorFlow 2.0 (or Keras with a TensorFlow backend), you can limit the GPU memory used by your script as follows:
```python
import tensorflow as tf

lim = 2048  # megabytes
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    for gpu in gpus:
        tf.config.experimental.set_virtual_device_configuration(gpu, [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=lim)])
```
This code first checks whether your script is using any GPUs. If so, it limits the GPU memory requested on each device to `lim` MB (2 GB in the example). If you are using a single GPU, you can simplify this by removing the for loop and setting the limit just for `gpus[0]`, as in the sketch below.
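A minimal sketch of that single-GPU variant (assuming at least one GPU is visible and reusing the `lim` value from above) could look like this:

```python
import tensorflow as tf

lim = 2048  # megabytes
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Single-GPU case: configure only the first visible device
    tf.config.experimental.set_virtual_device_configuration(gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=lim)])
```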
## Limiting GPU Memory in PyTorch
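One possible way to cap per-process GPU memory in PyTorch (a sketch only, assuming PyTorch 1.8 or newer) is `torch.cuda.set_per_process_memory_fraction`, which limits the fraction of a device's memory that the process's caching allocator may use:

```python
import torch

lim = 2048  # megabytes, matching the TensorFlow example above
if torch.cuda.is_available():
    total = torch.cuda.get_device_properties(0).total_memory  # total bytes on GPU 0
    # Cap allocations of this process on GPU 0 to roughly `lim` MB
    torch.cuda.set_per_process_memory_fraction(lim * 1024 ** 2 / total, device=0)
```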