Message boards : Number crunching : RuntimeError: Unable to find a valid cuDNN algorithm to run convolution when running python
I have an Nvidia GTX 1650.
ID: 58983
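For what it's worth, the cuDNN message in the thread title is very often an out-of-memory condition in disguise: cuDNN tries several convolution algorithms, each of which needs workspace memory, and when the card is nearly full none of them fit. Below is a minimal sketch (the layer and tensor sizes are made up, not taken from the app) showing where the error surfaces in PyTorch, plus one knob that sometimes helps: disabling the cuDNN benchmark autotuner so it doesn't try the most workspace-hungry algorithms.

```python
import torch
import torch.nn as nn

# Skip the cuDNN autotuner; it probes workspace-hungry algorithms and can
# trigger the "Unable to find a valid cuDNN algorithm" error on a full card.
torch.backends.cudnn.benchmark = False

device = torch.device("cuda")
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1).to(device)

# Illustrative batch; on a nearly full 4 GB card, a forward pass like this is
# where the cuDNN error (or a plain CUDA OOM) would appear.
x = torch.randn(8, 64, 512, 512, device=device)
y = conv(x)
print(y.shape)
```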
Another task failed with:

RuntimeError: CUDA out of memory. Tried to allocate 28.00 MiB (GPU 0; 4.00 GiB total capacity; 1.32 GiB already allocated; 1011.70 MiB free; 1.36 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ID: 58984
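The hint at the end of that message refers to PyTorch's caching allocator: 1.36 GiB is reserved but apparently fragmented, so even a 28 MiB block can't be placed. max_split_size_mb is passed through the PYTORCH_CUDA_ALLOC_CONF environment variable and must be in place before CUDA is initialized, so for a project task it would have to be set system-wide rather than inside a script we control. A minimal sketch, with 128 MB as an example value only:

```python
import os

# Must be set before torch initializes CUDA. Blocks larger than this size
# (in MB) are never split by the caching allocator, which reduces the kind
# of fragmentation described in the error message.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

x = torch.empty(1024, 1024, device="cuda")  # allocations now follow the new policy
```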
First off, I would say that the Python apps seem to have a high error rate. I'm seeing about 40% failures on my Windows systems without finding a good reason why. There could be a cause for this, but it might also just be normal.
ID: 58987
There's a problem with how Windows allocates virtual memory for Python libraries.
ID: 58988
One task also crashed with the same out-of-memory error:

CUDA out of memory. Tried to allocate 28.00 MiB (GPU 0; 4.00 GiB total capacity; 1.32 GiB already allocated; 1011.70 MiB free; 1.36 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ID: 58993
The GTX 1650 is a 4GB card so it should have plenty of memory for the Python app. There is something else going on there.
ID: 58994
> The GTX 1650 is a 4GB card so it should have plenty of memory for the Python app. There is something else going on there.

From what I remember, the Python app was using more than 4 GB of VRAM. It's definitely possible that 4 GB isn't enough.
ID: 58996
That would be an interesting development. From what I have been gathering, the Python app is not putting much of a load on the GPU, though I'm not sure about the actual memory usage.

> CUDA out of memory. Tried to allocate 28.00 MiB (GPU 0; 4.00 GiB total capacity; 1.32 GiB already allocated; 1011.70 MiB free; 1.36 GiB reserved in total by PyTorch)

That actually looks more like a memory error related to CUDA or the driver, not the memory capacity of the card.
ID: 58998
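To separate a genuine capacity problem from allocator fragmentation, PyTorch exposes the same counters that appear in the OOM message. A small helper one could drop into a test script (a sketch only; report_cuda_memory is not part of the project's app):

```python
import torch

def report_cuda_memory(device: int = 0) -> None:
    """Print the per-process allocator counters quoted in the OOM message."""
    allocated = torch.cuda.memory_allocated(device)  # memory held by live tensors
    reserved = torch.cuda.memory_reserved(device)    # memory cached by the allocator
    free, total = torch.cuda.mem_get_info(device)    # what the driver reports for the card
    mib = 2 ** 20
    print(f"allocated by tensors : {allocated / mib:8.1f} MiB")
    print(f"reserved by PyTorch  : {reserved / mib:8.1f} MiB")
    print(f"free on device       : {free / mib:8.1f} MiB of {total / mib:.1f} MiB")

report_cuda_memory()

# If reserved is much larger than allocated, the cached blocks can be
# returned to the driver with torch.cuda.empty_cache().
```

If reserved is far above allocated while the device still reports plenty of free memory, that points at fragmentation (the max_split_size_mb case) rather than a card that is simply too small.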
The memory utilization seems to be constant on my GPUs when they are running a Python task. Currently using 3349 MB out of the 8 GB on the card.
ID: 58999
I found a few tasks running on my Windows servers and checked them with GPU-Z. The GPU memory used was between 2518 and 3287 MB. I think with that usage these should run OK on a 4GB card.
ID: 59000
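For anyone who wants to cross-check those GPU-Z numbers without installing anything extra, nvidia-smi reports the same device-wide figure. A sketch that polls it from Python (assumes a single GPU; the query flags are standard nvidia-smi options):

```python
import subprocess

# Device-wide memory use, i.e. across all processes on the GPU,
# the same figure GPU-Z shows.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
)
first_gpu = out.stdout.strip().splitlines()[0]  # one line per GPU; take the first
used_mb, total_mb = (int(v) for v in first_gpu.split(","))
print(f"{used_mb} MiB used of {total_mb} MiB")
```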