Specific environment:
Ubuntu 16.04
PyTorch 1.5.0
CUDA 10.0
While running PyTorch code, I encountered the following error:
RuntimeError: CUDA error: out of memory
I tried many of the fixes suggested online, but none of them worked. Then I remembered that a similar piece of code I had run before contained a line like this, so I tried adding the following two lines to my code:
```python
import torch.backends.cudnn as cudnn
cudnn.benchmark = True
```
Fortunately, my code now runs properly. Of course, this is only one of many possible fixes; you may need to adapt the solution to the specifics of your own code.
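For context, here is a minimal sketch of where these lines typically go in a training script. The model, optimizer, and shapes below are illustrative only, not from the original code. Setting `cudnn.benchmark = True` lets cuDNN auto-tune its convolution algorithms for fixed input sizes, which can change the workspace memory it allocates; reducing batch size and disabling autograd during evaluation are other common ways to lower CUDA memory use:

```python
import torch
import torch.backends.cudnn as cudnn

# Let cuDNN pick the fastest algorithm for the observed input sizes.
# This can also change how much workspace memory cuDNN allocates.
cudnn.benchmark = True

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical model and optimizer, for illustration only.
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(3):
    # Smaller batches are another simple way to reduce memory pressure.
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

# During evaluation, disabling autograd avoids storing activations.
with torch.no_grad():
    preds = model(torch.randn(4, 128, device=device)).argmax(dim=1)
```

The script falls back to CPU when no GPU is available, so the `cudnn.benchmark` setting simply has no effect in that case.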