While running code with PyTorch, I encountered the following error:
RuntimeError: CUDA error: out of memory
I searched for many fixes online, but none of them worked. Then I remembered that a similar script I had run before contained a line like this, so I tried adding the following two lines to my code:
import torch.backends.cudnn as cudnn
cudnn.benchmark = True
Fortunately, my code now runs properly. Of course, this is only one of many possible fixes; you will likely need to adapt the solution to the specifics of your own code.
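As a minimal sketch of where those two lines would go, the snippet below enables the cuDNN benchmark flag before any model code runs. It is guarded so it also runs on a machine without PyTorch installed; the guard and the `benchmark_enabled` variable are illustrative additions, not part of the original fix.

```python
# Minimal sketch: enable cuDNN autotuning early in a PyTorch script.
# The try/except guard is only so this snippet runs even where PyTorch
# is not installed; in a real script you would just use the two lines
# inside the try block.
try:
    import torch.backends.cudnn as cudnn

    # Let cuDNN benchmark available convolution algorithms and pick
    # the fastest one for the current input sizes.
    cudnn.benchmark = True
    benchmark_enabled = cudnn.benchmark
except ImportError:
    benchmark_enabled = None  # PyTorch not available in this environment

print(benchmark_enabled)
```

Note that `cudnn.benchmark = True` is mainly a performance setting; whether it helps with an out-of-memory error can depend on your model and input shapes, so treat it as one thing to try rather than a guaranteed fix.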