The error means that the input and the model's weights are on different devices: the input is a CUDA tensor (torch.cuda.FloatTensor) while the weights are still CPU tensors (torch.FloatTensor), and PyTorch requires both to be of the same type.
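For reference, here is a minimal sketch that triggers the mismatch, assuming a CUDA-capable machine and a stand-in nn.Conv2d layer (the exact wording of the RuntimeError can vary between PyTorch versions):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)   # weights are torch.FloatTensor (on the CPU)
x = torch.randn(1, 3, 32, 32).cuda()    # input is torch.cuda.FloatTensor (on the GPU)

try:
    conv(x)                             # input and weight are on different devices
except RuntimeError as e:
    print(e)                            # "Input type ... and weight type ... should be the same"
```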
Solution:
Move the network model to the GPU (by calling .cuda() on it) before using the model, so that its weights live on the same device as the input. For example, with model_class = yourModelName():
Old version (weights stay on the CPU):
model_class(x)
New version (move the weights to the GPU first):
model_class.cuda()
model_class(x)
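Putting it together, below is a minimal, self-contained sketch of the fix. YourModelName is a hypothetical stand-in for the post's yourModelName placeholder; the essential step is calling .cuda() on the model before the forward pass:

```python
import torch
import torch.nn as nn

class YourModelName(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

model_class = YourModelName()
model_class.cuda()                      # moves all parameters and buffers to the GPU

x = torch.randn(1, 3, 32, 32).cuda()    # the input must be a CUDA tensor as well
out = model_class(x)                    # weights and input now match
print(out.shape)                        # torch.Size([1, 8, 30, 30])
```

An equivalent, more portable pattern is device = torch.device("cuda" if torch.cuda.is_available() else "cpu"), then model_class.to(device) and x = x.to(device), which also runs on machines without a GPU.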