```python
# First check the size of the incoming tensor; if it does not have enough
# dimensions, add one with unsqueeze().
# Here labels is tensor([n]), so a dimension must be added before passing it in.
# If the error above occurs, appending .to(torch.int64) solves it, because
# one_hot() only accepts index tensors of dtype int64 (Long).
# n is the number of classes.
labels = torch.nn.functional.one_hot(labels.unsqueeze(0).to(torch.int64), n)
```
Personally tested and working!
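For reference, here is a minimal self-contained sketch of the same fix, with hypothetical values (`n = 3` classes and a small float-typed `labels` tensor) standing in for whatever your code actually produces:

```python
import torch
import torch.nn.functional as F

n = 3                                   # number of classes (hypothetical)
labels = torch.tensor([0.0, 2.0, 1.0])  # float labels, shape torch.Size([3])

# F.one_hot(labels, n)                  # raises RuntimeError: one_hot expects an int64 (Long) tensor

# Add a dimension and cast to int64 before calling one_hot()
one_hot = F.one_hot(labels.unsqueeze(0).to(torch.int64), n)
print(one_hot.shape)                    # torch.Size([1, 3, 3])
```

Note that `unsqueeze(0)` is only needed if the rest of your pipeline expects the extra batch dimension; the `.to(torch.int64)` cast alone is what resolves the dtype error.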