# First check the shape of the incoming tensor: if it has too few dimensions, add one with unsqueeze(0).
# Here labels has size torch.Size([n]), so a dimension is added before passing it in.
# If a dtype error occurs, appending .to(torch.int64) solves it, since one_hot only accepts index (int64) tensors.
# n is the number of classes.
labels = torch.nn.functional.one_hot(labels.unsqueeze(0).to(torch.int64), n)
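A minimal runnable sketch of the dtype fix described above (the label values and class count here are hypothetical, chosen just for illustration). A float tensor passed to `one_hot` raises a runtime error; casting with `.to(torch.int64)` resolves it:

```python
import torch
import torch.nn.functional as F

# Hypothetical data: 4 labels drawn from a 5-class problem,
# accidentally stored as floats (a common source of the error).
labels = torch.tensor([0.0, 2.0, 1.0, 4.0])

# F.one_hot only accepts index (int64 / Long) tensors, so cast first.
one_hot = F.one_hot(labels.to(torch.int64), num_classes=5)
print(one_hot.shape)  # torch.Size([4, 5])
```

If a leading batch dimension is also needed, `labels.unsqueeze(0)` before the cast yields an output of shape `[1, 4, 5]` instead.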
Personally tested, this works!