
torch.nn.functional.cross_entropy

torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0) [source]

This criterion computes the cross entropy loss between input logits and target.

See CrossEntropyLoss for details.

Parameters
  • input (Tensor) – Predicted unnormalized logits; see Shape section below for supported shapes.
  • target (Tensor) – Ground truth class indices or class probabilities; see Shape section below for supported shapes.
  • weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C.
  • size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
  • ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Note that ignore_index is only applicable when the target contains class indices. Default: -100
  • reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
  • reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'. See the sketch after this list for 'none' in combination with weight and ignore_index.
  • label_smoothing (float, optional) – A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets become a mixture of the original ground truth and a uniform distribution as described in Rethinking the Inception Architecture for Computer Vision. Default: 0.0.
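For concreteness, a minimal sketch (with arbitrary tensor values) of how weight, ignore_index, and reduction interact; the element whose target equals ignore_index contributes zero loss:

>>> import torch
>>> import torch.nn.functional as F
>>>
>>> input = torch.randn(4, 3, requires_grad=True)  # N=4 samples, C=3 classes
>>> target = torch.tensor([0, 2, -100, 1])         # third element is ignored
>>> weight = torch.tensor([1.0, 2.0, 0.5])         # per-class rescaling, size C
>>> loss = F.cross_entropy(input, target, weight=weight,
...                        ignore_index=-100, reduction='none')
>>> loss.shape                                     # one loss per element; the ignored entry is 0
torch.Size([4])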
Return type

Tensor

Shape:
  • Input: Shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss; see the sketch after this section.
  • Target: If containing class indices, shape (), (N) or (N, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss, where each value should be in [0, C). If containing class probabilities, same shape as the input and each value should be in [0, 1].

where:

  C = number of classes
  N = batch size
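As a sketch of the K-dimensional case (here K = 2, e.g. dense prediction over an H x W grid; the sizes are arbitrary):

>>> import torch
>>> import torch.nn.functional as F
>>>
>>> N, C, H, W = 2, 5, 4, 4
>>> input = torch.randn(N, C, H, W, requires_grad=True)      # (N, C, d_1, d_2)
>>> target = torch.randint(C, (N, H, W), dtype=torch.int64)  # (N, d_1, d_2) of class indices
>>> loss = F.cross_entropy(input, target)                    # averaged over all N*H*W elements
>>> loss.backward()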

Examples:

>>> import torch
>>> import torch.nn.functional as F
>>>
>>> # Example of target with class indices
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randint(5, (3,), dtype=torch.int64)
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
>>>
>>> # Example of target with class probabilities
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5).softmax(dim=1)
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
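
A further sketch in the same style, showing label_smoothing with class-index targets (the smoothing value 0.1 is an arbitrary choice):

>>> # Example of label smoothing: targets become a mixture of the ground
>>> # truth and a uniform distribution over the C classes
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randint(5, (3,), dtype=torch.int64)
>>> loss = F.cross_entropy(input, target, label_smoothing=0.1)
>>> loss.backward()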
