HuberLoss
class torch.nn.HuberLoss(reduction='mean', delta=1.0)[source]
Creates a criterion that uses a squared term if the absolute element-wise error falls below delta and a delta-scaled L1 term otherwise. This loss combines advantages of both L1Loss and MSELoss; the delta-scaled L1 region makes the loss less sensitive to outliers than MSELoss, while the L2 region provides smoothness over L1Loss near 0. See Huber loss for more information.

For a batch of size $N$, the unreduced loss can be described as:

$$\ell(x, y) = L = \{l_1, \dots, l_N\}^T$$

with

$$l_n = \begin{cases} 0.5 (x_n - y_n)^2 & \text{if } |x_n - y_n| < \text{delta} \\ \text{delta} \cdot (|x_n - y_n| - 0.5 \cdot \text{delta}) & \text{otherwise} \end{cases}$$

If reduction is not 'none', then:

$$\ell(x, y) = \begin{cases} \operatorname{mean}(L) & \text{if reduction} = \text{'mean'} \\ \operatorname{sum}(L) & \text{if reduction} = \text{'sum'} \end{cases}$$

Note
When delta is set to 1, this loss is equivalent to SmoothL1Loss. In general, this loss differs from SmoothL1Loss by a factor of delta (AKA beta in Smooth L1). See SmoothL1Loss for additional discussion on the differences in behavior between the two losses.

Parameters
- reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'
- delta (float, optional) – Specifies the threshold at which to change between delta-scaled L1 and L2 loss. The value must be positive. Default: 1.0
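The piecewise definition above can be sketched in plain Python. This is a minimal reference implementation for illustration only, not the PyTorch source; `huber_loss` is a hypothetical helper, and real code should use torch.nn.HuberLoss, which operates on tensors.

```python
def huber_loss(preds, targets, delta=1.0, reduction='mean'):
    """Element-wise Huber loss following the piecewise definition above."""
    losses = []
    for x, y in zip(preds, targets):
        err = abs(x - y)
        if err < delta:
            losses.append(0.5 * err ** 2)               # quadratic (L2) region
        else:
            losses.append(delta * (err - 0.5 * delta))  # delta-scaled L1 region
    if reduction == 'mean':
        return sum(losses) / len(losses)
    if reduction == 'sum':
        return sum(losses)
    return losses  # reduction == 'none'
```

With the default delta of 1.0, a small error of 0.5 contributes 0.125 (quadratic growth), while a large error of 3.0 contributes only 2.5, because the loss grows linearly once the error exceeds delta.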
Shape:

- Input: $(*)$, where $*$ means any number of dimensions.
- Target: $(*)$, same shape as the input.
- Output: scalar. If reduction is 'none', then $(*)$, same shape as the input.
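The delta-factor relationship to SmoothL1Loss mentioned in the note can be checked numerically with a small sketch. `huber` and `smooth_l1` below are illustrative per-element helpers written from the two documented formulas, not PyTorch functions:

```python
def huber(err, delta=1.0):
    # Huber term for a single absolute error value
    return 0.5 * err**2 if err < delta else delta * (err - 0.5 * delta)

def smooth_l1(err, beta=1.0):
    # Smooth L1 term, where beta plays the role of delta
    return 0.5 * err**2 / beta if err < beta else err - 0.5 * beta

# Huber(delta) equals delta * SmoothL1(beta=delta) for any error
delta = 2.0
for err in [0.3, 1.5, 4.0]:
    assert abs(huber(err, delta) - delta * smooth_l1(err, beta=delta)) < 1e-12

# With delta = 1 the factor drops out and the two losses coincide
assert huber(0.5, 1.0) == smooth_l1(0.5, 1.0)
```

This makes the equivalence at delta = 1 concrete: the factor of delta separating the two losses becomes 1, so both definitions reduce to the same piecewise function.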
© 2024, PyTorch Contributors
PyTorch has a BSD-style license, as found in the LICENSE file.
https://pytorch.org/docs/2.1/generated/torch.nn.HuberLoss.html