soft_cross_entropy
- composer.loss.soft_cross_entropy(input, target, weight=None, ignore_index=-100, reduction='mean')
Drop-in replacement for torch.nn.functional.cross_entropy that handles class indices or one-hot labels. See the usage sketch after the parameter list.

Note

This function will be obsolete once torch.nn.functional.cross_entropy natively supports class-probability targets.
- Parameters
  - input (Tensor) – \((N, C)\) where C = number of classes, or \((N, C, H, W)\) in the case of 2D loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in the case of K-dimensional loss. input is expected to contain unnormalized scores (often referred to as logits).
  - target (Tensor) – If containing class indices, shape \((N)\) where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss. If containing class probabilities, same shape as the input.
  - weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C. Default: None
  - ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Note that ignore_index is only applicable when the target contains class indices. Default: -100
  - reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'