binary_cross_entropy_with_logits

composer.loss.binary_cross_entropy_with_logits(input, target, weight=None, reduction='sum', pos_weight=None, scale_by_batch_size=True)

Replacement for binary_cross_entropy_with_logits that handles class indices or one-hot labels.

binary_cross_entropy_with_logits with reduction = 'mean' will typically result in very small loss values (on the order of 1e-3), which necessitates scaling the learning rate by 1e3 to allow the model to learn. This implementation avoids that by using reduction = 'sum' and scaling the loss in inverse proportion to the batch size.
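
The scaling relationship can be checked with the standard torch.nn.functional.binary_cross_entropy_with_logits; the sketch below uses illustrative shapes and shows that summing the element-wise loss and dividing by the batch size gives a value exactly C times larger than the 'mean' reduction:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: batch size N = 8, C = 10 classes, multi-label targets.
logits = torch.randn(8, 10)
targets = torch.randint(0, 2, (8, 10)).float()

# reduction='mean' averages the element-wise loss over all N * C elements.
loss_mean = F.binary_cross_entropy_with_logits(logits, targets, reduction='mean')

# reduction='sum' divided by the batch size averages only over the batch
# dimension, so the result is C times larger than the 'mean' reduction.
loss_scaled = F.binary_cross_entropy_with_logits(logits, targets, reduction='sum') / logits.shape[0]

torch.testing.assert_close(loss_scaled, loss_mean * logits.shape[1])
```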

Parameters
  • input (Tensor) – \((N, C)\) where C = number of classes, or \((N, C, H, W)\) in the case of 2D loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in the case of K-dimensional loss. input is expected to contain unnormalized scores (often referred to as logits).

  • target (Tensor) – If containing class indices, shape \((N)\) where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss. If containing class probabilities, same shape as the input.

  • weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C. Default: None.

  • reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'sum'.

  • pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes.

  • scale_by_batch_size (bool, optional) – Whether to scale the loss by the batch size (i.e. input.shape[0]). Default: True.
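
A minimal usage sketch, assuming the import path given by the signature above and the target handling described (class indices or one-hot labels); shapes and values are illustrative:

```python
import torch
from composer.loss import binary_cross_entropy_with_logits

# (N, C) logits and integer class indices in [0, C-1]; one-hot targets of
# shape (N, C) would also be accepted.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))

# With the defaults (reduction='sum', scale_by_batch_size=True), the summed
# loss is divided by the batch size, i.e. logits.shape[0].
loss = binary_cross_entropy_with_logits(logits, targets)
loss.backward()
```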