# LanguageCrossEntropy

class composer.metrics.LanguageCrossEntropy(vocab_size, dist_sync_on_step=False, ignore_index=-100)[source]

Torchmetric that computes cross entropy on language modeling outputs.

Adds metric state variables:

• sum_loss (float) – The sum of the per-example loss in the batch.

• total_items (float) – The number of batches to average across.

Parameters
• vocab_size (int) – The size of the tokenizer vocabulary.

• dist_sync_on_step (bool, optional) – Synchronize metric state across processes at each forward() before returning the value at the step. Default: False.

• ignore_index (int, optional) – The class index to ignore. Default: -100.

compute()[source]

Aggregate the state over all processes to compute the metric.

Returns

loss – The loss averaged across all batches as a Tensor.

update(output, target)[source]

Update the internal state with results from a new batch.

Parameters
• output (Mapping | Tensor) – The output from the model: either a Tensor of logits, or a Mapping that contains the loss or the model logits.

• target (Tensor) – A Tensor of ground-truth values to compare against.
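The update/compute cycle above can be sketched in plain Python. This is a minimal illustration of the metric's accumulate-then-average semantics (a running `sum_loss` over non-ignored positions, divided on `compute()`), not Composer's actual implementation, which operates on batched tensors via `torch.nn.functional.cross_entropy`; the class and helper names here are hypothetical.

```python
import math

IGNORE_INDEX = -100  # matches the metric's default ignore_index


def log_softmax(logits):
    """Numerically stable log-softmax over a list of logits."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum for x in logits]


class CrossEntropySketch:
    """Hypothetical sketch of LanguageCrossEntropy's running state."""

    def __init__(self, ignore_index=IGNORE_INDEX):
        self.ignore_index = ignore_index
        self.sum_loss = 0.0     # sum of per-example losses seen so far
        self.total_items = 0.0  # count to average across in compute()

    def update(self, logits, targets):
        # logits: one list of per-class scores per position;
        # targets: one ground-truth class index per position.
        for row, t in zip(logits, targets):
            if t == self.ignore_index:
                continue  # ignored positions contribute nothing
            self.sum_loss += -log_softmax(row)[t]
            self.total_items += 1

    def compute(self):
        # Average the accumulated loss over everything seen so far.
        return self.sum_loss / self.total_items
```

For example, a position with uniform two-class logits and a target of class 1 contributes a loss of ln(2), while a position whose target equals `ignore_index` is skipped entirely, so `compute()` after those two updates returns ln(2).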