
training curves look fragmented / not clear what is "step" #4113

@undertherain

Description


❓ train log looks fragmented / docs not clear on how to log multiple parameters

[screenshot: logs]

The docs show how a single metric is logged; what is the right way to log multiple metrics?

Also, the examples show a dictionary of metrics being returned from the training_step and validation_step functions. Can't logging be done from there automatically? And if logging is done manually, why do we need to return those dicts?

Code

    # Assumed module-level imports for this snippet (not shown in the report):
    # from collections import OrderedDict
    # import torch.nn.functional as F
    # `accuracy` is a metric function; both methods live in a LightningModule.

    def training_step(self, batch, batch_idx):
        s1, s2, target = batch
        logits = self(s1, s2)
        loss = F.cross_entropy(logits, target)
        acc = accuracy(logits, target)
        self.log('train_loss', loss)
        self.log('train_acc', acc)
        result = OrderedDict({
            'train_loss': loss,
            'train_acc': acc,
        })
        return result

    def validation_step(self, batch, batch_idx):
        s1, s2, target = batch

        logits = self(s1, s2)
        loss = F.cross_entropy(logits, target)
        acc = accuracy(logits, target)
        self.log('val_loss', loss)
        self.log('val_acc', acc)
        result = OrderedDict({
            'val_loss': loss,
            'val_acc': acc,
        })
        return result
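
Since the `self.log(...)` calls above already send each metric to the attached logger, more recent Lightning versions (1.0+) only require `training_step` to return the loss, and `self.log_dict` can log several metrics in one call. A minimal pure-Python sketch of that pattern; `StepSketch`, its toy loss/accuracy, and the list batch are illustrative stand-ins, not real Lightning or torch API:

```python
# Sketch of the pattern PL >= 1.0 supports: log metrics via
# self.log_dict and return only the loss from training_step.

class StepSketch:
    """Stand-in for the relevant slice of a LightningModule."""

    def __init__(self):
        self.logged = {}  # stand-in for the attached logger

    def log_dict(self, metrics):
        # PL's self.log_dict logs every key/value pair in one call
        self.logged.update(metrics)

    def training_step(self, batch, batch_idx):
        loss = sum(batch) / len(batch)  # stand-in for F.cross_entropy
        acc = max(batch)                # stand-in for accuracy()
        self.log_dict({'train_loss': loss, 'train_acc': acc})
        return loss                     # only the loss is needed for backprop

module = StepSketch()
out = module.training_step([1, 3], batch_idx=0)
# module.logged now holds both metrics; out is just the loss
```

With this shape there is no need to build and return an OrderedDict of metrics at all; the returned dicts in older examples predate the `self.log` API.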

What's your environment?

Using PyTorch Lightning 0.9.1rc4 and wandb 0.10.4.

Metadata

Assignees: no one assigned

Labels: question (further information is requested)
