Status: Closed
Labels: question (further information is requested)
Description
❓ train log looks fragmented / docs not clear on how to log multiple parameters
The docs show how a single metric is logged; what is the right way to log multiple metrics?
Also, the examples show a dictionary of metrics being returned from the training_step and validation_step functions. Can't logging be done from there automatically? And if logging is done manually, why do we need to return those dicts?
Code
```python
from collections import OrderedDict

import torch.nn.functional as F
# accuracy helper shipped under pytorch_lightning.metrics in the 0.9.x line
from pytorch_lightning.metrics.functional import accuracy


def training_step(self, batch, batch_idx):
    s1, s2, target = batch
    logits = self(s1, s2)
    loss = F.cross_entropy(logits, target)
    acc = accuracy(logits, target)
    self.log('train_loss', loss)
    self.log('train_acc', acc)
    result = OrderedDict({
        'train_loss': loss,
        'train_acc': acc,
    })
    return result


def validation_step(self, batch, batch_idx):
    s1, s2, target = batch
    logits = self(s1, s2)
    loss = F.cross_entropy(logits, target)
    acc = accuracy(logits, target)
    self.log('val_loss', loss)
    self.log('val_acc', acc)
    result = OrderedDict({
        'val_loss': loss,
        'val_acc': acc,
    })
    return result
```
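For reference, later PyTorch Lightning releases expose `self.log_dict(...)`, which logs every entry of a metrics dictionary in one call instead of repeated `self.log(...)` calls. The sketch below imitates that pattern with a plain-Python stand-in so it runs without Lightning installed; the `StubModule` class and its methods are illustrative assumptions, not the real Lightning API surface.

```python
from collections import OrderedDict


class StubModule:
    """Illustrative stand-in for a LightningModule's logging surface.

    Real Lightning routes self.log() calls to the attached logger
    (e.g. WandbLogger); here we just collect values in a dict so the
    multi-metric flow is visible end to end.
    """

    def __init__(self):
        self.logged = {}

    def log(self, name, value):
        # self.log() records one scalar metric per call.
        self.logged[name] = value

    def log_dict(self, metrics):
        # log_dict() logs several metrics at once by fanning out
        # to individual log() calls.
        for name, value in metrics.items():
            self.log(name, value)

    def training_step(self, batch, batch_idx):
        loss, acc = batch  # pretend these were computed from the model
        # One call logs both metrics; once logging is explicit there is
        # no need to also return the metrics dict, only the loss.
        self.log_dict(OrderedDict(train_loss=loss, train_acc=acc))
        return loss


module = StubModule()
module.training_step((0.25, 0.9), batch_idx=0)
print(module.logged)  # → {'train_loss': 0.25, 'train_acc': 0.9}
```

With `log_dict`, the step function returns only the loss; the dict-return style shown above predates the `self.log` API.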
What's your environment?
Using PyTorch Lightning 0.9.1rc4 and wandb 0.10.4.