This repository was archived by the owner on May 6, 2021. It is now read-only.

Reward transformations #85

@jbrea

Description

Sometimes one wants to give a transformed reward to the learner but keep the true reward given by the environment for evaluation purposes. For example, Dopamine clips all rewards to [-1, 1], and I believe some of our methods are unstable in the Atari domain because we don't clip the rewards. Where would it be best to transform rewards? Should we add a POST_OBSERVE hook, apply the transformation when observations are put into buffers, or apply it just before the actual learning takes place?
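One way to frame the options: the transformation can live in a wrapper around the environment, in the buffer-insertion path, or in the update step. Below is a minimal, hypothetical sketch in Python (not this repository's API) of the wrapper variant; it assumes a gym-style `step(action) -> (obs, reward, done, info)` interface and clips the reward handed to the learner while keeping the unclipped reward available for evaluation.

```python
# Hypothetical sketch: clip rewards for the learner, keep the true reward for evaluation.
# Assumes a gym-style environment with reset() and step(action) -> (obs, reward, done, info).

class ClippedRewardEnv:
    """Wraps an environment and clips rewards seen by the learner.

    The untransformed reward is accumulated separately so evaluation
    metrics can still report the environment's true return.
    """

    def __init__(self, env, low=-1.0, high=1.0):
        self.env = env
        self.low = low
        self.high = high
        self.true_return = 0.0  # cumulative unclipped reward, for evaluation only

    def reset(self):
        self.true_return = 0.0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.true_return += reward                          # true reward, kept for evaluation
        clipped = min(max(reward, self.low), self.high)     # reward the learner sees
        info = dict(info, true_reward=reward)               # expose the true reward downstream
        return obs, clipped, done, info
```

The same clipping function could instead be applied where samples enter the buffer or just before the update, as raised above; the main trade-off is whether the buffer stores true or transformed rewards.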
