
Conversation

@georgesterpu (Contributor)

No description provided.

@georgesterpu (Contributor, Author)

I have a question on this topic: are we supposed to explicitly call the call method of the BeamSearchDecoder class (e.g. mydecoder.call(embedding=...)) or the __call__ one (e.g. mydecoder(embedding=...))? I receive the following error in the latter case when naming the embedding argument (embeddning, as it is currently spelled):

TypeError: __call__() missing 1 required positional argument: 'inputs'

code snippet here: https://pastebin.com/Lf17f4km
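In short, the failing call looks like this (a reconstruction, since the pastebin isn't inlined here; the variable names are illustrative):

```python
# Passing everything by keyword leaves Layer.__call__'s required
# positional argument `inputs` unbound, hence the TypeError below.
outputs = mydecoder(embedding=embedding_matrix,
                    start_tokens=start_tokens,
                    end_token=end_token,
                    initial_state=initial_state)
# TypeError: __call__() missing 1 required positional argument: 'inputs'
```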

Thanks

@qlzh727 (Member) left a comment

Thanks for the fix.

@qlzh727 (Member) commented Sep 18, 2019

> I have a question on this topic: are we supposed to explicitly call the call method of the BeamSearchDecoder class (e.g. mydecoder.call(embedding=...)) or the __call__ one (e.g. mydecoder(embedding=...))? I receive the following error in the latter case when naming the embedding argument (embeddning, as it is currently spelled):
>
> TypeError: __call__() missing 1 required positional argument: 'inputs'
>
> code snippet here: https://pastebin.com/Lf17f4km
>
> Thanks

You should just invoke the decoder by calling it, not by invoking the call() method directly. There is common logic in the __call__() method that we don't want users to skip. Also, you don't need to pass the keyword embedding; passing the value as a positional argument is fine.
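For example, a minimal sketch of the recommended invocation (the model sizes and token ids here are illustrative, not taken from the thread):

```python
import tensorflow as tf
import tensorflow_addons as tfa

vocab_size, embedding_size, units = 100, 16, 32
batch_size, beam_width = 4, 3

embedding_matrix = tf.random.normal([vocab_size, embedding_size])
cell = tf.keras.layers.LSTMCell(units)
decoder = tfa.seq2seq.BeamSearchDecoder(
    cell, beam_width=beam_width,
    output_layer=tf.keras.layers.Dense(vocab_size),
    maximum_iterations=10)  # cap decoding length for this sketch

# The decoder runs on a batch of size batch_size * beam_width.
initial_state = cell.get_initial_state(
    batch_size=batch_size * beam_width, dtype=tf.float32)

# Call the decoder itself (__call__), not decoder.call(); the embedding
# is passed positionally and becomes the `inputs` argument of __call__.
outputs, final_state, lengths = decoder(
    embedding_matrix,
    start_tokens=tf.fill([batch_size], 1),  # illustrative <GO> id
    end_token=2,                            # illustrative <EOS> id
    initial_state=initial_state)
```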

@seanpmorgan merged commit 4cc2b70 into tensorflow:master on Sep 18, 2019
@georgesterpu (Contributor, Author) commented Sep 18, 2019

@qlzh727 I understand. For BeamSearchDecoder, the first argument, embedding, becomes the inputs of __call__, while the other ones (start_tokens, end_token, initial_state, training) are stored in kwargs. Is this because its call method does not take a proper input like most layers do? Is embedding supposed to represent the sequence of embedded decoder inputs (of shape [batch_size, seq_len, embedding_size]), or the lookup table (of shape [vocab_size, embedding_size]) used to retrieve the embedding of an integer id in the inputs?
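If it helps, my reading of the docstring is the latter: embedding is the lookup table (or a callable over token ids) that beam search uses to embed its own predictions at each step. A sketch under that assumption:

```python
# Assumption: `embedding` is the [vocab_size, embedding_size] lookup
# table (the `params` argument of tf.nn.embedding_lookup), not the
# pre-embedded decoder inputs.
embedding_matrix = tf.random.normal([vocab_size, embedding_size])

# Equivalent callable form: maps each step's predicted ids to their
# embeddings, which is what autoregressive beam search needs.
embedding_fn = lambda ids: tf.nn.embedding_lookup(embedding_matrix, ids)
```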

facaiy pushed a commit to facaiy/addons that referenced this pull request Sep 20, 2019
seanpmorgan pushed a commit that referenced this pull request Sep 20, 2019
* Add docstring correlation cost (#514)

* Add Docstring CorrelationCost
* Add CorrelationCost Documentation
* Small reformat

* BLD: built on 2.0.0-rc1

* DOC: doc for 0.5.1

* Fix optical_flow test case (#527)

* typo fix (#523)

* CLN: fix file permission 755 in #527

* CLN: use public api, tf.keras