From b5c04e3a345da3044871fd05c73830e380ecedc5 Mon Sep 17 00:00:00 2001
From: Bayberry Z
Date: Thu, 14 Mar 2019 19:55:13 +0800
Subject: [PATCH] Update 20180920-unify-rnn-interface.md

---
 rfcs/20180920-unify-rnn-interface.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rfcs/20180920-unify-rnn-interface.md b/rfcs/20180920-unify-rnn-interface.md
index 0f33201a1..3060eb0c9 100644
--- a/rfcs/20180920-unify-rnn-interface.md
+++ b/rfcs/20180920-unify-rnn-interface.md
@@ -286,7 +286,7 @@ It also has few differences from the original LSTM/GRU implementation:
    incompatible with the standard LSTM/GRU. There are internal effort to convert the weights
    between a CuDNN implementation and normal TF implementation. See CudnnLSTMSaveable.
 1. CuDNN does not support variational recurrent dropout, which is a quite important feature.
-1. CuDNN implementation only support TAN activation which is also the default implementation in the
+1. CuDNN implementation only support TANH activation which is also the default implementation in the
    LSTM paper. The Keras one support more activation choices if user don't want the default behavior.
 
 With that, it means when users specify their LSTM/GRU layer, the underlying implementation could be