diff --git a/docs/tutorials/layers_normalizations.ipynb b/docs/tutorials/layers_normalizations.ipynb
index 01fffc7aa7..0788784667 100644
--- a/docs/tutorials/layers_normalizations.ipynb
+++ b/docs/tutorials/layers_normalizations.ipynb
@@ -67,7 +67,7 @@
     "* **Instance Normalization** (TensorFlow Addons)\n",
     "* **Layer Normalization** (TensorFlow Core)\n",
     "\n",
-    "The basic idea behind these layers is to normalize the output of an activation layer to improve the convergence during training. In contrast to [batch normalization](https://keras.io/layers/normalization/) these normalizations do not work on batches, instead they normalize the activations of a single sample, making them suitable for recurrent neual networks as well. \n",
+    "The basic idea behind these layers is to normalize the output of an activation layer to improve convergence during training. In contrast to [batch normalization](https://keras.io/layers/normalization/), these normalizations do not work on batches; instead, they normalize the activations of a single sample, making them suitable for recurrent neural networks as well. \n",
     "\n",
     "Typically the normalization is performed by calculating the mean and the standard deviation of a subgroup in your input tensor. It is also possible to apply a scale and an offset factor to this as well.\n",
     "\n",
@@ -260,7 +260,7 @@
     "### Introduction\n",
     "Layer Normalization is special case of group normalization where the group size is 1. The mean and standard deviation is calculated from all activations of a single sample.\n",
     "\n",
-    "Experimental results show that Layer normalization is well suited for Recurrent Neural Networks, since it works batchsize independt.\n",
+    "Experimental results show that Layer Normalization is well suited for Recurrent Neural Networks, since it works independently of the batch size.\n",
     "\n",
     "### Example\n",
     "\n",
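
The prose this patch touches describes per-sample normalization layers (instance, group, and layer normalization). As orientation for reviewers, and not part of the patch itself, here is a minimal sketch of how the layers named in the tutorial can be applied; the tensor shape and layer arguments are illustrative assumptions, and it presumes `tensorflow` and `tensorflow-addons` are installed:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Illustrative input: 2 samples of 8x8 feature maps with 4 channels.
# Each layer below normalizes every sample on its own, so the result
# does not depend on the batch size.
x = tf.random.normal((2, 8, 8, 4))

# Group Normalization (TensorFlow Addons): mean and standard deviation
# computed per sample over groups of channels (here 2 groups of 2 channels).
group_norm = tfa.layers.GroupNormalization(groups=2, axis=-1)

# Instance Normalization (TensorFlow Addons): one group per channel.
instance_norm = tfa.layers.InstanceNormalization(axis=-1)

# Layer Normalization (TensorFlow Core): a single group spanning all
# activations of a sample; a learnable scale (gamma) and offset (beta)
# are applied by default.
layer_norm = tf.keras.layers.LayerNormalization(axis=[1, 2, 3])

for layer in (group_norm, instance_norm, layer_norm):
    y = layer(x)
    print(type(layer).__name__, y.shape)  # output shape matches the input: (2, 8, 8, 4)
```

Because none of these layers reduce over the batch dimension, the per-sample statistics are the same whether the batch holds one sample or many, which is the batch-size independence the corrected sentence refers to.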