From 3b062c7226c1740307fc119da8d9ae4085b2c30e Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Thu, 23 Sep 2021 02:55:30 +0000
Subject: [PATCH 01/11] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E9=94=99=E5=88=AB?=
 =?UTF-8?q?=E5=AD=97?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 docs/api/paddle/multi_dot_cn.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/api/paddle/multi_dot_cn.rst b/docs/api/paddle/multi_dot_cn.rst
index 2caf4fa6985..9e294c1a963 100755
--- a/docs/api/paddle/multi_dot_cn.rst
+++ b/docs/api/paddle/multi_dot_cn.rst
@@ -7,9 +7,9 @@ multi_dot

 Multi_dot是一个计算多个矩阵乘法的算子。

-算子支持float,double和float16三种类型。该算子不支持批量输入。
+算子支持float16,float32和float64三种类型。该算子不支持批量输入。

-输入[x]的每个tensor的shape必须是二维的,除了第一个和做后一个tensor可以是一维的。如果第一个tensor是shape为(n, )的一维向量,该tensor将被当作是shape为(1, n)的行向量处理,同样的,如果最后一个tensor的shape是(n, ),将被当作是shape为(n, 1)的列向量处理。
+输入[x]的每个tensor的shape必须是二维的,除了第一个和最后一个tensor可以是一维的。如果第一个tensor是shape为(n, )的一维向量,该tensor将被当作是shape为(1, n)的行向量处理,同样的,如果最后一个tensor的shape是(n, ),将被当作是shape为(n, 1)的列向量处理。

 如果第一个和最后一个tensor是二维矩阵,那么输出也是一个二维矩阵,否则输出是一维的向量。

@@ -18,7 +18,7 @@ Multi_dot会选择计算量最小的乘法顺序进行计算。(a, b)和(b, c)
 - Cost((AB)C) = 20x5x100 + 20x100x10 = 30000
 - Cost(A(BC)) = 5x100x10 + 20x5x10 = 6000

-在这个例子中,先算B乘以C再乘A的计算量比按顺序乘少5被。
+在这个例子中,先算B乘以C再乘A的计算量比按顺序乘少5倍。

 参数
 :::::::::

From 0606c9bed839265d4d8be1715d06ab9853b7f661 Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Thu, 23 Sep 2021 03:04:41 +0000
Subject: [PATCH 02/11] =?UTF-8?q?=E5=A4=87=E6=B3=A8CPU=E4=B8=8D=E6=94=AF?=
 =?UTF-8?q?=E6=8C=81float16?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 docs/api/paddle/multi_dot_cn.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/api/paddle/multi_dot_cn.rst b/docs/api/paddle/multi_dot_cn.rst
index 9e294c1a963..01e00dfb055 100755
--- a/docs/api/paddle/multi_dot_cn.rst
+++ b/docs/api/paddle/multi_dot_cn.rst
@@ -7,7 +7,7 @@ multi_dot

 Multi_dot是一个计算多个矩阵乘法的算子。

-算子支持float16,float32和float64三种类型。该算子不支持批量输入。
+算子支持float16(只有GPU支持,CPU不支持float16),float32和float64三种类型。该算子不支持批量输入。

 输入[x]的每个tensor的shape必须是二维的,除了第一个和最后一个tensor可以是一维的。如果第一个tensor是shape为(n, )的一维向量,该tensor将被当作是shape为(1, n)的行向量处理,同样的,如果最后一个tensor的shape是(n, ),将被当作是shape为(n, 1)的列向量处理。

@@ -22,7 +22,7 @@ Multi_dot会选择计算量最小的乘法顺序进行计算。(a, b)和(b, c)

 参数
 :::::::::
-  - **x** ([tensor]): 输出的是一个tensor列表。
+  - **x** ([tensor]): 输入的是一个tensor列表。
   - **name** (str,可选) - 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。

 返回:

From 5726d087a65f2834ad9f41df3fadef1321120f2d Mon Sep 17 00:00:00 2001
From: zhangkaihuo
Date: Thu, 23 Sep 2021 08:08:58 +0000
Subject: [PATCH 03/11] update example

---
 docs/api/paddle/multi_dot_cn.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/api/paddle/multi_dot_cn.rst b/docs/api/paddle/multi_dot_cn.rst
index 01e00dfb055..8dc63f4a419 100755
--- a/docs/api/paddle/multi_dot_cn.rst
+++ b/docs/api/paddle/multi_dot_cn.rst
@@ -41,7 +41,7 @@ Multi_dot会选择计算量最小的乘法顺序进行计算。(a, b)和(b, c)
     B_data = np.random.random([4, 5]).astype(np.float32)
     A = paddle.to_tensor(A_data)
     B = paddle.to_tensor(B_data)
-    out = paddle.multi_dot([A, B])
+    out = paddle.linalg.multi_dot([A, B])
     print(out.numpy().shape)
     # [3, 5]
     # A * B * C
@@ -51,6 +51,6 @@ Multi_dot会选择计算量最小的乘法顺序进行计算。(a, b)和(b, c)
     A = paddle.to_tensor(A_data)
     B = paddle.to_tensor(B_data)
     C = paddle.to_tensor(C_data)
-    out = paddle.multi_dot([A, B, C])
+    out = paddle.linalg.multi_dot([A, B, C])
     print(out.numpy().shape)
     # [10, 7]
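(Illustrative aside, not part of the patch series.) The ordering argument in the multi_dot description patched above can be checked with plain Python. The numbers only restate the example already used in the text, which implies shapes A: 20x5, B: 5x100, C: 100x10; `multiply_cost` is a throwaway helper defined here for the illustration.

.. code-block:: python

    # Cost model from the description: multiplying a (p x q) matrix by a (q x r)
    # matrix costs p*q*r scalar multiplications.
    def multiply_cost(p, q, r):
        return p * q * r

    # A: 20x5, B: 5x100, C: 100x10, as in the patched text above.
    cost_ab_then_c = multiply_cost(20, 5, 100) + multiply_cost(20, 100, 10)  # (AB)C
    cost_bc_then_a = multiply_cost(5, 100, 10) + multiply_cost(20, 5, 10)    # A(BC)

    print(cost_ab_then_c)                    # 30000
    print(cost_bc_then_a)                    # 6000
    print(cost_ab_then_c // cost_bc_then_a)  # 5, i.e. A(BC) is five times cheaper,
                                             # which is the order multi_dot picks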
From 8ef9a10a09119f2dc6d8f06a13d9857df78a2b54 Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Fri, 22 Oct 2021 02:52:44 +0000
Subject: [PATCH 04/11] add fused_feedforward

---
 .../nn/functional/fused_feedforward_cn.rst | 59 +++++++++++++++++++
 1 file changed, 59 insertions(+)
 create mode 100644 docs/api/paddle/nn/functional/fused_feedforward_cn.rst

diff --git a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
new file mode 100644
index 00000000000..ea8f0f85a1d
--- /dev/null
+++ b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
@@ -0,0 +1,59 @@
+.. _cn_api_nn_functional_fused_feedforward:
+
+fused_feedforward
+-------------------------------
+
+.. py:function:: paddle.nn.functional.fused_feedforward(x, linear1_weight, linear2_weight, linear1_bias=None, linear2_bias=None, ln1_scale=None, ln1_bias=None, ln2_scale=None, ln2_bias=None, dropout1_rate=0.5, dropout2_rate=0.5,activation="relu", ln1_epsilon=1e-5, ln2_epsilon=1e-5, pre_layer_norm=False, name=None):
+
+这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子与如下为代码表达一样的功能:
+
+```
+    residual = src;
+    if pre_layer_norm:
+        src = layer_norm(src)
+    src = linear(dropout(activation(dropout(linear(src)))))
+    if not pre_layer_norm:
+        src = layer_norm(out)
+```
+
+参数
+:::::::::
+    - **x** (Tensor) - 输入Tensor,数据类型支持float16, float32 和float64, 输入的形状是`[batch_size, sequence_length, d_model]`。
+    - **linear1_weight** (Tensor) - 第一个linear算子的权重数据,数据类型与`x`一样,形状是`[d_model, dim_feedforward]`。
+    - **linear2_weight** (Tensor) - 第二个linear算子的权重数据,数据类型与`x`一样,形状是`[dim_feedforward, d_model]`。
+    - **linear1_bias** (Tensor, 可选) - 第一个linear算子的偏置数据,数据类型与`x`一样,形状是`[dim_feedforward]`。默认值为None。
+    - **linear2_bias** (Tensor, 可选) - 第二个linear算子的偏置数据,数据类型与`x`一样,形状是`[d_model]`。默认值为None。
+    - **ln1_scale** (Tensor, 可选) - 第一个layer_norm算子的权重数据,数据类型可以是float32或者float64,形状和`x`一样。默认值为None。
+    - **ln1_bias** (Tensor, 可选) - 第一个layer_norm算子的偏置数据,数据类型和`ln1_scale`一样, 形状是`[d_model]`。默认值为None。
+    - **ln2_scale** (Tensor, 可选) - 第二个layer_norm算子的权重数据,数据类型可以是float32或者float64,形状和`x`一样。默认值为None。
+    - **ln2_bias** (Tensor, 可选) - 第二个layer_norm算子的偏置数据,数据类型和`ln2_scale`一样, 形状是`[d\_model]`。默认值为None。
+    - **dropout1_rate** (float, 可选) - 第一个dropout算子置零的概率。默认是0.5。
+    - **dropout2_rate** (float, 可选) - 第二个dropout算子置零的概率。默认是0.5。
+    - **activation** (string, 可选) - 激活函数。默认值是relu。
+    - **ln1_epsilon** (float, 可选) - 一个很小的浮点数,被第一个layer_norm算子加到分母,避免出现除零的情况。默认值是1e-5。
+    - **ln2_epsilon** (float, 可选) - 一个很小的浮点数,被第二个layer_norm算子加到分母,避免出现除零的情况。默认值是1e-5。
+    - **pre_layer_norm** (bool, 可选) - 在预处理阶段加上layer_norm,或者在后处理阶段加上layer_norm。默认值是False。
+    - **name** (string, 可选) – fused_feedforward的名称, 默认值为None。更多信息请参见 :ref:`api_guide_Name` 。
+
+返回
+:::::::::
+    - Tensor, 输出Tensor,数据类型与`x`一样。
+
+代码示例
+::::::::::
+
+.. code-block:: python
+
+    # required: gpu
+    import paddle
+    import numpy as np
+    x_data = np.random.random((1, 8, 8)).astype("float32")
+    linear1_weight_data = np.random.random((8, 8)).astype("float32")
+    linear2_weight_data = np.random.random((8, 8)).astype("float32")
+    x = paddle.to_tensor(x_data)
+    linear1_weight = paddle.to_tensor(linear1_weight_data)
+    linear2_weight = paddle.to_tensor(linear2_weight_data)
+    out = paddle.nn.functional.fused_feedforward(x, linear1_weight, linear2_weight)
+    print(out.numpy().shape)
+    # (1, 8, 8)
+
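(Illustrative aside, not part of the patch series.) A rough NumPy sketch of the composition that the pseudocode in PATCH 04 describes, to make the parameter shapes concrete. It is a simplification: biases are omitted, both dropouts are treated as rate 0, `activation` is assumed to be the default relu, `layer_norm(out)` in the last line of the pseudocode is read as normalizing the current value of `src`, and the residual handling of the real fused operator is not modeled here.

.. code-block:: python

    import numpy as np

    def layer_norm(x, eps=1e-5):
        # normalize over the last axis (d_model), as ln1/ln2 would
        mean = x.mean(axis=-1, keepdims=True)
        var = x.var(axis=-1, keepdims=True)
        return (x - mean) / np.sqrt(var + eps)

    def feedforward_sketch(x, linear1_weight, linear2_weight, pre_layer_norm=False):
        # pseudocode above: linear -> (dropout) -> activation -> (dropout) -> linear
        src = x
        if pre_layer_norm:
            src = layer_norm(src)
        src = np.maximum(src @ linear1_weight, 0.0) @ linear2_weight  # relu between the two linears
        if not pre_layer_norm:
            src = layer_norm(src)
        return src

    x = np.random.random((1, 8, 8)).astype("float32")            # [batch_size, sequence_length, d_model]
    linear1_weight = np.random.random((8, 8)).astype("float32")  # [d_model, dim_feedforward]
    linear2_weight = np.random.random((8, 8)).astype("float32")  # [dim_feedforward, d_model]
    print(feedforward_sketch(x, linear1_weight, linear2_weight).shape)  # (1, 8, 8)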
From 795cd8a56bbd912b68f2bcbaa359a30a43b5528b Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Fri, 22 Oct 2021 02:59:15 +0000
Subject: [PATCH 05/11] add fused_feedforward

---
 docs/api/paddle/nn/functional/fused_feedforward_cn.rst | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
index ea8f0f85a1d..aa9d92d63ef 100644
--- a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
+++ b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
@@ -7,14 +7,13 @@ fused_feedforward

 这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子与如下为代码表达一样的功能:

-```
+.. code-block:: python
     residual = src;
     if pre_layer_norm:
         src = layer_norm(src)
     src = linear(dropout(activation(dropout(linear(src)))))
     if not pre_layer_norm:
         src = layer_norm(out)
-```

 参数
 :::::::::

From a9f69a71343396b7091ee8f302227deaba7e0303 Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Fri, 22 Oct 2021 03:01:13 +0000
Subject: [PATCH 06/11] add fused_feedforward

---
 docs/api/paddle/nn/functional/fused_feedforward_cn.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
index aa9d92d63ef..aabb009c6a7 100644
--- a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
+++ b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
@@ -7,7 +7,7 @@ fused_feedforward

 这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子与如下为代码表达一样的功能:

-.. code-block:: python
+.. math::
     residual = src;
     if pre_layer_norm:
         src = layer_norm(src)

From 528a4dcc599914c2f894cb10f63ee2d8fcfdff04 Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Fri, 22 Oct 2021 03:08:08 +0000
Subject: [PATCH 07/11] opt the description

---
 docs/api/paddle/nn/functional/fused_feedforward_cn.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
index aabb009c6a7..f4868076477 100644
--- a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
+++ b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
@@ -5,7 +5,7 @@ fused_feedforward

 .. py:function:: paddle.nn.functional.fused_feedforward(x, linear1_weight, linear2_weight, linear1_bias=None, linear2_bias=None, ln1_scale=None, ln1_bias=None, ln2_scale=None, ln2_bias=None, dropout1_rate=0.5, dropout2_rate=0.5,activation="relu", ln1_epsilon=1e-5, ln2_epsilon=1e-5, pre_layer_norm=False, name=None):

-这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子与如下为代码表达一样的功能:
+这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子只支持在GPU下运行,该算子与如下伪代码表达一样的功能:

 .. math::
     residual = src;
From 02d8c0b2be3d593056044a2d772b58ab7bf7dc5c Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Fri, 22 Oct 2021 06:43:26 +0000
Subject: [PATCH 08/11] update docs

---
 docs/api/paddle/nn/functional/fused_feedforward_cn.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
index f4868076477..3fc7606fbff 100644
--- a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
+++ b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
@@ -7,7 +7,7 @@ fused_feedforward

 这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子只支持在GPU下运行,该算子与如下伪代码表达一样的功能:

-.. math::
+.. code-block:: python
     residual = src;
     if pre_layer_norm:
         src = layer_norm(src)

From eba59ead0a8975b59f0920bd8f5cf6d61ece6508 Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Fri, 22 Oct 2021 06:50:59 +0000
Subject: [PATCH 09/11] update docs

---
 docs/api/paddle/nn/functional/fused_feedforward_cn.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
index 3fc7606fbff..112e39b09c6 100644
--- a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
+++ b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
@@ -8,6 +8,7 @@ fused_feedforward
 这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子只支持在GPU下运行,该算子与如下伪代码表达一样的功能:

 .. code-block:: python
+
     residual = src;
     if pre_layer_norm:
         src = layer_norm(src)

From a028800196b76474a33e5c74f6422c6d9f7c0b9c Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Mon, 25 Oct 2021 06:38:50 +0000
Subject: [PATCH 10/11] update doc

---
 docs/api/paddle/nn/functional/fused_feedforward_cn.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
index 112e39b09c6..6f93588c74e 100644
--- a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
+++ b/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
@@ -7,7 +7,7 @@ fused_feedforward

 这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子只支持在GPU下运行,该算子与如下伪代码表达一样的功能:

-.. code-block:: python
+.. code-block:: ipython

     residual = src;
     if pre_layer_norm:
From 2187f4d183810964c9c4cc8a5472954a279def56 Mon Sep 17 00:00:00 2001
From: zkh2016
Date: Tue, 26 Oct 2021 11:19:26 +0000
Subject: [PATCH 11/11] move fused_feedforward docs position

---
 .../{ => incubate}/nn/functional/fused_feedforward_cn.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
 rename docs/api/paddle/{ => incubate}/nn/functional/fused_feedforward_cn.rst (87%)

diff --git a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst b/docs/api/paddle/incubate/nn/functional/fused_feedforward_cn.rst
similarity index 87%
rename from docs/api/paddle/nn/functional/fused_feedforward_cn.rst
rename to docs/api/paddle/incubate/nn/functional/fused_feedforward_cn.rst
index 6f93588c74e..c3a81b56490 100644
--- a/docs/api/paddle/nn/functional/fused_feedforward_cn.rst
+++ b/docs/api/paddle/incubate/nn/functional/fused_feedforward_cn.rst
@@ -1,9 +1,9 @@
-.. _cn_api_nn_functional_fused_feedforward:
+.. _cn_api_incubate_nn_functional_fused_feedforward:

 fused_feedforward
 -------------------------------

-.. py:function:: paddle.nn.functional.fused_feedforward(x, linear1_weight, linear2_weight, linear1_bias=None, linear2_bias=None, ln1_scale=None, ln1_bias=None, ln2_scale=None, ln2_bias=None, dropout1_rate=0.5, dropout2_rate=0.5,activation="relu", ln1_epsilon=1e-5, ln2_epsilon=1e-5, pre_layer_norm=False, name=None):
+.. py:function:: paddle.incubate.nn.functional.fused_feedforward(x, linear1_weight, linear2_weight, linear1_bias=None, linear2_bias=None, ln1_scale=None, ln1_bias=None, ln2_scale=None, ln2_bias=None, dropout1_rate=0.5, dropout2_rate=0.5,activation="relu", ln1_epsilon=1e-5, ln2_epsilon=1e-5, pre_layer_norm=False, name=None):

 这是一个融合算子,该算子是对transformer模型中feed forward层的多个算子进行融合,该算子只支持在GPU下运行,该算子与如下伪代码表达一样的功能:

@@ -53,7 +53,7 @@ fused_feedforward
     x = paddle.to_tensor(x_data)
     linear1_weight = paddle.to_tensor(linear1_weight_data)
     linear2_weight = paddle.to_tensor(linear2_weight_data)
-    out = paddle.nn.functional.fused_feedforward(x, linear1_weight, linear2_weight)
+    out = paddle.incubate.nn.functional.fused_feedforward(x, linear1_weight, linear2_weight)
     print(out.numpy().shape)
     # (1, 8, 8)
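(Illustrative aside, not part of the patch series.) After PATCH 11 the documented entry point is `paddle.incubate.nn.functional.fused_feedforward`, and the operator still requires a GPU build of Paddle. A minimal call mirroring the documented example, with the documented `dropout1_rate` and `dropout2_rate` keywords set to 0 so the output is deterministic (a sketch, assuming a GPU-enabled installation):

.. code-block:: python

    # required: gpu
    import numpy as np
    import paddle

    x = paddle.to_tensor(np.random.random((1, 8, 8)).astype("float32"))
    linear1_weight = paddle.to_tensor(np.random.random((8, 8)).astype("float32"))
    linear2_weight = paddle.to_tensor(np.random.random((8, 8)).astype("float32"))

    out = paddle.incubate.nn.functional.fused_feedforward(
        x, linear1_weight, linear2_weight,
        dropout1_rate=0.0, dropout2_rate=0.0, pre_layer_norm=False)
    print(out.shape)  # [1, 8, 8]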