
Commit d17db04

[CodeStyle][Typos][L-[1-5]] Fix typos (Learing, lable, leran, learing, legth, lenth) (#7625)
* fix-c19-c23
* fix-c6-c7-c24-c26
* fix-d6-d10
* fix-some-qe
* test-commit
* fix-d11-d15-p11-12
* fix-d16-d19
* debug
* debug
* fix
* fix docs/practices/gan/cyclegan/cyclegan.ipynb

Co-authored-by: ooo oo <[email protected]>
Co-authored-by: ooooo <[email protected]>
1 parent c21ab51 commit d17db04

File tree: 5 files changed, +706 −711 lines


_typos.toml

Lines changed: 0 additions & 5 deletions
@@ -43,11 +43,6 @@ instrinsics = "instrinsics"
 interchangable = "interchangable"
 intializers = "intializers"
 intput = "intput"
-lable = "lable"
-learing = "learing"
-legth = "legth"
-lenth = "lenth"
-leran = "leran"
 libary = "libary"
 mantained = "mantained"
 matrics = "matrics"

docs/api/paddle/static/accuracy_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ accuracy

 accuracy layer. Reference: https://en.wikipedia.org/wiki/Precision_and_recall

-Computes the accuracy from the input and the label. If the correct label appears among the top-k predictions, the count is incremented by 1. Note: the type of the output accuracy is determined by the type of input; input and lable may have different types.
+Computes the accuracy from the input and the label. If the correct label appears among the top-k predictions, the count is incremented by 1. Note: the type of the output accuracy is determined by the type of input; input and label may have different types.

 Parameters
 ::::::::::::
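The top-k rule described by the fixed sentence is easy to sketch in a few lines of NumPy. This is a hand-rolled illustration with hypothetical tensors, not the actual implementation behind paddle.static.accuracy:

```python
import numpy as np

def topk_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    """Count a sample as correct if its true label is among its top-k scores."""
    topk = np.argsort(scores, axis=-1)[:, -k:]               # indices of the k best classes
    correct = (topk == labels.reshape(-1, 1)).any(axis=-1)   # label anywhere in those k?
    return float(correct.mean())

scores = np.array([[0.1, 0.7, 0.2],        # float predictions, shape [batch, classes]
                   [0.5, 0.3, 0.2]])
labels = np.array([1, 0])                  # integer labels: dtypes may differ, as the doc notes
print(topk_accuracy(scores, labels, k=2))  # 1.0 — both labels sit in their row's top-2
```

As the corrected sentence says, the label dtype (integers) and the input dtype (floats) do not need to match; only the indices are compared.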

docs/design/memory/memory_optimization.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ In compilers, the front end of the compiler translates programs into an intermed

 Therefore, the compiler needs to analyze the intermediate-representation program to determine which temporary variables are in use at the same time. We say a variable is "live" if it holds a value that may be needed in the future, so this analysis is called liveness analysis.

-We can leran these techniques from compilers. There are mainly two stages to make live variable analysis:
+We can learn these techniques from compilers. There are mainly two stages to make live variable analysis:

 - construct a control flow graph
 - solve the dataflow equations
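The two stages named in the fixed paragraph are standard compiler technique. A minimal sketch of the second stage, iterating the liveness dataflow equations to a fixpoint over a hypothetical four-block CFG (not Paddle's actual memory-optimization pass):

```python
# Liveness analysis on a tiny hypothetical CFG, solved backwards to a fixpoint:
#   live_out[b] = union of live_in[s] over successors s of b
#   live_in[b]  = use[b] | (live_out[b] - defs[b])
succ = {"b0": ["b1"], "b1": ["b2", "b3"], "b2": [], "b3": []}
use  = {"b0": set(), "b1": {"a"}, "b2": {"a", "b"}, "b3": {"b"}}
defs = {"b0": {"a", "b"}, "b1": {"c"}, "b2": set(), "b3": set()}

live_in  = {b: set() for b in succ}
live_out = {b: set() for b in succ}

changed = True
while changed:  # terminates: the sets only ever grow
    changed = False
    for b in succ:
        out = set().union(*[live_in[s] for s in succ[b]])
        inn = use[b] | (out - defs[b])
        if out != live_out[b] or inn != live_in[b]:
            live_out[b], live_in[b] = out, inn
            changed = True

print(live_in)   # b1 needs {"a", "b"} on entry even though it only uses "a"
print(live_out)  # variables whose live ranges never overlap can share one buffer
```

The final live ranges are exactly what a memory optimizer needs: two temporaries can reuse the same buffer only if they are never live at the same time.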

docs/practices/gan/cyclegan/cyclegan.ipynb

Lines changed: 702 additions & 702 deletions
Large diffs are not rendered by default.

docs/practices/nlp/transformer_in_English-to-Spanish.ipynb

Lines changed: 2 additions & 2 deletions
@@ -1170,7 +1170,7 @@
 "source": [
 "### 4.2 Encoder\n",
 "The Encoder mainly consists of multi-head attention, layer normalization, and a feed-forward network. The input passes in turn through the multi-head attention module, a residual block built on the normalization layer, the feed-forward module, and another residual block built on the normalization layer.\n",
-"* Multi-head attention (MultiHeadAttention): implemented with [paddle.nn.MultiHeadAttention](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MultiHeadAttention_cn.html#multiheadattention); note that the shape its mask attn_mask expects is [batch_szie,num_heads,sequence_legth,sequence_legth].\n",
+"* Multi-head attention (MultiHeadAttention): implemented with [paddle.nn.MultiHeadAttention](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MultiHeadAttention_cn.html#multiheadattention); note that the shape its mask attn_mask expects is [batch_szie,num_heads,sequence_length,sequence_length].\n",
 "* Feed-forward network (Feed Forward): after the MultiHeadAttention layer, the input passes through a feed-forward layer. The model's feed forward is position-wise: a fully connected layer is applied to the input, followed by a ReLU activation and another fully connected layer.\n",
 "* Residual connection: formed by adding the normalized (LayerNorm) output to the input from the previous step. LayerNorm computes mean and variance over each sample.\n"
 ]
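The mask shape called out in the corrected line is the detail most worth checking. A short sketch with hypothetical sizes, assuming the paddle.nn.MultiHeadAttention API linked above, that builds a causal attn_mask of shape [batch_size, num_heads, sequence_length, sequence_length]:

```python
import paddle

batch_size, num_heads, seq_len, embed_dim = 2, 4, 8, 64  # hypothetical sizes

mha = paddle.nn.MultiHeadAttention(embed_dim=embed_dim, num_heads=num_heads)
x = paddle.randn([batch_size, seq_len, embed_dim])

# Build a causal mask with the shape the notebook warns about:
# [batch_size, num_heads, sequence_length, sequence_length].
# With a float mask, 0.0 keeps a position and -inf masks it out.
visible = paddle.tril(paddle.ones([seq_len, seq_len]))  # lower triangle is visible
attn_mask = paddle.where(
    visible == 1.0,
    paddle.zeros([seq_len, seq_len]),
    paddle.full([seq_len, seq_len], float("-inf")),
)
attn_mask = attn_mask.expand([batch_size, num_heads, seq_len, seq_len])

out = mha(x, x, x, attn_mask=attn_mask)  # self-attention
print(out.shape)  # [2, 8, 64]
```

Passing a mask with a wrong trailing shape is the typical failure mode here, since the mask is combined with per-head attention logits of exactly this shape.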
@@ -1482,7 +1482,7 @@
 "    def forward(self, pre, real, trg_mask):\n",
 "        # The returned dtype matches pre; except along the axis dimension (-1 if unspecified), the other dimensions also match pre\n",
 "        # logits=pre, [batch_size,sequence_len,word_size]; presumably an argmax is applied, giving [batch_size,sequence_len,1]\n",
-"        # soft_label defaults to False, lable=real, [bacth_size,sequence_len,1]\n",
+"        # soft_label defaults to False, label=real, [bacth_size,sequence_len,1]\n",
 "        cost = paddle.nn.functional.softmax_with_cross_entropy(\n",
 "            logits=pre, label=real, soft_label=False\n",
 "        )\n",
