Commit 51a16a6

Fix(typo): Resolve typos S-1, S-3, S-6, S-8, S-9 (#7543) (#7621)
1 parent 297c096 commit 51a16a6

7 files changed: +7 -12 lines changed

_typos.toml

Lines changed: 0 additions & 5 deletions
@@ -27,17 +27,14 @@ Nervana = "Nervana"
 datas = "datas"
 
 # These words need to be fixed
-Creenshot = "Creenshot"
 Learing = "Learing"
 Moible = "Moible"
 Operaton = "Operaton"
 Optimizaing = "Optimizaing"
 Optimzier = "Optimzier"
 Setment = "Setment"
-Similarily = "Similarily"
 Simle = "Simle"
 Sovler = "Sovler"
-Successed = "Successed"
 desgin = "desgin"
 desginated = "desginated"
 desigin = "desigin"
@@ -95,9 +92,7 @@ overrided = "overrided"
 overwrited = "overwrited"
 porcess = "porcess"
 processer = "processer"
-sacle = "sacle"
 samle = "samle"
-satifies = "satifies"
 schedual = "schedual"
 secenarios = "secenarios"
 sematic = "sematic"

docs/api/paddle/optimizer/lr/CyclicLR_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ CyclicLR
 - **step_size_down** (int, optional) - Number of steps for the learning rate to fall from the maximum learning rate back to the initial learning rate. If not specified, its value defaults to ``step_size_up``.
 - **mode** (str, optional) - One of triangular, triangular2 or exp_range; the corresponding policies are described above. This argument is ignored when scale_fn is specified. Defaults to triangular.
 - **exp_gamma** (float, optional) - Constant in the exp_range scaling function. Defaults to 1.0.
-- **sacle_fn** (function, optional) - A function with exactly one argument, which must satisfy 0 ≤ scale_fn(x) ≤ 1 for any input x; if this argument is specified, the mode argument is ignored. Defaults to ``False``.
+- **scale_fn** (function, optional) - A function with exactly one argument, which must satisfy 0 ≤ scale_fn(x) ≤ 1 for any input x; if this argument is specified, the mode argument is ignored. Defaults to ``False``.
 - **scale_mode** (str, optional) - Either cycle or iterations, indicating whether the scaling function takes the cycle number or the iteration count as input. Defaults to cycle.
 - **last_epoch** (int, optional) - Epoch of the previous round; when restarting training, set it to the epoch of the last round. Defaults to -1, meaning the initial learning rate.
 - **verbose** (bool, optional) - If ``True``, prints a message to standard output `stdout` at each update. Defaults to ``False``.
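To illustrate the corrected `scale_fn` argument, here is a minimal sketch; the decay function and hyperparameter values are illustrative, not taken from the doc:

```python
import paddle

# A single-argument function with 0 <= scale_fn(x) <= 1 for every input x;
# when scale_fn is given, the mode argument is ignored.
def scale_fn(x):
    return 1.0 / (2.0 ** (x - 1))  # halve the peak learning rate each cycle

scheduler = paddle.optimizer.lr.CyclicLR(
    base_learning_rate=0.001,
    max_learning_rate=0.01,
    step_size_up=2000,
    scale_fn=scale_fn,
    scale_mode="cycle",
)
linear = paddle.nn.Linear(10, 10)
sgd = paddle.optimizer.SGD(learning_rate=scheduler,
                           parameters=linear.parameters())
```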

docs/design/memory/memory_optimization.md

Lines changed: 1 addition & 1 deletion
@@ -197,7 +197,7 @@ After op1, we can process variable b and variable c; After op2, we can process v
 
 #### memory sharing policy
 
-A memory pool will be mantained in the stage of memory optimization. Each operator node will be scanned to determine memory optimization is done or not. If an operator satifies the requirement, following policy will be taken to handle input/output variables.
+A memory pool will be mantained in the stage of memory optimization. Each operator node will be scanned to determine memory optimization is done or not. If an operator satisfies the requirement, following policy will be taken to handle input/output variables.
 
 ```
 if op.support_inplace():
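A hedged sketch of that policy in Python pseudocode; `satisfies_requirement`, `support_inplace`, `find_block` and the variable helpers are illustrative names, not Paddle internals:

```python
def memory_optimize(ops, memory_pool):
    """Scan each operator node; when it satisfies the requirement,
    share or recycle buffers for its input/output variables."""
    for op in ops:
        if not op.satisfies_requirement():
            continue
        if op.support_inplace():
            # in-place case: the output variable reuses an input's buffer
            op.output.share_buffer_with(op.input)
        else:
            # otherwise try to serve the output from the memory pool
            block = memory_pool.find_block(op.output.size())
            if block is not None:
                op.output.bind(block)
        # inputs whose last use is this op go back into the pool
        for var in op.inputs:
            if var.last_used_by(op):
                memory_pool.release(var.buffer())
```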

docs/design/quantization/training_quantization_model_format.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ fake_quantize_abs_max {
 
 ### 1.2 Static quantization
 
-Unlike dynamic quantization, in static quantization the quantization scale is computed during quantization-aware training, using methods such as a **moving average over a window** or the **absolute maximum within a window**. Static quantization is mainly implemented by the `fake_quantize_moving_average_abs_max` op or the `fake_quantize_range_abs_max` op, which use the input quantization scale to quantize the input tensor into the value range -127~127. The two ops have the same input and output format; they differ only in the strategy used inside the op to compute the quantization scale. The `fake_quantize_moving_average_abs_max` op uses the moving average of the absolute maximum within a window as the quantization sacle, while the `fake_quantize_range_abs_max` op uses the maximum of the absolute maxima within a window as the quantization sacle. Taking the `fake_quantize_moving_average_abs_max` op as an example, its overall behavior is described below:
+Unlike dynamic quantization, in static quantization the quantization scale is computed during quantization-aware training, using methods such as a **moving average over a window** or the **absolute maximum within a window**. Static quantization is mainly implemented by the `fake_quantize_moving_average_abs_max` op or the `fake_quantize_range_abs_max` op, which use the input quantization scale to quantize the input tensor into the value range -127~127. The two ops have the same input and output format; they differ only in the strategy used inside the op to compute the quantization scale. The `fake_quantize_moving_average_abs_max` op uses the moving average of the absolute maximum within a window as the quantization scale, while the `fake_quantize_range_abs_max` op uses the maximum of the absolute maxima within a window as the quantization scale. Taking the `fake_quantize_moving_average_abs_max` op as an example, its overall behavior is described below:
 
 ```
 fake_quantize_moving_average_abs_max {
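The arithmetic the op performs can be sketched in NumPy; this is an illustration only, the real op lives inside Paddle, and the decay factor `rho` is an assumed name:

```python
import numpy as np

def moving_average_abs_max_quantize(x, scale_state, rho=0.9):
    """Update a moving-average abs-max scale, then fake-quantize x to -127~127."""
    abs_max = np.abs(x).max()
    scale_state = rho * scale_state + (1.0 - rho) * abs_max  # windowed moving average
    q = np.clip(np.round(x / scale_state * 127.0), -127, 127)
    return q * scale_state / 127.0, scale_state  # dequantized output, new scale

x = np.random.randn(4, 4).astype("float32")
out, scale = moving_average_abs_max_quantize(x, scale_state=1.0)
```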

docs/dev_guides/op_optimization/kernel_primitive_api/model_example_en.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ nohup python tools/train.py \
 -c ppcls/configs/ImageNet/ResNet/ResNet50.yaml \
 -o Global.device=xpu > ResNet50_xpu2.log &
 ```
-+ 5. Creenshot is as follows: </br>
++ 5. Screenshot is as follows: </br>
 ![Model](./images/example_model.png)
 
 ### XPU2 Kernel Primitive API Model List

docs/guides/jit/grammar_list_en.md

Lines changed: 1 addition & 1 deletion
@@ -265,7 +265,7 @@ def sort_list(x, y):
 
 - Don't support get shape after a reshape operators. You may get a -1 in shape value.
 
-For example, `x = reshape(x, shape=shape_tensor)` , then use `x.shape[0]` to do other operation. Due to the difference between dynamic and static graph, it is okay in dynamic but it will fail in static graph. The reason is that APIs return computation result in dynamic graph mode, so x.shape has deterministic value after calling reshape . However, static graph doesn’t have the value shape_tensor during building network, so PaddlePaddle doesn’t know the value of x.shape after calling reshape. PaddlePaddle static graph will set -1 to represent unknown shape value for each dimension of x.shape in this case, not the expected value. Similarily, calling the shape of the output tensor of those APIs which change the shape, such as expend, cannot be converted into static graph properly.
+For example, `x = reshape(x, shape=shape_tensor)` , then use `x.shape[0]` to do other operation. Due to the difference between dynamic and static graph, it is okay in dynamic but it will fail in static graph. The reason is that APIs return computation result in dynamic graph mode, so x.shape has deterministic value after calling reshape . However, static graph doesn’t have the value shape_tensor during building network, so PaddlePaddle doesn’t know the value of x.shape after calling reshape. PaddlePaddle static graph will set -1 to represent unknown shape value for each dimension of x.shape in this case, not the expected value. Similarly, calling the shape of the output tensor of those APIs which change the shape, such as expend, cannot be converted into static graph properly.
 
 #### examples :
 
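A minimal sketch of the pitfall the changed paragraph describes, assuming Paddle's dynamic-to-static conversion via `@paddle.jit.to_static`:

```python
import paddle

@paddle.jit.to_static
def forward(x, shape_tensor):
    x = paddle.reshape(x, shape=shape_tensor)
    # Dynamic graph: x.shape[0] is a concrete int after reshape.
    # Static graph: shape_tensor has no value while the network is built,
    # so x.shape[0] may be -1 (unknown) rather than the real size.
    return x.shape[0]
```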
docs/guides/model_convert/convert_with_x2paddle_cn.md

Lines changed: 2 additions & 2 deletions
@@ -332,14 +332,14 @@ try:
         if relative_diff >= 1e-05:
             is_successd = False
     if is_successd:
-        f.write("Dygraph Successed\n")
+        f.write("Dygraph Succeeded\n")
     else:
         f.write("!!!!!Dygraph Failed\n")
 except:
     f.write("!!!!!Failed\n")
 ```
 
-The final comparison result is written to result.txt; Dygraph Successed indicates success. Once the check passes, the model can be deployed with [Paddle Inference](https://www.paddlepaddle.org.cn/inference/product_introduction/inference_intro.html).
+The final comparison result is written to result.txt; Dygraph Succeeded indicates success. Once the check passes, the model can be deployed with [Paddle Inference](https://www.paddlepaddle.org.cn/inference/product_introduction/inference_intro.html).
 
 ## 3. Example of migrating an ONNX model
 
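For reference, a self-contained sketch of the comparison around the changed lines; `dygraph_output` and `onnx_output` stand in for the two runtimes' results, and the 1e-05 threshold follows the snippet above:

```python
import numpy as np

def write_compare_result(dygraph_output, onnx_output, path="result.txt"):
    with open(path, "w") as f:
        try:
            # element-wise relative difference between the two runs
            relative_diff = np.abs(dygraph_output - onnx_output) / (
                np.abs(onnx_output) + 1e-10)
            if (relative_diff >= 1e-05).any():
                f.write("!!!!!Dygraph Failed\n")
            else:
                f.write("Dygraph Succeeded\n")
        except Exception:
            f.write("!!!!!Failed\n")
```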
