5 changes: 0 additions & 5 deletions _typos.toml
@@ -27,17 +27,14 @@ Nervana = "Nervana"
 datas = "datas"
 
 # These words need to be fixed
-Creenshot = "Creenshot"
 Learing = "Learing"
 Moible = "Moible"
 Operaton = "Operaton"
 Optimizaing = "Optimizaing"
 Optimzier = "Optimzier"
 Setment = "Setment"
-Similarily = "Similarily"
 Simle = "Simle"
 Sovler = "Sovler"
-Successed = "Successed"
 desgin = "desgin"
 desginated = "desginated"
 desigin = "desigin"
@@ -95,9 +92,7 @@ overrided = "overrided"
 overwrited = "overwrited"
 porcess = "porcess"
 processer = "processer"
-sacle = "sacle"
 samle = "samle"
-satifies = "satifies"
 schedual = "schedual"
 secenarios = "secenarios"
 sematic = "sematic"
2 changes: 1 addition & 1 deletion docs/api/paddle/optimizer/lr/CyclicLR_cn.rst
@@ -24,7 +24,7 @@ CyclicLR
 - **step_size_down** (int, optional) - Number of steps needed for the learning rate to fall from the maximum learning rate back to the initial learning rate. If unspecified, it defaults to ``step_size_up``.
 - **mode** (str, optional) - One of triangular, triangular2, or exp_range; the corresponding policies are described above. This argument is ignored when scale_fn is specified. Defaults to triangular.
 - **exp_gamma** (float, optional) - Constant used in the exp_range scaling function. Defaults to 1.0.
-- **sacle_fn** (function, optional) - A function taking exactly one argument that must satisfy 0 ≤ scale_fn(x) ≤ 1 for any input x; if specified, the mode argument is ignored. Defaults to ``False``.
+- **scale_fn** (function, optional) - A function taking exactly one argument that must satisfy 0 ≤ scale_fn(x) ≤ 1 for any input x; if specified, the mode argument is ignored. Defaults to ``False``.
 - **scale_mode** (str, optional) - Either cycle or iterations, indicating whether the scaling function takes the cycle count or the iteration count as its input. Defaults to cycle.
 - **last_epoch** (int, optional) - The epoch index of the previous round; set it to the last epoch when resuming training. Defaults to -1, which means the initial learning rate is used.
 - **verbose** (bool, optional) - If ``True``, prints a message to standard output `stdout` at every update. Defaults to ``False``.
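The triangular policy and the scale_fn hook documented above can be sketched in plain Python. This is a simplified, symmetric-step illustration of the standard cyclic-LR formula, not Paddle's implementation; the function name and default values are hypothetical:

```python
import math

def cyclic_lr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000,
              scale_fn=lambda x: 1.0, scale_mode='cycle'):
    """Triangular cyclic learning rate with a user-supplied scaling hook."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    scale_input = cycle if scale_mode == 'cycle' else iteration
    # lr rises linearly from base_lr to max_lr and back, scaled by scale_fn
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x) * scale_fn(scale_input)
```

Under this sketch, an exp_range-style schedule would correspond to passing something like `scale_fn=lambda x: exp_gamma ** x` with `scale_mode='iterations'`.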
2 changes: 1 addition & 1 deletion docs/design/memory/memory_optimization.md
@@ -197,7 +197,7 @@ After op1, we can process variable b and variable c; After op2, we can process v
 
 #### memory sharing policy
 
-A memory pool will be mantained in the stage of memory optimization. Each operator node will be scanned to determine memory optimization is done or not. If an operator satifies the requirement, following policy will be taken to handle input/output variables.
+A memory pool will be mantained in the stage of memory optimization. Each operator node will be scanned to determine memory optimization is done or not. If an operator satisfies the requirement, following policy will be taken to handle input/output variables.
 
 ```
 if op.support_inplace():
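The pool-based policy described in this hunk might look like the following toy Python version. This is a hypothetical illustration of size-keyed block reuse, not the actual Paddle memory-optimization pass:

```python
from collections import defaultdict

class MemoryPool:
    """Toy size-keyed pool: dead variables donate their blocks for reuse."""
    def __init__(self):
        self._free = defaultdict(list)          # size -> list of free block ids

    def release(self, size, block_id):
        # called when a variable's lifetime ends after an operator runs
        self._free[size].append(block_id)

    def acquire(self, size):
        # reuse a freed block of identical size when available; returning
        # None signals the caller to allocate fresh memory instead
        blocks = self._free.get(size)
        return blocks.pop() if blocks else None
```

An inplace-capable operator would skip the pool entirely and hand its input's block directly to its output.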
@@ -26,7 +26,7 @@ fake_quantize_abs_max {
 
 ### 1.2 Static quantization
 
-Unlike dynamic quantization, static quantization computes the quantization scale during quantization-aware training, using methods such as a **windowed moving average** or the **windowed absolute maximum**. Static quantization is mainly implemented by the `fake_quantize_moving_average_abs_max` op or the `fake_quantize_range_abs_max` op, which use the input quantization scale to quantize the input tensor into the range -127 to 127. The two ops share the same input and output formats and differ only in the strategy used internally to compute the quantization scale: the `fake_quantize_moving_average_abs_max` op uses the moving average of the in-window absolute maximum as the quantization sacle, while the `fake_quantize_range_abs_max` op uses the maximum of the in-window absolute maxima as the quantization sacle. Taking the `fake_quantize_moving_average_abs_max` op as an example, its overall behavior is described below:
+Unlike dynamic quantization, static quantization computes the quantization scale during quantization-aware training, using methods such as a **windowed moving average** or the **windowed absolute maximum**. Static quantization is mainly implemented by the `fake_quantize_moving_average_abs_max` op or the `fake_quantize_range_abs_max` op, which use the input quantization scale to quantize the input tensor into the range -127 to 127. The two ops share the same input and output formats and differ only in the strategy used internally to compute the quantization scale: the `fake_quantize_moving_average_abs_max` op uses the moving average of the in-window absolute maximum as the quantization scale, while the `fake_quantize_range_abs_max` op uses the maximum of the in-window absolute maxima as the quantization scale. Taking the `fake_quantize_moving_average_abs_max` op as an example, its overall behavior is described below:
 
 ```
 fake_quantize_moving_average_abs_max {
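The moving-average strategy can be illustrated with a simplified pure-Python sketch. The function name and the single-value update rule are hypothetical simplifications; the real op operates on tensors and maintains an accumulator/state pair rather than a single running scale:

```python
def fake_quant_moving_average_abs_max(x, prev_scale, rate=0.9, bits=8):
    """Quantize x into [-127, 127] using a moving-average scale estimate."""
    batch_abs_max = max(abs(v) for v in x)
    # moving average of per-batch absolute maxima (simplified update rule)
    scale = rate * prev_scale + (1 - rate) * batch_abs_max
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8-bit
    clip = lambda v: max(-qmax, min(qmax, v))
    quantized = [round(clip(v / scale * qmax)) for v in x]
    return quantized, scale
```

A range-style op would differ only in the update line, keeping the running maximum of `batch_abs_max` over the window instead of its moving average.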
@@ -41,7 +41,7 @@ nohup python tools/train.py \
     -c ppcls/configs/ImageNet/ResNet/ResNet50.yaml \
     -o Global.device=xpu > ResNet50_xpu2.log &
 ```
-+ 5. Creenshot is as follows: </br>
++ 5. Screenshot is as follows: </br>
 ![Model](./images/example_model.png)
 
 ### XPU2 Kernel Primitive API Model List
2 changes: 1 addition & 1 deletion docs/guides/jit/grammar_list_en.md
@@ -265,7 +265,7 @@ def sort_list(x, y):
 
 - Don't support get shape after a reshape operators. You may get a -1 in shape value.
 
-For example, `x = reshape(x, shape=shape_tensor)` , then use `x.shape[0]` to do other operation. Due to the difference between dynamic and static graph, it is okay in dynamic but it will fail in static graph. The reason is that APIs return computation result in dynamic graph mode, so x.shape has deterministic value after calling reshape . However, static graph doesn’t have the value shape_tensor during building network, so PaddlePaddle doesn’t know the value of x.shape after calling reshape. PaddlePaddle static graph will set -1 to represent unknown shape value for each dimension of x.shape in this case, not the expected value. Similarily, calling the shape of the output tensor of those APIs which change the shape, such as expend, cannot be converted into static graph properly.
+For example, `x = reshape(x, shape=shape_tensor)` , then use `x.shape[0]` to do other operation. Due to the difference between dynamic and static graph, it is okay in dynamic but it will fail in static graph. The reason is that APIs return computation result in dynamic graph mode, so x.shape has deterministic value after calling reshape . However, static graph doesn’t have the value shape_tensor during building network, so PaddlePaddle doesn’t know the value of x.shape after calling reshape. PaddlePaddle static graph will set -1 to represent unknown shape value for each dimension of x.shape in this case, not the expected value. Similarly, calling the shape of the output tensor of those APIs which change the shape, such as expend, cannot be converted into static graph properly.
 
 #### examples :
 
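The dynamic-vs-static shape difference described in this hunk can be mimicked in plain Python: an eager tensor holds real data, so its shape is concrete, while a graph-time tensor whose shape depends on a runtime `shape_tensor` can only record -1. These are hypothetical toy classes for illustration, not Paddle APIs:

```python
class EagerTensor:
    """Dynamic-graph analogy: data exists, so shape is concrete."""
    def __init__(self, data):
        self.data = list(data)

    @property
    def shape(self):
        return [len(self.data)]

class GraphTensor:
    """Static-graph analogy: shape_tensor's value is unknown while the
    network is being built, so the data-dependent dim is recorded as -1."""
    @property
    def shape(self):
        return [-1]
```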
4 changes: 2 additions & 2 deletions docs/guides/model_convert/convert_with_x2paddle_cn.md
@@ -332,14 +332,14 @@ try:
     if relative_diff >= 1e-05:
         is_successd = False
     if is_successd:
-        f.write("Dygraph Successed\n")
+        f.write("Dygraph Succeeded\n")
     else:
         f.write("!!!!!Dygraph Failed\n")
 except:
     f.write("!!!!!Failed\n")
 ```
 
-The final comparison result is written to result.txt; Dygraph Successed indicates success. Once the check passes, the model can be deployed with [Paddle Inference](https://www.paddlepaddle.org.cn/inference/product_introduction/inference_intro.html).
+The final comparison result is written to result.txt; Dygraph Succeeded indicates success. Once the check passes, the model can be deployed with [Paddle Inference](https://www.paddlepaddle.org.cn/inference/product_introduction/inference_intro.html).
 
 ## 3. Example: Migrating an ONNX Model
 
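The comparison loop in this hunk boils down to a relative-difference check against a 1e-05 threshold. A self-contained version might look like the following sketch (the helper name is hypothetical; the snippet's threshold is kept):

```python
def outputs_match(candidate, reference, threshold=1e-05, eps=1e-12):
    """Max element-wise relative difference, mirroring the result.txt check."""
    rel = max(abs(a - b) / (abs(b) + eps)
              for a, b in zip(candidate, reference))
    return rel < threshold
```

In the script above, a result like `outputs_match(paddle_out, onnx_out)` is what decides whether "Dygraph Succeeded" or "Dygraph Failed" is written.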