diff --git a/docs/api/paddle/matrix_power_cn.rst b/docs/api/paddle/linalg/matrix_power_cn.rst
similarity index 86%
rename from docs/api/paddle/matrix_power_cn.rst
rename to docs/api/paddle/linalg/matrix_power_cn.rst
index 210b41e61c9..c1f771a92f0 100644
--- a/docs/api/paddle/matrix_power_cn.rst
+++ b/docs/api/paddle/linalg/matrix_power_cn.rst
@@ -3,7 +3,7 @@
matrix_power
-------------------------------
-.. py:function:: paddle.matrix_power(x, n, name=None)
+.. py:function:: paddle.linalg.matrix_power(x, n, name=None)
计算一个或一批方阵的 ``n`` 次幂。
@@ -41,17 +41,17 @@ matrix_power
x = paddle.to_tensor([[1, 2, 3],
[1, 4, 9],
[1, 8, 27]], dtype='float64')
- print(paddle.matrix_power(x, 2))
+ print(paddle.linalg.matrix_power(x, 2))
# [[6. , 34. , 102.],
# [14. , 90. , 282.],
# [36. , 250., 804.]]
- print(paddle.matrix_power(x, 0))
+ print(paddle.linalg.matrix_power(x, 0))
# [[1., 0., 0.],
# [0., 1., 0.],
# [0., 0., 1.]]
- print(paddle.matrix_power(x, -2))
+ print(paddle.linalg.matrix_power(x, -2))
# [[ 12.91666667, -12.75000000, 2.83333333 ],
# [-7.66666667 , 8. , -1.83333333 ],
- # [ 1.80555556 , -1.91666667 , 0.44444444 ]]
\ No newline at end of file
+ # [ 1.80555556 , -1.91666667 , 0.44444444 ]]
diff --git a/docs/api/paddle/multi_dot_cn.rst b/docs/api/paddle/linalg/multi_dot_cn.rst
similarity index 97%
rename from docs/api/paddle/multi_dot_cn.rst
rename to docs/api/paddle/linalg/multi_dot_cn.rst
index 8dc63f4a419..e6200eecbdd 100755
--- a/docs/api/paddle/multi_dot_cn.rst
+++ b/docs/api/paddle/linalg/multi_dot_cn.rst
@@ -3,7 +3,7 @@
multi_dot
-------------------------------
-.. py:function:: paddle.multi_dot(x, name=None)
+.. py:function:: paddle.linalg.multi_dot(x, name=None)
Multi_dot是一个计算多个矩阵乘法的算子。
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/amp_cn.ipynb b/docs/guides/01_paddle2.0_introduction/basic_concept/amp_cn.ipynb
new file mode 100644
index 00000000000..e5a5b2106b8
--- /dev/null
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/amp_cn.ipynb
@@ -0,0 +1,463 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# 自动混合精度训练\n",
+ "\n",
+ "一般情况下,训练深度学习模型时使用的数据类型为单精度(FP32)。2018年,百度与NVIDIA联合发表论文:[MIXED PRECISION TRAINING](https://arxiv.org/pdf/1710.03740.pdf),提出了混合精度训练的方法。混合精度训练是指在训练过程中,同时使用单精度(FP32)和半精度(FP16),其目的是相较于使用单精度(FP32)训练模型,在保持精度持平的条件下,能够加速训练。本文将介绍如何使用飞桨框架,实现自动混合精度训练。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 一、半精度浮点类型 FP16\n",
+ "\n",
+ "首先介绍半精度(FP16)。如图1所示,半精度(FP16)是一种相对较新的浮点类型,在计算机中使用2字节(16位)存储。在IEEE 754-2008标准中,它亦被称作binary16。与计算中常用的单精度(FP32)和双精度(FP64)类型相比,FP16更适于在精度要求不高的场景中使用。\n",
+ "\n",
+ "\n",
+ "
\n",
+ " 图 1. 半精度和单精度数据示意图\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 二、NVIDIA GPU的FP16算力\n",
+ "在使用相同的超参数下,混合精度训练使用半精度浮点(FP16)和单精度(FP32)浮点即可达到与使用纯单精度训练相同的准确率,并可加速模型的训练速度。这主要得益于英伟达推出的Volta及Turing架构GPU在使用FP16计算时具有如下特点:\n",
+ "- FP16可降低一半的内存带宽和存储需求,这使得在相同的硬件条件下研究人员可使用更大更复杂的模型以及更大的batch size大小。\n",
+ "- FP16可以充分利用英伟达Volta及Turing架构GPU提供的Tensor Cores技术。在相同的GPU硬件上,Tensor Cores的FP16计算吞吐量是FP32的8倍。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 三、使用飞桨框架实现自动混合精度\n",
+ "使用飞桨框架提供的API,``paddle.amp.auto_cast`` 和 ``paddle.amp.decorate`` 和 ``paddle.amp.GradScaler`` 能够实现自动混合精度训练(Automatic Mixed Precision,AMP),即在相关OP的计算中,根据一定的规则,自动选择FP16或FP32计算。飞桨的AMP为用户提供了两种模式:\n",
+ "- level=’O1‘:采用黑名名单策略的混合精度训练,使用FP16与FP32进行计算的OP列表可见该[文档](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/amp/Overview_cn.html)。\n",
+ "- level=’O2‘:纯FP16训练,除用户自定义黑名单中指定的OP和不支持FP16计算的OP之外,全部使用FP16计算。\n",
+ "\n",
+ "下面来看一个具体的例子,来了解如果使用飞桨框架实现混合精度训练。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.1 辅助函数\n",
+ "首先定义辅助函数,用来计算训练时间。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import time\n",
+ "\n",
+ "# 开始时间\n",
+ "start_time = None\n",
+ "\n",
+ "def start_timer():\n",
+ " # 获取开始时间\n",
+ " global start_time\n",
+ " start_time = time.time()\n",
+ "\n",
+ "def end_timer_and_print(msg):\n",
+ " # 打印信息并输出训练时间\n",
+ " end_time = time.time()\n",
+ " print(\"\\n\" + msg)\n",
+ " print(\"共计耗时 = {:.3f} sec\".format(end_time - start_time))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.2 构建一个简单的网络\n",
+ "\n",
+ "构建一个简单的网络,用于对比使用普通方法进行训练与使用混合精度训练的训练速度。该网络由三层 ``Linear`` 组成,其中前两层 ``Linear`` 后接 ``ReLU`` 激活函数。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import paddle\n",
+ "import paddle.nn as nn\n",
+ "\n",
+ "class SimpleNet(nn.Layer):\n",
+ "\n",
+ " def __init__(self, input_size, output_size):\n",
+ " \n",
+ " super(SimpleNet, self).__init__()\n",
+ " self.linear1 = nn.Linear(input_size, output_size)\n",
+ " self.relu1 = nn.ReLU()\n",
+ " self.linear2 = nn.Linear(input_size, output_size)\n",
+ " self.relu2 = nn.ReLU()\n",
+ " self.linear3 = nn.Linear(input_size, output_size)\n",
+ "\n",
+ " def forward(self, x):\n",
+ "\n",
+ " x = self.linear1(x)\n",
+ " x = self.relu1(x)\n",
+ " x = self.linear2(x)\n",
+ " x = self.relu2(x)\n",
+ " x = self.linear3(x)\n",
+ "\n",
+ " return x"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "设置训练的相关参数,这里为了能有效的看出混合精度训练对于训练速度的提升,将 ``input_size`` 与 ``output_size`` 的值设为较大的值,为了使用GPU 提供的``Tensor Core`` 性能,还需将 ``batch_size`` 设置为 8 的倍数。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "W1110 18:42:02.362493 104 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1\n",
+ "W1110 18:42:02.367755 104 device_context.cc:465] device: 0, cuDNN Version: 7.6.\n"
+ ]
+ }
+ ],
+ "source": [
+ "epochs = 5\n",
+ "input_size = 4096 # 设为较大的值\n",
+ "output_size = 4096 # 设为较大的值\n",
+ "batch_size = 512 # batch_size 为8的倍数\n",
+ "nums_batch = 50\n",
+ "\n",
+ "train_data = [paddle.randn((batch_size, input_size)) for _ in range(nums_batch)]\n",
+ "labels = [paddle.randn((batch_size, output_size)) for _ in range(nums_batch)]\n",
+ "\n",
+ "mse = paddle.nn.MSELoss()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.3 使用默认的训练方式进行训练"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,\n",
+ " [1.24519622])\n",
+ "\n",
+ "默认耗时:\n",
+ "共计耗时 = 2.926 sec\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = SimpleNet(input_size, output_size) # 定义模型\n",
+ "\n",
+ "optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # 定义优化器\n",
+ "\n",
+ "start_timer() # 获取训练开始时间\n",
+ "\n",
+ "for epoch in range(epochs):\n",
+ " datas = zip(train_data, labels)\n",
+ " for i, (data, label) in enumerate(datas):\n",
+ "\n",
+ " output = model(data)\n",
+ " loss = mse(output, label)\n",
+ "\n",
+ " # 反向传播\n",
+ " loss.backward()\n",
+ "\n",
+ " # 训练模型\n",
+ " optimizer.step()\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ "print(loss)\n",
+ "end_timer_and_print(\"默认耗时:\") # 获取结束时间并打印相关信息"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.4 使用AMP训练模型\n",
+ "\n",
+ "在飞桨框架中,使用自动混合精度训练,需要进行四个步骤:\n",
+ "\n",
+ "- Step1: 定义 ``GradScaler`` ,用于缩放 ``loss`` 比例,避免浮点数下溢\n",
+ "- Step2: 使用 ``decorate`` 在level=’O1‘模式下不做任何处理,无需调用该api,在level=’O2‘模式下,将网络参数从FP32转换为FP16\n",
+ "- Step3: 使用 ``auto_cast`` 用于创建AMP上下文环境,该上下文中自动会确定每个OP的输入数据类型(FP16或FP32)\n",
+ "- Step4: 使用 Step1中定义的 ``GradScaler`` 完成 ``loss`` 的缩放,用缩放后的 ``loss`` 进行反向传播,完成训练\n",
+ "\n",
+ "\n",
+ "采用level=’O1‘模式训练:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,\n",
+ " [1.24815702])\n",
+ "\n",
+ "使用AMP-O1模式耗时:\n",
+ "共计耗时 = 1.294 sec\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = SimpleNet(input_size, output_size) # 定义模型\n",
+ "\n",
+ "optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # 定义优化器\n",
+ "\n",
+ "# Step1:定义 GradScaler,用于缩放loss比例,避免浮点数溢出\n",
+ "scaler = paddle.amp.GradScaler(init_loss_scaling=1024)\n",
+ "\n",
+ "start_timer() # 获取训练开始时间\n",
+ "\n",
+ "for epoch in range(epochs):\n",
+ " datas = zip(train_data, labels)\n",
+ " for i, (data, label) in enumerate(datas):\n",
+ "\n",
+ " # Step2:创建AMP上下文环境,开启自动混合精度训练\n",
+ " with paddle.amp.auto_cast():\n",
+ " output = model(data)\n",
+ " loss = mse(output, label)\n",
+ "\n",
+ " # Step3:使用 Step1中定义的 GradScaler 完成 loss 的缩放,用缩放后的 loss 进行反向传播\n",
+ " scaled = scaler.scale(loss)\n",
+ " scaled.backward()\n",
+ "\n",
+ " # 训练模型\n",
+ " scaler.minimize(optimizer, scaled)\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ "print(loss)\n",
+ "end_timer_and_print(\"使用AMP-O1模式耗时:\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "采用level=’O2‘模式训练:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,\n",
+ " [1.25423336])\n",
+ "\n",
+ "使用AMP-O2模式耗时:\n",
+ "共计耗时 = 0.890 sec\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = SimpleNet(input_size, output_size) # 定义模型\n",
+ "\n",
+ "optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # 定义优化器\n",
+ "\n",
+ "# Step1:定义 GradScaler,用于缩放loss比例,避免浮点数溢出\n",
+ "scaler = paddle.amp.GradScaler(init_loss_scaling=1024)\n",
+ "\n",
+ "# Step2:在level=’O2‘模式下,将网络参数从FP32转换为FP16\n",
+ "model, optimizer = paddle.amp.decorate(models=model, optimizers=optimizer, level='O2', master_weight=None, save_dtype=None)\n",
+ "\n",
+ "start_timer() # 获取训练开始时间\n",
+ "\n",
+ "for epoch in range(epochs):\n",
+ " datas = zip(train_data, labels)\n",
+ " for i, (data, label) in enumerate(datas):\n",
+ "\n",
+ " # Step3:创建AMP上下文环境,开启自动混合精度训练\n",
+ " with paddle.amp.auto_cast(enable=True, custom_white_list=None, custom_black_list=None, level='O2'):\n",
+ " output = model(data)\n",
+ " loss = mse(output, label)\n",
+ "\n",
+ " # Step4:使用 Step1中定义的 GradScaler 完成 loss 的缩放,用缩放后的 loss 进行反向传播\n",
+ " scaled = scaler.scale(loss)\n",
+ " scaled.backward()\n",
+ "\n",
+ " # 训练模型\n",
+ " scaler.minimize(optimizer, scaled)\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ "print(loss)\n",
+ "end_timer_and_print(\"使用AMP-O2模式耗时:\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 四、进阶用法\n",
+ "### 4.1 使用梯度累加\n",
+ "梯度累加是指在模型训练过程中,训练一个batch的数据得到梯度后,不立即用该梯度更新模型参数,而是继续下一个batch数据的训练,得到梯度后继续循环,多次循环后梯度不断累加,直至达到一定次数后,用累加的梯度更新参数,这样可以起到变相扩大 batch_size 的作用。\n",
+ "\n",
+ "在自动混合精度训练中,也支持梯度累加,使用方式如下:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,\n",
+ " [1.25602019])\n",
+ "\n",
+ "使用AMP模式耗时:\n",
+ "共计耗时 = 1.026 sec\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = SimpleNet(input_size, output_size) # 定义模型\n",
+ "\n",
+ "optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # 定义优化器\n",
+ "\n",
+ "accumulate_batchs_num = 10 # 梯度累加中 batch 的数量\n",
+ "\n",
+ "# 定义 GradScaler\n",
+ "scaler = paddle.amp.GradScaler(init_loss_scaling=1024)\n",
+ "\n",
+ "start_timer() # 获取训练开始时间\n",
+ "\n",
+ "for epoch in range(epochs):\n",
+ " datas = zip(train_data, labels)\n",
+ " for i, (data, label) in enumerate(datas):\n",
+ "\n",
+ " # 创建AMP上下文环境,开启自动混合精度训练\n",
+ " with paddle.amp.auto_cast():\n",
+ " output = model(data)\n",
+ " loss = mse(output, label)\n",
+ "\n",
+ " # 使用 GradScaler 完成 loss 的缩放,用缩放后的 loss 进行反向传播\n",
+ " scaled = scaler.scale(loss)\n",
+ " scaled.backward()\n",
+ "\n",
+ " # 当累计的 batch 为 accumulate_batchs_num 时,更新模型参数\n",
+ " if (i + 1) % accumulate_batchs_num == 0:\n",
+ "\n",
+ " # 训练模型\n",
+ " scaler.minimize(optimizer, scaled)\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ "print(loss)\n",
+ "end_timer_and_print(\"使用AMP模式耗时:\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 五、总结\n",
+ "从上面的示例中可以看出,使用自动混合精度训练,O1模式共计耗时约 1.294s,O2模式共计耗时约 0.890s,而普通的训练方式则耗时 2.926s,O1模式训练速度提升约为 2.1倍,O2模式训练速度提升约为 3.0倍。如需更多使用混合精度训练的示例,请参考飞桨模型库: [paddlepaddle/models](https://github.com/PaddlePaddle/models)。"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "py35-paddle1.2.0"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/amp_cn.md b/docs/guides/01_paddle2.0_introduction/basic_concept/amp_cn.md
index bc96b6736a4..646e01ecd37 100644
--- a/docs/guides/01_paddle2.0_introduction/basic_concept/amp_cn.md
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/amp_cn.md
@@ -1,6 +1,6 @@
# 自动混合精度训练
-一般情况下,训练深度学习模型时使用的数据类型为单精度(FP32)。2018年,百度与NVIDIA联合发表论文:[MIXED PRECISION TRAINING](https://arxiv.org/pdf/1710.03740.pdf),提出了混合精度训练的方法。混合精度训练是指在训练过程中,同时使用单精度(FP32)和半精度(FP16),其目的是相较于使用单精度(FP32)训练模型,在保持精度持平的条件下,能够加速训练。本文将介绍如何使用飞桨框架,实现自动混合精度训练。
+一般情况下,训练深度学习模型时使用的数据类型为单精度(FP32)。2018年,百度与NVIDIA联合发表论文:[MIXED PRECISION TRAINING](https://arxiv.org/pdf/1710.03740.pdf),提出了混合精度训练的方法。混合精度训练是指在训练过程中,同时使用单精度(FP32)和半精度(FP16),其目的是相较于使用单精度(FP32)训练模型,在保持精度持平的条件下,能够加速训练。本文将介绍如何使用飞桨框架,实现自动混合精度训练。
## 一、半精度浮点类型 FP16
@@ -57,6 +57,7 @@ import paddle.nn as nn
class SimpleNet(nn.Layer):
def __init__(self, input_size, output_size):
+
super(SimpleNet, self).__init__()
self.linear1 = nn.Linear(input_size, output_size)
self.relu1 = nn.ReLU()
@@ -91,6 +92,10 @@ labels = [paddle.randn((batch_size, output_size)) for _ in range(nums_batch)]
mse = paddle.nn.MSELoss()
```
+ W1110 18:42:02.362493 104 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
+ W1110 18:42:02.367755 104 device_context.cc:465] device: 0, cuDNN Version: 7.6.
+
+
### 3.3 使用默认的训练方式进行训练
@@ -120,10 +125,10 @@ end_timer_and_print("默认耗时:") # 获取结束时间并打印相关信息
```
Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
- [1.24609220])
-
+ [1.24519622])
+
默认耗时:
- 共计耗时 = 2.819 sec
+ 共计耗时 = 2.926 sec
### 3.4 使用AMP训练模型
@@ -138,6 +143,7 @@ end_timer_and_print("默认耗时:") # 获取结束时间并打印相关信息
采用level=’O1‘模式训练:
+
```python
model = SimpleNet(input_size, output_size) # 定义模型
@@ -170,14 +176,15 @@ end_timer_and_print("使用AMP-O1模式耗时:")
```
Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
- [1.24609900])
-
+ [1.24815702])
+
使用AMP-O1模式耗时:
- 共计耗时 = 1.324 sec
+ 共计耗时 = 1.294 sec
采用level=’O2‘模式训练:
+
```python
model = SimpleNet(input_size, output_size) # 定义模型
@@ -212,11 +219,17 @@ print(loss)
end_timer_and_print("使用AMP-O2模式耗时:")
```
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ in ParamBase copy_to func
Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
- [1.24997652])
-
+ [1.25423336])
+
使用AMP-O2模式耗时:
- 共计耗时 = 0.933 sec
+ 共计耗时 = 0.890 sec
## 四、进阶用法
@@ -263,10 +276,11 @@ end_timer_and_print("使用AMP模式耗时:")
```
Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
- [1.24623466])
-
+ [1.25602019])
+
使用AMP模式耗时:
- 共计耗时 = 1.020 sec
+ 共计耗时 = 1.026 sec
+
## 五、总结
-从上面的示例中可以看出,使用自动混合精度训练,O1模式共计耗时约 1.324s,O2模式共计耗时约 0.933s,而普通的训练方式则耗时 2.819s,O1模式训练速度提升约为 2.1倍,O2模式训练速度提升约为 3.0倍。如需更多使用混合精度训练的示例,请参考飞桨模型库: [paddlepaddle/models](https://github.com/PaddlePaddle/models)。
+从上面的示例中可以看出,使用自动混合精度训练,O1模式共计耗时约 1.294s,O2模式共计耗时约 0.890s,而普通的训练方式则耗时 2.926s,O1模式训练速度提升约为 2.3倍,O2模式训练速度提升约为 3.3倍。如需更多使用混合精度训练的示例,请参考飞桨模型库: [paddlepaddle/models](https://github.com/PaddlePaddle/models)。
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/amp_en.ipynb b/docs/guides/01_paddle2.0_introduction/basic_concept/amp_en.ipynb
new file mode 100644
index 00000000000..22c12fcfed1
--- /dev/null
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/amp_en.ipynb
@@ -0,0 +1,453 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# Automatic Mixed Precision Training\n",
+ "\n",
+ "In general, the datatype of training deep learning models is single-precision floating-point format(also called FP32). In 2018, Baidu and NVIDIA jointly published the paper: [MIXED PRECISION TRAINING](https://arxiv.org/pdf/1710.03740.pdf), which proposed mixed precision training. During the process of training, some operators use FP32 and other operators use half precision(also called FP16) in the same time. Its purpose is to speed up training, while compared with the FP32 training model, the same accuracy is maintained. This tutorial will introduce how to use automatic mixed precision training with PaddlePaddle."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 1. Half Precision (FP16)\n",
+ "\n",
+ "First introduce FP16. As shown in Figure 1, FP16 occupies 16 bits (two bytes in modern computers) of computer memory. In the IEEE 754-2008 standard, it is also named binary16. Compared with FP32 and double precision (also called FP64) commonly used, FP16 is more suitable for the usage in scenarios with low precision requirements.\n",
+ "\n",
+ "\n",
+ "
\n",
+ " Figure 1. Half precision(FP16) and single precision(FP32)\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 2. FP16 Computing Power of NVIDIA GPU\n",
+ "\n",
+ "When the same hyperparameters are used, mixed precision training using FP16 and FP32 can achieve the same accuracy as that of pure single precision used, and can accelerate the training speed. It mainly attributes to the features that NVIDIA Volta and NVIDIA Turing use FP16 to calculate:\n",
+ "- FP16 can reduce memory bandwidth and storage requirements by half, which allows researchers to use more complex models and larger batch sizes under the same hardware conditions.\n",
+ "- FP16 can make full use of Tensor Cores technology provided by NVIDIA Volta and NVIDIA Turing. On the same GPU hardware, the computing throughput of Tensor Cores' FP16 is 8 times bigger than that of FP32."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 3. Automatic Mixed Precision Training with PaddlePaddle\n",
+ "\n",
+ "Using PaddlePaddle's API ``paddle.amp.auto_cast`` and ``paddle.amp.GradScaler`` can realize automatic mixed precision training (AMP), which can automatically choose FP16 or FP32 for different operators' calculation. After the AMP mode is turned on, the operator list calculated by FP16 and FP32 can be found in this [document](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/amp/Overview_cn.html). This is a specific example to understand how to use PaddlePaddle to achieve mixed precision training."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.1 Auxiliary Function\n",
+ "First define the auxiliary function to calculate the training time."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import time\n",
+ "\n",
+ "# start time\n",
+ "start_time = None\n",
+ "\n",
+ "def start_timer():\n",
+ " # get start time\n",
+ " global start_time\n",
+ " start_time = time.time()\n",
+ "\n",
+ "def end_timer_and_print(msg):\n",
+ " # print message and total training time\n",
+ " end_time = time.time()\n",
+ " print(\"\\n\" + msg)\n",
+ " print(\"total time = {:.3f} sec\".format(end_time - start_time))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.2 A Simple Network\n",
+ "\n",
+ "Define a simple network to compare the training speed of common methods and mixed precision. The network is composed of three layers of ``Linear``. The first two layers of ``Linear`` are followed by the ``ReLU`` activation function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import paddle\n",
+ "import paddle.nn as nn\n",
+ "\n",
+ "class SimpleNet(nn.Layer):\n",
+ "\n",
+ " def __init__(self, input_size, output_size):\n",
+ " \n",
+ " super(SimpleNet, self).__init__()\n",
+ " self.linear1 = nn.Linear(input_size, output_size)\n",
+ " self.relu1 = nn.ReLU()\n",
+ " self.linear2 = nn.Linear(input_size, output_size)\n",
+ " self.relu2 = nn.ReLU()\n",
+ " self.linear3 = nn.Linear(input_size, output_size)\n",
+ "\n",
+ " def forward(self, x):\n",
+ "\n",
+ " x = self.linear1(x)\n",
+ " x = self.relu1(x)\n",
+ " x = self.linear2(x)\n",
+ " x = self.relu2(x)\n",
+ " x = self.linear3(x)\n",
+ "\n",
+ " return x"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "Set the parameters of training. In order to effectively show the improvement of training speed by mixed precision training, please set the larger values of ``input_size`` and ``output_size``. And in order to use the ``Tensor Core`` provided by GPU, ``batch_size`` needs to be set as a multiple of 8."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "epochs = 5\n",
+ "input_size = 4096 # set to a larger value\n",
+ "output_size = 4096 # set to a larger value\n",
+ "batch_size = 512 # batch_size is a multiple of 8\n",
+ "nums_batch = 50\n",
+ "\n",
+ "train_data = [paddle.randn((batch_size, input_size)) for _ in range(nums_batch)]\n",
+ "labels = [paddle.randn((batch_size, output_size)) for _ in range(nums_batch)]\n",
+ "\n",
+ "mse = paddle.nn.MSELoss()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.3 Training with Default Method"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,\n",
+ " [1.24072289])\n",
+ "\n",
+ "Default time:\n",
+ "total time = 2.935 sec\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = SimpleNet(input_size, output_size) # define model\n",
+ "\n",
+ "optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # define optimizer\n",
+ "\n",
+ "start_timer() # get the start time of training\n",
+ "\n",
+ "for epoch in range(epochs):\n",
+ " datas = zip(train_data, labels)\n",
+ " for i, (data, label) in enumerate(datas):\n",
+ "\n",
+ " output = model(data)\n",
+ " loss = mse(output, label)\n",
+ "\n",
+ " # backpropagation\n",
+ " loss.backward()\n",
+ "\n",
+ " # update parameters\n",
+ " optimizer.step()\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ "print(loss)\n",
+ "end_timer_and_print(\"Default time:\") # print massage and total time"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.4 Training with AMP\n",
+ "\n",
+ "Using automatic mixed precision training with PaddlePaddle requires four steps:\n",
+ "\n",
+ "- Step1: Define ``GradScaler``, which is used to scale the ``loss`` to avoid underflow\n",
+ "- Step2: Use ``decorate``, to do nothing in level='O1' mode without using this api, and in level='O2' mode to convert network parameters from FP32 to FP16\n",
+ "- Step3: Use ``auto_cast`` to create an AMP context, in which the input datatype(FP16 or FP32) of each oprator will be automatically determined\n",
+ "- Step4: Use ``GradScaler`` defined in Step1 to complete the scaling of ``loss``, and use the scaled ``loss`` for backpropagation to complete the training\n",
+ "\n",
+ "In level=’O1‘ mode:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,\n",
+ " [1.24848151])\n",
+ "\n",
+ "AMP time in O1 mode:\n",
+ "total time = 1.299 sec\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = SimpleNet(input_size, output_size) # define model\n",
+ "\n",
+ "optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # define optimizer\n",
+ "\n",
+ "# Step1:define GradScaler\n",
+ "scaler = paddle.amp.GradScaler(init_loss_scaling=1024)\n",
+ "\n",
+ "start_timer() # get start time\n",
+ "\n",
+ "for epoch in range(epochs):\n",
+ " datas = zip(train_data, labels)\n",
+ " for i, (data, label) in enumerate(datas):\n",
+ "\n",
+ " # Step2:create AMP context environment\n",
+ " with paddle.amp.auto_cast():\n",
+ " output = model(data)\n",
+ " loss = mse(output, label)\n",
+ "\n",
+ " # Step3:use GradScaler complete the loss scaling\n",
+ " scaled = scaler.scale(loss)\n",
+ " scaled.backward()\n",
+ "\n",
+ " # update parameters\n",
+ " scaler.minimize(optimizer, scaled)\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ "print(loss)\n",
+ "end_timer_and_print(\"AMP time in O1 mode:\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "In level='O2' mode:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "in ParamBase copy_to func\n",
+ "Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,\n",
+ " [1.25075114])\n",
+ "\n",
+ "AMP time in O2 mode:\n",
+ "total time = 0.888 sec\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = SimpleNet(input_size, output_size) # define model\n",
+ "\n",
+ "optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # define optimizer\n",
+ "\n",
+ "# Step1:define GradScaler\n",
+ "scaler = paddle.amp.GradScaler(init_loss_scaling=1024)\n",
+ "\n",
+ "# Step2:in level='O2' mode, convert network parameters from FP32 to FP16\n",
+ "model, optimizer = paddle.amp.decorate(models=model, optimizers=optimizer, level='O2', master_weight=None, save_dtype=None)\n",
+ "\n",
+ "start_timer() # get start time\n",
+ "\n",
+ "for epoch in range(epochs):\n",
+ " datas = zip(train_data, labels)\n",
+ " for i, (data, label) in enumerate(datas):\n",
+ "\n",
+ " # Step3:create AMP context environment\n",
+ " with paddle.amp.auto_cast(enable=True, custom_white_list=None, custom_black_list=None, level='O2'):\n",
+ " output = model(data)\n",
+ " loss = mse(output, label)\n",
+ "\n",
+ " # Step4:use GradScaler complete the loss scaling\n",
+ " scaled = scaler.scale(loss)\n",
+ " scaled.backward()\n",
+ "\n",
+ " # update parameters\n",
+ " scaler.minimize(optimizer, scaled)\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ "print(loss)\n",
+ "end_timer_and_print(\"AMP time in O2 mode:\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 4. Advanced Usage\n",
+ "### 4.1 Gradient Accumulation\n",
+ "\n",
+ "Gradient accumulation means running a configured number of steps without updating the model variables. Until certain steps, use the accumulated gradients to update the variables.\n",
+ "\n",
+ "In automatic mixed precision training, gradient accumulation is also supported, and the usage is as follows:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,\n",
+ " [1.25853443])\n",
+ "\n",
+ "AMP time:\n",
+ "total time = 1.034 sec\n"
+ ]
+ }
+ ],
+ "source": [
+ "model = SimpleNet(input_size, output_size) # define model\n",
+ "\n",
+ "optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # define optimizer\n",
+ "\n",
+ "accumulate_batchs_num = 10 # the batch numbers of gradients accumulation\n",
+ "\n",
+ "# define GradScaler\n",
+ "scaler = paddle.amp.GradScaler(init_loss_scaling=1024)\n",
+ "\n",
+ "start_timer() # get start time\n",
+ "\n",
+ "for epoch in range(epochs):\n",
+ " datas = zip(train_data, labels)\n",
+ " for i, (data, label) in enumerate(datas):\n",
+ "\n",
+ " # create AMP context environment\n",
+ " with paddle.amp.auto_cast():\n",
+ " output = model(data)\n",
+ " loss = mse(output, label)\n",
+ "\n",
+ " # use GradScaler complete the loss scaling\n",
+ " scaled = scaler.scale(loss)\n",
+ " scaled.backward()\n",
+ "\n",
+ " # when the accumulated batch is accumulate_batchs_num, update the model parameters\n",
+ " if (i + 1) % accumulate_batchs_num == 0:\n",
+ "\n",
+ " # update parameters\n",
+ " scaler.minimize(optimizer, scaled)\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ "print(loss)\n",
+ "end_timer_and_print(\"AMP time:\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 5. Conclusion\n",
+ "\n",
+ "As can be seen from the above example, using the automatic mixed precision training, in O1 mode the total time is about 1.299s, in O2 mode the total time is about 0.888s, while the ordinary training method takes 2.935s, and the training speed is increased by about 2.4 times in O1 mode and 2.4 times in O2 mode. For more examples of using mixed precision training, please refer to paddlepaddle's models: [paddlepaddle/models](https://github.com/PaddlePaddle/models)."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "py35-paddle1.2.0"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/amp_en.md b/docs/guides/01_paddle2.0_introduction/basic_concept/amp_en.md
index ee31dc70ba1..6c5f15edfae 100644
--- a/docs/guides/01_paddle2.0_introduction/basic_concept/amp_en.md
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/amp_en.md
@@ -1,6 +1,6 @@
# Automatic Mixed Precision Training
-In general, the datatype of training deep learning models is single-precision floating-point format(also called FP32). In 2018, Baidu and NVIDIA jointly published the paper: [MIXED PRECISION TRAINING](https://arxiv.org/pdf/1710.03740.pdf), which proposed mixed precision training. During the process of training, some operators use FP32 and other operators use half precision(also called FP16) in the same time. Its purpose is to speed up training, while compared with the FP32 training model, the same accuracy is maintained. This tutorial will introduce how to use automatic mixed precision training with PaddlePaddle.
+In general, the datatype used for training deep learning models is single-precision floating-point format (also called FP32). In 2018, Baidu and NVIDIA jointly published the paper: [MIXED PRECISION TRAINING](https://arxiv.org/pdf/1710.03740.pdf), which proposed mixed precision training. During training, some operators use FP32 while other operators use half precision (also called FP16) at the same time. Its purpose is to speed up training while maintaining the same accuracy as the FP32-trained model. This tutorial will introduce how to use automatic mixed precision training with PaddlePaddle.
## 1. Half Precision (FP16)
@@ -55,6 +55,7 @@ import paddle.nn as nn
class SimpleNet(nn.Layer):
def __init__(self, input_size, output_size):
+
super(SimpleNet, self).__init__()
self.linear1 = nn.Linear(input_size, output_size)
self.relu1 = nn.ReLU()
@@ -118,19 +119,22 @@ end_timer_and_print("Default time:") # print massage and total time
```
Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
- [1.25010288])
-
+ [1.24072289])
+
Default time:
- total time = 2.943 sec
+ total time = 2.935 sec
### 3.4 Training with AMP
-Using automatic mixed precision training with PaddlePaddle requires three steps:
+Using automatic mixed precision training with PaddlePaddle requires four steps:
+
+- Step1: Define ``GradScaler``, which is used to scale the ``loss`` to avoid underflow
+- Step2: Use ``decorate``; in level='O1' mode it does nothing and need not be called, while in level='O2' mode it converts network parameters from FP32 to FP16
+- Step3: Use ``auto_cast`` to create an AMP context, in which the input datatype (FP16 or FP32) of each operator will be automatically determined
+- Step4: Use ``GradScaler`` defined in Step1 to complete the scaling of ``loss``, and use the scaled ``loss`` for backpropagation to complete the training
-- Step1: Define ``GradScaler``, which is used to scale the ``loss`` and ``gradients``to avoid underflow
-- Step2: Use ``auto_cast`` to create an AMP context, in which the input datatype(FP16 or FP32) of each oprator will be automatically determined
-- Step3: Use ``GradScaler`` defined in Step1 to complete the scaling of ``loss``, and use the scaled ``loss`` for backpropagation to complete the training
+In level='O1' mode:
```python
@@ -161,14 +165,64 @@ for epoch in range(epochs):
optimizer.clear_grad()
print(loss)
-end_timer_and_print("AMP time:")
+end_timer_and_print("AMP time in O1 mode:")
```
Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
- [1.23644269])
+ [1.24848151])
+
+ AMP time in O1 mode:
+ total time = 1.299 sec
- AMP time:
- total time = 1.222 sec
+
+In level='O2' mode:
+
+
+```python
+model = SimpleNet(input_size, output_size) # define model
+
+optimizer = paddle.optimizer.SGD(learning_rate=0.0001, parameters=model.parameters()) # define optimizer
+
+# Step1:define GradScaler
+scaler = paddle.amp.GradScaler(init_loss_scaling=1024)
+
+# Step2:in level='O2' mode, convert network parameters from FP32 to FP16
+model, optimizer = paddle.amp.decorate(models=model, optimizers=optimizer, level='O2', master_weight=None, save_dtype=None)
+
+start_timer() # get start time
+
+for epoch in range(epochs):
+ datas = zip(train_data, labels)
+ for i, (data, label) in enumerate(datas):
+
+ # Step3:create AMP context environment
+ with paddle.amp.auto_cast(enable=True, custom_white_list=None, custom_black_list=None, level='O2'):
+ output = model(data)
+ loss = mse(output, label)
+
+ # Step4:use GradScaler complete the loss scaling
+ scaled = scaler.scale(loss)
+ scaled.backward()
+
+ # update parameters
+ scaler.minimize(optimizer, scaled)
+ optimizer.clear_grad()
+
+print(loss)
+end_timer_and_print("AMP time in O2 mode:")
+```
+
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ in ParamBase copy_to func
+ Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
+ [1.25075114])
+
+ AMP time in O2 mode:
+ total time = 0.888 sec
## 4. Advanced Usage
@@ -204,7 +258,7 @@ for epoch in range(epochs):
scaled = scaler.scale(loss)
scaled.backward()
- # when the accumulated batch is accumulate_batchs_num, update the model parameters
+ # when the accumulated batch is accumulate_batchs_num, update the model parameters
if (i + 1) % accumulate_batchs_num == 0:
# update parameters
@@ -216,12 +270,12 @@ end_timer_and_print("AMP time:")
```
Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
- [1.25127280])
-
+ [1.25853443])
+
AMP time:
- total time = 1.006 sec
+ total time = 1.034 sec
## 5. Conclusion
-As can be seen from the above example, using the automatic mixed precision training, the total time is about 1.222s, while the ordinary training method takes 2.943s, and the training speed is increased by about 2.4 times. For more examples of using mixed precision training, please refer to paddlepaddle's models: [paddlepaddle/models](https://github.com/PaddlePaddle/models).
+As can be seen from the above example, with automatic mixed precision training the total time is about 1.299s in O1 mode and about 0.888s in O2 mode, while the ordinary training method takes 2.935s, so the training speed is increased by about 2.3 times in O1 mode and 3.3 times in O2 mode. For more examples of using mixed precision training, please refer to PaddlePaddle's models: [paddlepaddle/models](https://github.com/PaddlePaddle/models).
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/autograd_cn.rst b/docs/guides/01_paddle2.0_introduction/basic_concept/autograd_cn.rst
index 3951f03c09d..fcf36e1d774 100644
--- a/docs/guides/01_paddle2.0_introduction/basic_concept/autograd_cn.rst
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/autograd_cn.rst
@@ -35,7 +35,7 @@ PaddlePaddle的神经网络核心是自动微分,本篇文章主要为你介
.. parsed-literal::
- 2.1.1
+ 2.2.0
本案例首先定义网络。因为本示例着重展示如何使用飞桨进行自动微分,故组网部分不过多展开,直接使用高层API中封装好的模型\ ``vgg11``\ 。
@@ -291,4 +291,4 @@ PaddlePaddle的神经网络核心是自动微分,本篇文章主要为你介
五、总结
------------------------
-本文章主要介绍了如何使用飞桨的自动微分,以及飞桨的自动微分机制。
+本文章主要介绍了如何使用飞桨的自动微分,以及飞桨的自动微分机制。
\ No newline at end of file
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/gradient_clip_cn.rst b/docs/guides/01_paddle2.0_introduction/basic_concept/gradient_clip_cn.rst
index 5f32441212d..7d5cd89b959 100644
--- a/docs/guides/01_paddle2.0_introduction/basic_concept/gradient_clip_cn.rst
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/gradient_clip_cn.rst
@@ -20,6 +20,8 @@ Paddle提供了三种梯度裁剪方式:
.. code:: ipython3
+ import paddle
+
linear = paddle.nn.Linear(10, 10)
clip = paddle.nn.ClipGradByValue(min=-1, max=1)
sdg = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters(), grad_clip=clip)
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/gradient_clip_en.rst b/docs/guides/01_paddle2.0_introduction/basic_concept/gradient_clip_en.rst
index b6d58570b4f..31fd73f8b11 100644
--- a/docs/guides/01_paddle2.0_introduction/basic_concept/gradient_clip_en.rst
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/gradient_clip_en.rst
@@ -20,6 +20,8 @@ By default, Gradients of all parameters in SGD optimizer will be clipped:
.. code:: ipython3
+ import paddle
+
linear = paddle.nn.Linear(10, 10)
clip = paddle.nn.ClipGradByValue(min=-1, max=1)
sdg = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters(), grad_clip=clip)
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/tensor_introduction_cn.md b/docs/guides/01_paddle2.0_introduction/basic_concept/tensor_introduction_cn.md
index 3eb03db37b8..00efa373a39 100644
--- a/docs/guides/01_paddle2.0_introduction/basic_concept/tensor_introduction_cn.md
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/tensor_introduction_cn.md
@@ -81,8 +81,8 @@ array([[1., 2., 3.],
**Tensor**不仅支持 floats、ints 类型数据,也支持 complex numbers数据,如果输入为复数数据,则**Tensor**的dtype为 ``complex64`` 或 ``complex128`` ,其每个元素均为1个复数:
```python
-ndim_2_tensor = paddle.to_tensor([[1.0, 2.0, 3.0],
- [4.0, 5.0, 6.0]])
+ndim_2_tensor = paddle.to_tensor([[(1+1j), (2+2j)],
+ [(3+3j), (4+4j)]])
print(ndim_2_tensor)
```
@@ -473,7 +473,6 @@ x.logical_not(y) #对两个bool型tensor逐元素进行逻辑非操
### 线性代数相关
```python
-x.cholesky() #矩阵的cholesky分解
x.t() #矩阵转置
x.transpose([1, 0]) #交换axis 0 与axis 1的顺序
x.norm('fro') #矩阵的Frobenius 范数
diff --git a/docs/guides/01_paddle2.0_introduction/basic_concept/tensor_introduction_en.md b/docs/guides/01_paddle2.0_introduction/basic_concept/tensor_introduction_en.md
index 9e44ad029a7..f9dfcde4c58 100644
--- a/docs/guides/01_paddle2.0_introduction/basic_concept/tensor_introduction_en.md
+++ b/docs/guides/01_paddle2.0_introduction/basic_concept/tensor_introduction_en.md
@@ -80,8 +80,8 @@ array([[1., 2., 3.],
**Tensor** supports not only floats and ints but also complex numbers data, If input complex number data, the dtype of **Tensor** is ``complex64`` or ``complex128`` :
```python
-ndim_2_tensor = paddle.to_tensor([[1.0, 2.0, 3.0],
- [4.0, 5.0, 6.0]])
+ndim_2_tensor = paddle.to_tensor([[(1+1j), (2+2j)],
+ [(3+3j), (4+4j)]])
print(ndim_2_tensor)
```
@@ -482,7 +482,6 @@ x.logical_not(y) #logic not operation for two bool tensor
### linear algebra operators
```python
-x.cholesky() #cholesky decomposition of a matrix
x.t() #matrix transpose
x.transpose([1, 0]) #swap axis 0 with axis 1
x.norm('fro') #Frobenius Norm of matrix
diff --git a/docs/guides/01_paddle2.0_introduction/load_old_format_model.rst b/docs/guides/01_paddle2.0_introduction/load_old_format_model_cn.rst
similarity index 100%
rename from docs/guides/01_paddle2.0_introduction/load_old_format_model.rst
rename to docs/guides/01_paddle2.0_introduction/load_old_format_model_cn.rst
diff --git a/docs/guides/01_paddle2.0_introduction/migration_cn.rst b/docs/guides/01_paddle2.0_introduction/migration_cn.rst
index f04a2ee8835..94f9e2ee60d 100644
--- a/docs/guides/01_paddle2.0_introduction/migration_cn.rst
+++ b/docs/guides/01_paddle2.0_introduction/migration_cn.rst
@@ -66,7 +66,7 @@ paddle_upgrade_tool 可以使用下面的方式,快速使用:
开始
^^^^
-在使用paddle_upgrade_tool前,需要确保已经安装了Paddle 2.0.0版本。
+在使用paddle_upgrade_tool前,需要确保已经安装了Paddle 2.0.0+版本。
.. code:: ipython3
diff --git a/docs/guides/01_paddle2.0_introduction/update_cn.md b/docs/guides/01_paddle2.0_introduction/update_cn.md
index 2e1c44ab4ac..7f367547d13 100644
--- a/docs/guides/01_paddle2.0_introduction/update_cn.md
+++ b/docs/guides/01_paddle2.0_introduction/update_cn.md
@@ -558,5 +558,5 @@ https://github.com/PaddlePaddle/paddle_upgrade_tool
### 2.0文档教程
以下提供了2.0版本的一些示例教程:
-你可以在官网[应用实践](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/tutorial/index_cn.html)栏目内进行在线浏览,也可以下载在这里提供的源代码:
-https://github.com/PaddlePaddle/book/tree/develop/paddle2.0_docs
+你可以在官网[应用实践](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/practices/index_cn.html)栏目内进行在线浏览,也可以下载在这里提供的源代码:
+https://github.com/PaddlePaddle/docs/tree/develop/docs/practices
diff --git a/docs/guides/02_paddle2.0_develop/05_train_eval_predict_cn.rst b/docs/guides/02_paddle2.0_develop/05_train_eval_predict_cn.rst
index 789be2a9394..3c2182c9b33 100644
--- a/docs/guides/02_paddle2.0_develop/05_train_eval_predict_cn.rst
+++ b/docs/guides/02_paddle2.0_develop/05_train_eval_predict_cn.rst
@@ -7,7 +7,7 @@
.. note::
- 高层API实现的模型训练与预测如\ ``Model.fit()、Model.evaluate()、Model.predict()``\ 都可以通过基础API实现,本文先介绍高层API的训练方式,然后会将高层API拆解为基础API的方式,方便对比学习。最后会补充介绍如何使用paddle inference进行预测。
+ 高层API实现的模型训练与预测如\ ``Model.fit()、Model.evaluate()、Model.predict()``\ 都可以通过基础API实现,本文先介绍高层API的训练方式,然后会将高层API拆解为基础API的方式,方便对比学习。
一、训练前准备
---------------------
@@ -137,11 +137,6 @@ numpy_ndarray_n是对应原始数据经过模型计算后得到的预测数据
除了通过第一部分的高层API实现模型的训练与预测,飞桨框架也同样支持通过基础API对模型进行训练与预测。简单来说,\ ``Model.prepare()、Model.fit()、Model.evaluate()、Model.predict()``\ 都是由基础API封装而来。下面通过拆解高层API到基础API的方式,来了解如何用基础API完成模型的训练与预测。
-
-.. note::
-
- 对于网络模型的创建你依旧可以选择Sequential组网方式,也可以采用SubClass组网方式,为方便后续使用paddle inference进行预测,我们使用SubClass组网方式创建网络,若后续使用paddle inference预测,需通过paddle.jit.save保存适用于预测部署的模型,并在forward函数前加@paddle.jit.to_static装饰器,将函数内的动态图API转化为静态图API。
-
.. code:: ipython3
# 定义网络结构( 采用SubClass 组网 )
@@ -153,9 +148,7 @@ numpy_ndarray_n是对应原始数据经过模型计算后得到的预测数据
self.linear_2 = paddle.nn.Linear(512, 10)
self.relu = paddle.nn.ReLU()
self.dropout = paddle.nn.Dropout(0.2)
-
- #后续若不使用paddle inferece,可对 @paddle.jit.to_static 进行注释
- @paddle.jit.to_static
+
def forward(self, inputs):
y = self.flatten(inputs)
y = self.linear_1(y)
@@ -214,9 +207,6 @@ numpy_ndarray_n是对应原始数据经过模型计算后得到的预测数据
# 梯度清零
optim.clear_grad()
- ##保存模型,会生成*.pdmodel、*.pdiparams、*.pdiparams.info三个模型文件
- path='./mnist/inference_model'
- paddle.jit.save(layer=mnist,path=path)
.. parsed-literal::
@@ -284,101 +274,3 @@ numpy_ndarray_n是对应原始数据经过模型计算后得到的预测数据
.. parsed-literal::
predict finished
-
-
-部署预测模型
-=====================
-其中预测方法除以上两种外,还可采用原生推理库paddle inference 进行推理部署,该方法支持TeansorRT加速,支持第三方框架模型,支持量化、裁剪后的模型,适合于工业部署或对推理性能、通用性有要求的用户。
-
-
-四、通过paddle inference实现预测
------------------------------------------
-
-paddle inference与model.predict()以及基础API的预测相比,可使用MKLDNN、CUDNN、TensorRT进行预测加速,同时支持用 X2Paddle 工具从第三方框架(TensorFlow、Pytorh 、 Caffe 等)产出的模型,可联动PaddleSlim,支持加载量化、裁剪和蒸馏后的模型部署。针对不同平台不同的应用场景进行了深度的适配优化,保证模型在服务器端即训即用,快速部署。在这里,我们只简单的展示如何用paddle inference实现该模型的部署预测。
-
-4.1 准备预测部署模型
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-要使用paddle inference预测需得到paddle预测格式的模型,所以你需要在训练过程中通过 paddle.jit.save(layer=mnist,path=path) 来保存模型,注意在训练时在forward函数前加@paddle.jit.to_static装饰器,将函数内的动态图API转化为静态图API。在第三章节基础API模型的训练中已加入相关配置。
-
-.. code:: ipython3
-
- #模型目录如下:
- mnist/
- ├── inference.pdmodel
- ├── inference.pdiparams.info
- └── inference.pdiparams
-4.2 准备预测部署程序
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-将以下代码保存为python_demo.py文件:
-
-.. code:: ipython3
-
- import argparse
- import numpy as np
- from skimage import transform,data
-
- # 引用 paddle inference 预测库
- import paddle.inference as paddle_infer
- from PIL import Image
-
- def main():
- args = parse_args()
-
- # 创建 config
- config = paddle_infer.Config(args.model_file, args.params_file)
-
- # 根据 config 创建 predictor
- predictor = paddle_infer.create_predictor(config)
-
- # 获取输入的名称
- input_names = predictor.get_input_names()
- input_handle = predictor.get_input_handle(input_names[0])
-
- # 设置输入,自定义一张输入照片,图片大小为28*28
- im=Image.open('./img3.png').convert('L')
- im=np.array(im).reshape(1,1,28,28).astype(np.float32)
- input_handle.copy_from_cpu(im)
-
- # 运行predictor
- predictor.run()
-
- # 获取输出
- output_names = predictor.get_output_names()
- output_handle = predictor.get_output_handle(output_names[0])
- output_data = output_handle.copy_to_cpu() # numpy.ndarray类型,是10个分类的概率
- print(output_data)
- print("Output data size is {}".format(output_data.size))
- print("Output data shape is {}".format(output_data.shape))
- pred=np.argmax(output_data) #选出概率最大的一个
- print("The predicted data is : {}".format(pred.item()))
-
- def parse_args():
- parser = argparse.ArgumentParser()
- parser.add_argument("--model_file", type=str, help="model filename")
- parser.add_argument("--params_file", type=str, help="parameter filename")
- parser.add_argument("--batch_size", type=int, default=1, help="batch size")
- return parser.parse_args()
-
- if __name__ == "__main__":
- main()
-
-
-4.3 执行预测程序
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. code:: ipython3
-
- python python_demo.py --model_file ./mnist/inference_model.pdmodel --params_file ./mnist/inference_model.pdiparams --batch_size 2
-
-.. parsed-literal::
-
- #输出如下
-
- [[-1347.5923 -1156.918 -774.73865 3387.0623 -1553.3696 107.96879
- -2631.2185 -701.50323 -1094.3896 206.71666]]
- Output data size is 10
- Output data shape is (1, 10)
- The predicted data is : 3
-
-详细教程可参照paddle inference文档:https://paddle-inference.readthedocs.io/en/latest/quick_start/python_demo.html
-
diff --git a/docs/guides/performance_improving/index_cn.rst b/docs/guides/performance_improving/index_cn.rst
index 241893eca6b..64faa2caf93 100644
--- a/docs/guides/performance_improving/index_cn.rst
+++ b/docs/guides/performance_improving/index_cn.rst
@@ -2,6 +2,11 @@
性能调优
########
+你可以通过以下内容,了解飞桨框架性能调优相关的内容:
+
+- `模型量化 <./quantization.html>`_ : 使用飞桨框架进行模型量化。
+
.. toctree::
- :maxdepth: 1
+ :hidden:
+ quantization.md
\ No newline at end of file
diff --git a/docs/install/docker/fromdocker.rst b/docs/install/docker/fromdocker.rst
index aa25d82d3d7..62905f664d7 100644
--- a/docs/install/docker/fromdocker.rst
+++ b/docs/install/docker/fromdocker.rst
@@ -5,5 +5,4 @@
.. toctree::
:maxdepth: 1
- linux-docker.md
macos-docker.md
diff --git a/docs/install/docker/fromdocker_en.rst b/docs/install/docker/fromdocker_en.rst
index c0b2b487411..af6a1a7fafe 100644
--- a/docs/install/docker/fromdocker_en.rst
+++ b/docs/install/docker/fromdocker_en.rst
@@ -5,5 +5,4 @@
.. toctree::
- linux-docker_en.md
macos-docker_en.md
diff --git a/docs/practices/cv/image_ocr.ipynb b/docs/practices/cv/image_ocr.ipynb
new file mode 100644
index 00000000000..d3b9c516c16
--- /dev/null
+++ b/docs/practices/cv/image_ocr.ipynb
@@ -0,0 +1,722 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# 通过OCR实现验证码识别\n",
+ "\n",
+ "**作者:** [GT_老张](https://github.com/GT-ZhangAcer) \n",
+ "\n",
+ "**时间:** 2021.11\n",
+ "\n",
+ "**摘要:** 本篇将介绍如何通过飞桨实现简单的CRNN+CTC自定义数据集OCR识别模型,数据集采用[CaptchaDataset](https://github.com/GT-ZhangAcer/CaptchaDataset)中OCR部分的9453张图像,其中前8453张图像在本案例中作为训练集,后1000张则作为测试集。 \n",
+ "在更复杂的场景中推荐使用[PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)产出工业级模型,模型轻量且精度大幅提升。 \n",
+ "同样也可以在[PaddleHub](https://www.paddlepaddle.org.cn/hubdetail?name=chinese_ocr_db_crnn_mobile&en_category=TextRecognition)中快速使用PaddleOCR。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 一、环境配置\n",
+ "\n",
+ "本教程基于Paddle 2.2.0 编写,如果你的环境不是本版本,请先参考官网[安装](https://www.paddlepaddle.org.cn/install/quick) PaddlePaddle 2.2 。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "2.2.0\n"
+ ]
+ }
+ ],
+ "source": [
+ "import paddle\n",
+ "print(paddle.__version__)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 二、自定义数据集读取器\n",
+ "\n",
+ "常见的开发任务中,我们并不一定会拿到标准的数据格式,好在我们可以通过自定义Reader的形式来随心所欲读取自己想要数据。 \n",
+ "\n",
+ "设计合理的Reader往往可以带来更好的性能,我们可以将读取标签文件列表、制作图像文件列表等必要操作在`__init__`特殊方法中实现。这样就可以在实例化`Reader`时装入内存,避免使用时频繁读取导致增加额外开销。同样我们可以在`__getitem__`特殊方法中实现如图像增强、归一化等个性操作,完成数据读取后即可释放该部分内存。 \n",
+ "需要我们注意的是,如果不能保证自己数据十分纯净,可以通过`try`和`expect`来捕获异常并指出该数据的位置。当然也可以制定一个策略,使其在发生数据读取异常后依旧可以正常进行训练。 "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.1 数据展示\n",
+ "
\n",
+ "

\n",
+ "
\n",
+ "\n",
+ "点此[快速获取本节数据集](https://aistudio.baidu.com/aistudio/datasetdetail/57285),待数据集下载完毕后可使用`!unzip OCR_Dataset.zip -d data/`命令或熟悉的解压软件进行解压,待数据准备工作完成后修改本文“训练准备”中的`DATA_PATH = 解压后数据集路径`。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 下载数据集 \n",
+ "!wget -O OCR_Dataset.zip https://bj.bcebos.com/v1/ai-studio-online/c91f50ef72de43b090298a38281e9c59a2d741eadd334f1cba7c710c5496e342?responseContentDisposition=attachment%3B%20filename%3DOCR_Dataset.zip&authorization=bce-auth-v1%2F0ef6765c1e494918bc0d4c3ca3e5c6d1%2F2020-10-27T09%3A50%3A21Z%2F-1%2F%2Fddc4aebed803af6c57dac46abba42d207961b78e7bc81744e8388395979b66fa"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 解压数据集\n",
+ "!unzip OCR_Dataset.zip -d data/"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "import PIL.Image as Image\n",
+ "import numpy as np\n",
+ "from paddle.io import Dataset\n",
+ "\n",
+ "# 图片信息配置 - 通道数、高度、宽度\n",
+ "IMAGE_SHAPE_C = 3\n",
+ "IMAGE_SHAPE_H = 30\n",
+ "IMAGE_SHAPE_W = 70\n",
+ "# 数据集图片中标签长度最大值设置 - 因图片中均为4个字符,故该处填写为4即可\n",
+ "LABEL_MAX_LEN = 4\n",
+ "\n",
+ "\n",
+ "class Reader(Dataset):\n",
+ " def __init__(self, data_path: str, is_val: bool = False):\n",
+ " \"\"\"\n",
+ " 数据读取Reader\n",
+ " :param data_path: Dataset路径\n",
+ " :param is_val: 是否为验证集\n",
+ " \"\"\"\n",
+ " super().__init__()\n",
+ " self.data_path = data_path\n",
+ " # 读取Label字典\n",
+ " with open(os.path.join(self.data_path, \"label_dict.txt\"), \"r\", encoding=\"utf-8\") as f:\n",
+ " self.info = eval(f.read())\n",
+ " # 获取文件名列表\n",
+ " self.img_paths = [img_name for img_name in self.info]\n",
+ " # 将数据集后1024张图片设置为验证集,当is_val为真时img_path切换为后1024张\n",
+ " self.img_paths = self.img_paths[-1024:] if is_val else self.img_paths[:-1024]\n",
+ "\n",
+ " def __getitem__(self, index):\n",
+ " # 获取第index个文件的文件名以及其所在路径\n",
+ " file_name = self.img_paths[index]\n",
+ " file_path = os.path.join(self.data_path, file_name)\n",
+ " # 捕获异常 - 在发生异常时终止训练\n",
+ " try:\n",
+ " # 使用Pillow来读取图像数据\n",
+ " img = Image.open(file_path)\n",
+ " # 转为Numpy的array格式并整体除以255进行归一化\n",
+ " img = np.array(img, dtype=\"float32\").reshape((IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W)) / 255\n",
+ " except Exception as e:\n",
+ " raise Exception(file_name + \"\\t文件打开失败,请检查路径是否准确以及图像文件完整性,报错信息如下:\\n\" + str(e))\n",
+ " # 读取该图像文件对应的Label字符串,并进行处理\n",
+ " label = self.info[file_name]\n",
+ " label = list(label)\n",
+ " # 将label转化为Numpy的array格式\n",
+ " label = np.array(label, dtype=\"int32\")\n",
+ "\n",
+ " return img, label\n",
+ "\n",
+ " def __len__(self):\n",
+ " # 返回每个Epoch中图片数量\n",
+ " return len(self.img_paths)"
+ ]
+ },
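+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "collapsed": false
+   },
+   "source": [
+    "下面给出一个最小的使用示意(假设数据集已按后文方式解压到 `data/OCR_Dataset`,路径仅为示例),用于验证 `Reader` 能否正常读取单条数据:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# 实例化训练集 Reader 并读取第一条样本,检查图像形状与标签(路径为示例,请按实际解压位置修改)\n",
+    "reader = Reader(data_path=\"data/OCR_Dataset\")\n",
+    "img, label = reader[0]\n",
+    "print(img.shape, label)  # 预期得到形状为 (3, 30, 70) 的图像与长度为 4 的标签\n",
+    "print(\"训练集样本数:\", len(reader))"
+   ]
+  },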
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 三、模型配置"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.1 定义模型结构以及模型输入\n",
+ "\n",
+ "模型方面使用的简单的CRNN-CTC结构,输入形为CHW的图像在经过CNN->Flatten->Linear->RNN->Linear后输出图像中每个位置所对应的字符概率。考虑到CTC解码器在面对图像中元素数量不一、相邻元素重复时会存在无法正确对齐等情况,故额外添加一个类别代表“分隔符”进行改善。\n",
+ "\n",
+ "CTC相关论文:[Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neu](http://people.idsia.ch/~santiago/papers/icml2006.pdf) \n",
+ "\n",
+ "\n",
+ "

\n",
+ "
\n",
+ "\n",
+ "网络部分,因本篇采用数据集较为简单且图像尺寸较小并不适合较深层次网络。若在对尺寸较大的图像进行模型构建,可以考虑使用更深层次网络/注意力机制来完成。当然也可以通过目标检测形式先检出文本位置,然后进行OCR部分模型构建。\n",
+ "\n",
+ "\n",
+ "

\n",
+ "
\n",
+ "\n",
+ "PaddleOCR效果图\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import paddle\n",
+ "\n",
+ "# 分类数量设置 - 因数据集中共包含0~9共10种数字+分隔符,所以是11分类任务\n",
+ "CLASSIFY_NUM = 11\n",
+ "\n",
+ "# 定义输入层,shape中第0维使用-1则可以在预测时自由调节batch size\n",
+ "input_define = paddle.static.InputSpec(shape=[-1, IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W],\n",
+ " dtype=\"float32\",\n",
+ " name=\"img\")\n",
+ "\n",
+ "# 定义网络结构\n",
+ "class Net(paddle.nn.Layer):\n",
+ " def __init__(self, is_infer: bool = False):\n",
+ " super().__init__()\n",
+ " self.is_infer = is_infer\n",
+ "\n",
+ " # 定义一层3x3卷积+BatchNorm\n",
+ " self.conv1 = paddle.nn.Conv2D(in_channels=IMAGE_SHAPE_C,\n",
+ " out_channels=32,\n",
+ " kernel_size=3)\n",
+ " self.bn1 = paddle.nn.BatchNorm2D(32)\n",
+ " # 定义一层步长为2的3x3卷积进行下采样+BatchNorm\n",
+ " self.conv2 = paddle.nn.Conv2D(in_channels=32,\n",
+ " out_channels=64,\n",
+ " kernel_size=3,\n",
+ " stride=2)\n",
+ " self.bn2 = paddle.nn.BatchNorm2D(64)\n",
+ " # 定义一层1x1卷积压缩通道数,输出通道数设置为比LABEL_MAX_LEN稍大的定值可获取更优效果,当然也可设置为LABEL_MAX_LEN\n",
+ " self.conv3 = paddle.nn.Conv2D(in_channels=64,\n",
+ " out_channels=LABEL_MAX_LEN + 4,\n",
+ " kernel_size=1)\n",
+ " # 定义全连接层,压缩并提取特征(可选)\n",
+ " self.linear = paddle.nn.Linear(in_features=429,\n",
+ " out_features=128)\n",
+ " # 定义RNN层来更好提取序列特征,此处为双向LSTM输出为2 x hidden_size,可尝试换成GRU等RNN结构\n",
+ " self.lstm = paddle.nn.LSTM(input_size=128,\n",
+ " hidden_size=64,\n",
+ " direction=\"bidirectional\")\n",
+ " # 定义输出层,输出大小为分类数\n",
+ " self.linear2 = paddle.nn.Linear(in_features=64 * 2,\n",
+ " out_features=CLASSIFY_NUM)\n",
+ "\n",
+ " def forward(self, ipt):\n",
+ " # 卷积 + ReLU + BN\n",
+ " x = self.conv1(ipt)\n",
+ " x = paddle.nn.functional.relu(x)\n",
+ " x = self.bn1(x)\n",
+ " # 卷积 + ReLU + BN\n",
+ " x = self.conv2(x)\n",
+ " x = paddle.nn.functional.relu(x)\n",
+ " x = self.bn2(x)\n",
+ " # 卷积 + ReLU\n",
+ " x = self.conv3(x)\n",
+ " x = paddle.nn.functional.relu(x)\n",
+ " # 将3维特征转换为2维特征 - 此处可以使用reshape代替\n",
+ " x = paddle.tensor.flatten(x, 2)\n",
+ " # 全连接 + ReLU\n",
+ " x = self.linear(x)\n",
+ " x = paddle.nn.functional.relu(x)\n",
+ " # 双向LSTM - [0]代表取双向结果,[1][0]代表forward结果,[1][1]代表backward结果,详细说明可在官方文档中搜索'LSTM'\n",
+ " x = self.lstm(x)[0]\n",
+ " # 输出层 - Shape = (Batch Size, Max label len, Signal) \n",
+ " x = self.linear2(x)\n",
+ "\n",
+ " # 在计算损失时ctc-loss会自动进行softmax,所以在预测模式中需额外做softmax获取标签概率\n",
+ " if self.is_infer:\n",
+ " # 输出层 - Shape = (Batch Size, Max label len, Prob) \n",
+ " x = paddle.nn.functional.softmax(x)\n",
+ " # 转换为标签\n",
+ " x = paddle.argmax(x, axis=-1)\n",
+ " return x"
+ ]
+ },
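+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# (可选)使用 paddle.summary 查看各层输出形状,\n",
+ "# 可以借此确认全连接层 in_features=429 与卷积输出的特征图尺寸(13 x 33)一致\n",
+ "paddle.summary(Net(), (1, IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W))"
+ ]
+ },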
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 四、训练准备"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 4.1 定义label输入以及超参数\n",
+ "监督训练需要定义label,预测则不需要该步骤。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 数据集路径设置\n",
+ "DATA_PATH = \"./data/OCR_Dataset\"\n",
+ "# 训练轮数\n",
+ "EPOCH = 10\n",
+ "# 每批次数据大小\n",
+ "BATCH_SIZE = 16\n",
+ "\n",
+ "label_define = paddle.static.InputSpec(shape=[-1, LABEL_MAX_LEN],\n",
+ " dtype=\"int32\",\n",
+ " name=\"label\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 4.2 定义CTC Loss\n",
+ "\n",
+ "了解CTC解码器效果后,我们需要在训练中让模型尽可能接近这种类型输出形式,那么我们需要定义一个CTC Loss来计算模型损失。不必担心,在飞桨框架中内置了多种Loss,无需手动复现即可完成损失计算。\n",
+ " \n",
+ "使用文档:[CTCLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-beta/api/paddle/nn/functional/loss/ctc_loss_cn.html#ctc-loss)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "class CTCLoss(paddle.nn.Layer):\n",
+ " def __init__(self):\n",
+ " \"\"\"\n",
+ " 定义CTCLoss\n",
+ " \"\"\"\n",
+ " super().__init__()\n",
+ "\n",
+ " def forward(self, ipt, label):\n",
+ " input_lengths = paddle.full(shape=[BATCH_SIZE],fill_value=LABEL_MAX_LEN + 4,dtype= \"int64\")\n",
+ " label_lengths = paddle.full(shape=[BATCH_SIZE],fill_value=LABEL_MAX_LEN,dtype= \"int64\")\n",
+ " # 按文档要求进行转换dim顺序\n",
+ " ipt = paddle.tensor.transpose(ipt, [1, 0, 2])\n",
+ " # 计算loss\n",
+ " loss = paddle.nn.functional.ctc_loss(ipt, label, input_lengths, label_lengths, blank=10)\n",
+ " return loss"
+ ]
+ },
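+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# (可选)用随机数据做一次前向检查,确认CTCLoss的输入输出形状符合预期\n",
+ "# 模型输出形状为 (BATCH_SIZE, LABEL_MAX_LEN + 4, CLASSIFY_NUM),标签形状为 (BATCH_SIZE, LABEL_MAX_LEN)\n",
+ "fake_logits = paddle.randn([BATCH_SIZE, LABEL_MAX_LEN + 4, CLASSIFY_NUM])\n",
+ "fake_label = paddle.randint(low=0, high=10, shape=[BATCH_SIZE, LABEL_MAX_LEN], dtype=\"int32\")\n",
+ "print(CTCLoss()(fake_logits, fake_label))  # 输出为一个标量loss"
+ ]
+ },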
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 4.3 实例化模型并配置优化策略"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 实例化模型\n",
+ "model = paddle.Model(Net(), inputs=input_define, labels=label_define)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 定义优化器\n",
+ "optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters())\n",
+ "\n",
+ "# 为模型配置运行环境并设置该优化策略\n",
+ "model.prepare(optimizer=optimizer,\n",
+ " loss=CTCLoss())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 五、开始训练\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "The loss value printed in the log is the current step, and the metric is the average value of previous steps.\n",
+ "Epoch 1/10\n",
+ "step 526/526 [==============================] - loss: 0.2182 - 13ms/step \n",
+ "save checkpoint at /home/aistudio/output/0\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.1953 - 6ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 2/10\n",
+ "step 526/526 [==============================] - loss: 0.1394 - 10ms/step \n",
+ "save checkpoint at /home/aistudio/output/1\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0416 - 5ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 3/10\n",
+ "step 526/526 [==============================] - loss: 0.0296 - 9ms/step \n",
+ "save checkpoint at /home/aistudio/output/2\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0327 - 6ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 4/10\n",
+ "step 526/526 [==============================] - loss: 0.0150 - 9ms/step \n",
+ "save checkpoint at /home/aistudio/output/3\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0228 - 5ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 5/10\n",
+ "step 526/526 [==============================] - loss: 0.0102 - 9ms/step \n",
+ "save checkpoint at /home/aistudio/output/4\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0161 - 6ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 6/10\n",
+ "step 526/526 [==============================] - loss: 0.1300 - 10ms/step \n",
+ "save checkpoint at /home/aistudio/output/5\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0164 - 5ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 7/10\n",
+ "step 526/526 [==============================] - loss: 0.0199 - 9ms/step \n",
+ "save checkpoint at /home/aistudio/output/6\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0121 - 5ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 8/10\n",
+ "step 526/526 [==============================] - loss: 0.0060 - 9ms/step \n",
+ "save checkpoint at /home/aistudio/output/7\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0133 - 5ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 9/10\n",
+ "step 526/526 [==============================] - loss: 0.0084 - 11ms/step \n",
+ "save checkpoint at /home/aistudio/output/8\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0098 - 5ms/step \n",
+ "Eval samples: 1024\n",
+ "Epoch 10/10\n",
+ "step 526/526 [==============================] - loss: 0.0100 - 9ms/step \n",
+ "save checkpoint at /home/aistudio/output/9\n",
+ "Eval begin...\n",
+ "step 64/64 [==============================] - loss: 0.0109 - 10ms/step \n",
+ "Eval samples: 1024\n",
+ "save checkpoint at /home/aistudio/output/final\n"
+ ]
+ }
+ ],
+ "source": [
+ "# 执行训练\n",
+ "model.fit(train_data=Reader(DATA_PATH),\n",
+ " eval_data=Reader(DATA_PATH, is_val=True),\n",
+ " batch_size=BATCH_SIZE,\n",
+ " epochs=EPOCH,\n",
+ " save_dir=\"output/\",\n",
+ " save_freq=1,\n",
+ " verbose=1,\n",
+ " drop_last=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 六、预测前准备"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 6.1 像定义训练Reader一样定义预测Reader"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 与训练近似,但不包含Label\n",
+ "class InferReader(Dataset):\n",
+ " def __init__(self, dir_path=None, img_path=None):\n",
+ " \"\"\"\n",
+ " 数据读取Reader(预测)\n",
+ " :param dir_path: 预测对应文件夹(二选一)\n",
+ " :param img_path: 预测单张图片(二选一)\n",
+ " \"\"\"\n",
+ " super().__init__()\n",
+ " if dir_path:\n",
+ " # 获取文件夹中所有图片路径\n",
+ " self.img_names = [i for i in os.listdir(dir_path) if os.path.splitext(i)[1] == \".jpg\"]\n",
+ " self.img_paths = [os.path.join(dir_path, i) for i in self.img_names]\n",
+ " elif img_path:\n",
+ " self.img_names = [os.path.split(img_path)[1]]\n",
+ " self.img_paths = [img_path]\n",
+ " else:\n",
+ " raise Exception(\"请指定需要预测的文件夹或对应图片路径\")\n",
+ "\n",
+ " def get_names(self):\n",
+ " \"\"\"\n",
+ " 获取预测文件名顺序 \n",
+ " \"\"\"\n",
+ " return self.img_names\n",
+ "\n",
+ " def __getitem__(self, index):\n",
+ " # 获取图像路径\n",
+ " file_path = self.img_paths[index]\n",
+ " # 使用Pillow来读取图像数据并转成Numpy格式\n",
+ " img = Image.open(file_path)\n",
+ " img = np.array(img, dtype=\"float32\").reshape((IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W)) / 255\n",
+ " return img\n",
+ "\n",
+ " def __len__(self):\n",
+ " return len(self.img_paths)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 6.2 参数设置"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 待预测目录 - 可在测试数据集中挑出\\b3张图像放在该目录中进行推理\n",
+ "INFER_DATA_PATH = \"./sample_img\"\n",
+ "# 训练后存档点路径 - final 代表最终训练所得模型\n",
+ "CHECKPOINT_PATH = \"./output/final.pdparams\"\n",
+ "# 每批次处理数量\n",
+ "BATCH_SIZE = 32"
+ ]
+ },
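+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# (可选)预测前先确认待预测目录与参数文件是否就绪,避免后续步骤报错\n",
+ "import os\n",
+ "\n",
+ "if os.path.exists(INFER_DATA_PATH):\n",
+ "    print(\"待预测图像:\", [i for i in os.listdir(INFER_DATA_PATH) if i.endswith(\".jpg\")])\n",
+ "else:\n",
+ "    print(f\"目录 {INFER_DATA_PATH} 不存在,请先放入待预测图像\")\n",
+ "print(\"模型参数文件是否存在:\", os.path.exists(CHECKPOINT_PATH))"
+ ]
+ },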
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 6.3 展示待预测数据"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAkMAAABmCAYAAADIx5U3AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvIxREBQAAIABJREFUeJztfXeYZVWV/Tr35Vf1XuWuqk50NxlBxcCgMqIYGJURTCMiyQQKJgyjjAkZTKCiSMuYEBQBRWRARQX0pyM6oyMKAhI7h6quXPVyuvf3x97n7v2qHtVVDNCNddb39dev7rnh3HvPOfectdfe2wRBAAcHBwcHBweHpQpvT1fAwcHBwcHBwWFPwk2GHBwcHBwcHJY03GTIwcHBwcHBYUnDTYYcHBwcHBwcljTcZMjBwcHBwcFhScNNhhwcHBwcHByWNNxkyMHBwcHBwWFJw02GHBwcHBwcHJY03GTIwcHBwcHBYUkjupidjTEBAES9SLit4TeojP/Wsyt/zvHyu8GBrwcHB8Jtw8PDAAAbFNtT+9ttEd7mLzJw9mLjbJvd7zLvcbPvfW+BfT+tnsdCn9HsZ9PqWQWz/m8qa3FAJJi7v8eVbfitygx8P0AQtDrbEwdjTGBgEDGqTwTUJ6Imyn/X9RH8f8B/SfWDlttml3mqzLYy2n/lipVh2bahrbNPEMLjunr86ALVWlu/S/+RTjWnnq3PYObsNR/MrP9bXWexbVVqMPdI+7yD3Wx7tLH67XhZ9+tNddFnNS2fkZlVJvBbjDCe58H3ffh7QZ8AgEhE2mqDO7HHHwFfZT6w3wW7ybQY9x0cNFTTCr8P85Ut5DuxqMkQHRBFT3tH+Pf0zDhVgP9OqzOW+Rtg65pMSC2ny7T1zDPeHG773Oc+R8dV6GOSisu5GhX6P8vbyhVVtoB6L3Zy0ooym/0059unusjrPVGwj1SPMfb56Wc0ewzS9x6Zta3Vc7DnqultfGDdm3u9dv6jpl5mpo0OmMo1muoJAOlkHPnynn/KBgZxJJGJtYfb8tU8AKA72QkAmCpNhWVe+JGlJxxDLCyr8dPS2xr8lGp89zEjnaISVPlcdM73v+tDYdl7P3U27xSePER7nOoar9J1GkE5LIu0mJBVQeU+6rNPFb5DP2wFEVVqBwMqq8/bC6UsNut/OYNcu6L2D2bNI3QN7Dns8b5q2fYMUT5C1y7CRzbUc6iHk0IzZ//Z0P2lsz0LAJjisbJ50KX3GueJcz3QbTrC9yBPwvBIX2oU59xPe1saM4XiPLV64hAB0NmeDv+emqY+kYgmAQCVWiksi/PtVfnlRtWLr9t1xGMwK13MDLHp0znPgf588/yFbNuj09ZHRmSeMv1sZveBRU/D53mXpkWZrVd7m2zL55tPlWmXL1I+56O+wPayqMlQLBrDQE8vbvjhdeG21SsHAQCVAg34nloFDw72AwDGp2ea/geAeHuGKt7VHW6rVKmDvOtdNJAHdekwU6NDAID+ThrIG3WZDQUebQsWZPVbnGXQBC1WYPMMg4ZfSd20ahWtrt1iW/AYWS/N3HrGGnTupskQX05v82fV31OFlsUxvGyLaibD7si0Xl1NYSr8KavaD7x6tukKNfN+xRQO7RrletG5upcNhmXdvcv3CvYtYiLIJDK46T9vDLetXEkMTaVI7XdqSiZDa/bZh8oq1H7rFfn4xZIJAMDMjPSTSJS6qBej/xOppOwfp4lRucyTGbUkGgaxrCe//mQAQDYqk7XxzSMAgO5kFwAg7SXCsnqVzhV4ii3i+Vc9wpNSVSYfAzuUyDBqfLuNdqpEpS3MHjR1+4rzci6uZr92YKzz6StqtK5H/KZ9oqph2HPYsqaB3Nga82TIlwMbvGNDtdFanX5XLfMXka92nN9LvU5t3K/LODg2Rs97+bI+AEAiJsNuuUSTl0xbak4dCjPUfmIJmQDXmF2KJegcM6VcWPaCY47G3oCIZ9CejuKXv7ot3LbfvvsDAMq8gCmVCmHZ8hX0XCanqc1WqzLux2L2GasXHvC2gJ+LfqnhF9Sf9T9g7HGYvQ8Aw+/LNPiUenSxv+eydrKHHrOjs+o137hfb1G25xFpWvI0o9WEp9VYPHe/hX3rzJwDZZ926iaYnJwMtyWTNOlOJGmM27FjZ1j2gqNfjLHJfIvazYXTDDk4ODg4ODgsabjJkIODg4ODg8OSxqLMZLV6DUO7htDdI5ohL0I0XzJFFOT0uNBXY+N8HJtTMtlUWGaiRHtOju8Ktw0M9PJ1iKrPTY6HZStXkImkXiZaOGpkHleq7J5qXJgJTdDKXmnRykw2e/9Gq8vNa/56tGUauzccsaUDvnp+tuot62z3Ufdnxe8JNsvUqopSZXrZi1odhtg6Gkzxx5PUVrqy0o7qOaLPN2x4KNyWZq2F59H+k1PSHuJxg3Jtz6srG0EDuXIOq1evDrclEmR2MqzyX7FiRVgW8IMsFsk8ottxnU0rbW3KIM7myho7KoyOjoZF7e1EC2ey9Jw2b94clu07uAYAsO2BLQCAQ/Y7OCzryVI/K02QqaK9U67X3kbnyhXF/FKpkkmvYc1kRut1rI2V2oBRDcVjO5Thdu/Hquo4NEFbJaxpSuvHQscE60ChrSZWgDvreACo8c/5dG3RBI1LdWXaarDQwFMmrWSSTGEJNgEbZUq27d3jWsSTYnpcteKpAICd27cDAH7y85/Jcfy8DNsJX/ril4Rl9v1GVB1yRaL842wi7YhLH8rlCwvSTz7eaPgB8oUa1q6Tdu9x2ymW6PsQjckbmZwiM2KlSn2iq1vuKR8KQlRfny0S02/XbjKtTFuzn44u43cfWF3YQscWurY27fhmMW9h7+QjgnmU680l/LzCV9LiXYTQz8WbtU1/j2bbtOW4UonGEN1XoywlqFaprKND2s/o6MyC+8Te+SYcHBwcHBwcHJ4gLM61HuTunG6Tw8bGiNlZt5KEr4loV1hmV7+VMgni2jpELD2dJ1FpEBFx4BtOOhEAcNVV3wEAvOLYF4dlZRbVxXg1lsuLKCrTJqJbQIk6w1ovAjyznX+W+HgwQ63wf52rzq1ngl25WglJW3sCzHOvdZpzR9RKyM7SraC0Whc2oJpnr6QG7e8rAWoqzavugqyo7Uoc7JqcbhMRcLW68LXb4wkPHpKRZBMbsWHDBgBABzsJdHVJn7Bi6oCfXVS5TJZZcN3bv2zO/lZI2tXRGZZNTNEq2wqp16xZE5a19RLbcef//oXOo/rI6mXEYpV96p8z09KX4izMRVT6eCxGDcPzaDUWMXOFpB637UhDeT55dI6Iz2XRuW/MCqcVQQbWQzdtsyt9e4aoXoDOZpl0GIGgeVurJl7ndzE+OhFus+9stQpX8PCmjQCAW267leoQk3s9+mgSLzd8Ysh6e3vDsnvupHewetUqAMBrX/WqsKxconduRfO//fVvwrKdw8SEZjsz4bZYkt51rUHvQjOvfb09GFNi/T0FzwNSKWGDAGBsYgcAIM0eZh0dck9jY8R22mfXaAjT7LUcR2et9Rf
of29mi4Kb2nEzw9HUTsJxW2/1msvMnBLFTmmhdrPvrh8oFngvgjfPI23ub7O+ly3KWtsrvFnXafGiW4Vg4W9BX1/fnHPlp2gc87z5fOEeGY4ZcnBwcHBwcFjSMPPZBmfDMyaIecCGh/4SbrvtFzcDAPITbPctyyozyytjL0armYaR1WaRV2PtHT3htvGpaf5Fc8mBZcIkHfUPzwYAVFnLsHKFrHSLM7Nm/ItmYBYbHGH3559Pc7QnYd2OW7pHtnDFbwWrc4gxe9BoyEot5jWfeNcu0YTd/9CDTdvqvhx3zMtfDgBYvXqfcNuGjZsBAGnWFhWKEk7h2c85CuUq0PD3bIA5z3hBHEk8fN/D4bZKiRiwBAdMsbZsAIhGeGXTTe1+bGwsLLPPUWtRJqZppT+4fDkAIKUCbNhj7RPIK7Y0xiyMYVamv1v6S4NjfMUjxLzpIWB0gs7Zv0JWXqM5Wrn73tygL1YrEWE3+mhDVmVR3uaxwKeUkBg4YQSGFpqehbjptupfwTwMp5nFEAESIiKepWeaTktcHMvUDI0MhdvaszSe9S4j5m7rts1h2e9+/3sAEjKhr1fGtWOOOQYAMDVOzJNtAwCQSdO1CwXSb3V2CvMXidH70UxJpUZtaSLHoUyicrO/+u3/w0Vf/AK2btu2x4MuRjxg5867w20BM1gRjppbV4yx/QZlsxyXS7FbsZiONmUxa/xtGu9tI4jO2deYAjSCecc7TT3ODRsRbgtasBCW/Qnd9bWm0t433bPvZ+epw57DfK71Cw8mtHAd7EI/l4FPz09/c2y4hoFBYl59FZG5s2MFGlhY0EXHDDk4ODg4ODgsabjJkIODg4ODg8OSxqLMZMaYIApg19B94bYc0+opdpVMJ8QUFo0xPc65M6byElm0bzmJOLfsFFfhtgxRhja6ZK9ysbzxRz8AALzl9FPpXONiXoi1oioXhVaRMR85gnRLN8LZe7aIXN0qInRrLMbG9lgy4q2u+8h1rqko4BbpBFH7Kf5fR9S1EZkD3mbdgwHg2pvZ3KpMPYPLyTV3204yVZx8mqRuOeCgg5ErBqg39g4z2QP3PBBui7Dy12P158SotNU40/69HHl9y5YtYdk+HJ1aR6BOtonpBgAeePDB8Hcbm20OPpjc5scmVOgBFqNWObVNVEWZrhTZDZ5zlCWSco0CuzfXPDFjRNIcJsGbm7jFiqOteSzekP4fr9O9erxPOS7935qoIi2aV427VV1bJWYdp6NMz44u3dA5EMNkfN6c4+y5CpwaQo+FiQTVXUf8rjbomRQ5erKnzF3tGXqGVvR/zz33hGV33/VXACKqfv7zjgrLKuwqbMseekiFlmCzka5XxI6zGTKvzRSlrRzxnCOQr5TR8Bt7tE9EIiZobwOGhjaF23xO+VLnvErT09NhWZpNhX29FD5ldETCs4gYVo1NZp5QKnOiU6tGFGHzW8txOPTJ5z+lHRvfnjOhdp91/qZzcj/xeHw0apy0v9lcFvjd2BshYvPd8CXc6cK9HnX4GIE/T+u1zhz6uzI9TdKZvmUkJRhV4+26tYc5M5mDg4ODg4ODw0KwKNd6z1BSPTsTA4Aory4DZkIaDZnBz3CQPC9Ks+hsRmbBI7tITGjUzD3OyS57uml1VVb5a2JxWiXt2EGrhnhUWIVYZB6xF6PltDCcxepZfXNe96DVkeFxMpeUHE12pvx/YYEebdat3c9tJedYqzzjra5rXU7n5qOybrLWzRcAChxGYXqK24hKKRxjxiTBLtf1qtThlceRgLqokq/aVbkNvnf55d8Myz72sY/h4ku+0aK+Tyw8E0F7vD0MIwEAjRo9j0yGmBsdBMw+axs0rL+/PyzLz9Azq9WkPXey88HWHRSwb9PGjVLG7t82yKNm2np6iWUdL09xneS5dvVSX8oxU1uPSlnPMhL+bp+Q/D6NKLs8s4Bat+2gYV1r6Z1qQtQPvY55jNBdaR5Xd7tfTTXnhmcZKPpbpUdDjM/lh/tKmZxj7pUs0ZVRIRss6sys6THIjnEpzixqlP+xdeyIcG7G/h4RQh/0muMBCAtUKUtiXJ/PuXM7uZ7vu3ZdWJYr0n415WhQY3bKhrCwgRkBoFgu7hX5+nwfKOSBnTulDVnysaOD6tvXJwLzCgv6dw3TN6GhKEFp0y0S1YUMUYs4CzYvXqAE2N4sCrEVQxTYQKEq36L9RvnqXJYZstv0uVgw7dkQAIplFV9yFlDPDhOwlyAIPQ2kfuJ8IM971u2g6Xs2hyVaYAiEeSwoU+w+r9t9s5t98ziYTBqUKgu7rmOGHBwcHBwcHJY03GTIwcHBwcHBYUljUWayABSTpK5yQnXYGDBTLKRWUYWTCaKyrJmsUhHKLZPmqLwqr1i9RtRkpUq0ZHtGTAj/+I/HAgB+/f8oQusLX/B8qdg8ZrLZ5HhzZE1Lx82NcuKHOWdUzhTTbB7z1dkl95k1G7SoUysqMayopgZbRC59RLSK0NJKEE7bKi2jANNxBq3qQGgou4SlUHflOQJyQmjJaJLeXTTBglplBo1Z6pnbQaUo5oIG53aycUgAoTunOMnd2884MyzrHdgfu89I9/jDDxrIVfKhqQoADItoSxxdOBlTz4dFt1ZAmk5Kvj5r+tBxhkoFMr+18X6veMUrwjIbj+W/fvtbAM35evwqnf+VrzyBNigR7uQ09dUkxwGbLIqYdWqCzD1+Uva3YuqAzRK6TzWsUwHT6UaZycHxPgKOu6LflxUv22amRZM17mdFpX2t8wH2KcfqckB0lpmsps5VnmUm02Y8+5hrZWt6Uv2G23ig4plYs1iMTXbaTFbjExfy9Cz7usVMZt9TMkG2IpvrDQC6ushcVInReDEyMiJ1iNDdxpIypkZ4fJ2YoL5Xqcm54tEk/BZODXsEAbCsT3KTReMsnM7R/UVV5gEE1LbLJXqe3Z0y7lerLWL1gIX4oXJej3f2vCx816Jn23paRYa2v40d9/WncdY5VZ29ULCtdrd1DWOuzRVX2xx+vtmtrnfPILSJqThNdosyIRreamZH5NZHziOqbhXbLpjnmfT0UH8ZHxdnESvNYXUCYmq8XQwcM+Tg4ODg4OCwpLHo3GQ0A5TDpiZpJdnfzSKmuszgi2VaJUVZ4RiPS/TcAovmzj7r/eG2SILKq6yYTSRkJl7K0XWuuPxbAIAf33hDWPZPL9oXANDJLJVeWRc5sqt1cx5QeZ82PkxiVC1iteLwZQO0qpmYlFVznTNZV3kFftrp4updrNDq8qYbf0wb1ErBCmLTnH9reFii2q5aTdfRUWbzeXaX5VlzKiXsQYSZBRt+IFBL6m9+kwTGSVYreiqxz4knnkTb+oiRs7mxACASp/dZzIswvi3Fqyhenb/tzDPCskw3h0Ao0P65soiHuzl31rf+g+oyrfI9pQ3N2JMsoE4pt3HrUJ9QEWdtri7LilimZa+EanPW7bNV9Fz7/sK8awr2/nReqyOPPJJPT+dfNiBtdXiUVtk2cvJNN90UlgU+sRE//yVlSI9FZYX8tGc+i+rC7vaRNqFgggQ13EJd3ukV37scALB9aBsAoF
+JX5P8Tk977SkAgO4eyeVVmaBz2GDDVciKLR5hFoz/rqn2GDATUlDC0yxHdJ7YQkLjL3zm4rAM08QwZrqo7e2sS3iGMz/0PgDAsoFBroR0zMIk7ZeNNDuBAECe+0JK5Y5Lp1mozsPf1LSsTs877+MARMw+OiohQxIJar/pdhqfKmXp66kUsXMf+tdz+RoyRja481XKwvYkuA4BC29jCWlj1XoV/l6RsY8Q8aSNF1lgbtuh9nS2/hfpJL1jv67YUnbQ8ZTI/2XMjqba5q7lSwXa9rOf/I7PJe3eNzRmllgUb8MhAECtTm3Ijqeekefqc9gIowTUb3vTWQCAsRE6l45enkpSHaZy5PRw44+/G5ZVKrR/Ipnge5d2b8d5G8Vcu4/b8UJ/2yzj2NZGbebVr3613E+t2TIRVbkGb7iBvp12vGn1fYnw/hMTMn63pcjSo5kXq/8OquxAVZP7eeExlFu0M0vfi43K+eOAgw4CAFxz7dV0vZi8y44uGv+2bqYcj/r7nCvyu1MCavuYomxd0M+oXA4W7FTgmCEHBwcHBweHJY1HFXRxaJsEiosGNAMtTNIMsloSdqG7k1iITDv9PzYtK/tT3/IOAEBP/+pw29A4rx7iNFOdVKzMgQfsBwDYxQH4vnrppWHZX++gLPdHHHEEAGBmSoJ27bsvsUYTbGOMKMWD1VhEtZs+r1ymZmgGGonK6qZ/BeU+KRRplXLCa/8lLBsdo/u/9dZf0j0HEgytUrVB3WjW3KvyFo2NDwOQ1QoAdPfQCrKdXde1fXT9+vUAZKWQVAHzxlhb886z3w1A2dsBrOaM5nfvouf3P7ffHpbNTNPz6u+R0Ac51gP9/Jaf07lUZvoKa0dW7LcWALBl5/awLFxtMGPwtS/Ke0pzjqrlGZr5D2/ZFpYlOqiNRNS7sC7FDV6tRRRTuM/ap6IGwF9AMK3HExETCVJoxx//+Mdwm83Z1paid1MtiTYqzwynZdCKeXnvb33zWwAADzwoARyfcfgzAABXfOdKAM0B+E54NWU/t2zE8PBwWPaLX/+Qf9F6J5mWLOFXX/N9AEC2j973qGI4nv7cZwIAbrjlP8NtD22lQI/7H7I/XWdoR1hWzdH7Huwk5uU9p58dlvW0UTsvMXOTygorm99BrFaGdRxeUt57oYu2/WmXjDMpDgfw7YsuobpPS3v8yNupvUeYQTzv618Oy3b5tMr+2HmfoPOU5Lh2FhdlwjxZsoZcsZYYrofv/Wu4zYtwpnge8774xQvDsmKBWKYYB5pNt8vzHh0hjVYmQ208N6Nc65llv/BznwcAtCk3/wqvtsvVucxQlbVBgcpNdvBTD0IN/h7vE56JBDEkMbRL8vUVKzTuZDqoPcbjwkYUZugZ+FV6PtWSZk1p/5NOPj7c0t1H+1/53cvmXPu0U6j9capMXH2VsKUF/14AwAE8bu3YuTks61tGY67tQ/Go1K+rg9j7l77gNeG2oEFsTCJCdW5X77sWsqrU101UvomXrP93AMC6femcuYp8Ey2b08UhMzS7aAOx2sCsets///M/A2jOgfinP/8ZALB1M93jKaecgtm48cYbATQzUDZHnmEhnmapfbaMDG0Vy0ZXlsYQ+3k49sUvDcvSSXpGP77ppwCAzIC4wB/M99HOeQF/cdstYZnNtxfqRyPC2VS470VUwFNbfZ/1qTprff+ydS7oooODg4ODg4PDQuAmQw4ODg4ODg5LGos2k0UAbHlI0Z9MscfYZXC1Enh6LMz6/f/7LwDAf3zjO3Icu89//iv/EW5r66JjTznlrXR8VARrY7uIau5jE9PXvioU6fU/+BAA4A1veD0AIKqEwwGr82w+rEZVhGVW9GYFjgBQyBOFXalZ10flGp4kCvuoF5AwbO26A8Ky4RF6Dj+5mSjBZFGoxFicc7SxKSyTFdPWFIck0GK+DjZf3HvnHQCAG264PixrsKuvFXoPDoj76kknncz3RdSjjfwLiBDOX0amipQS8NZZAJ2KicjulJNPBABkOT/c8Se+Kix7aJjMF9/70XUAgHVPOSgsK7HZpFGi53zFV74elsXZdTbKubGWZcVcOMVu9hEl9KvZd2YFcnF5RqvXHbpXmMk84wUxJHHfX+9T26hKM1Nk5u1QFHoHiwkndhEFfuqpp4ZldW6buk9edx09Y0tf61xZ1jx29NFHAwB6+oTSLgTUriwVPpMTUXFvH/WzKRbAJ1V7vPTy9Xy8CKirEaLfS0z/a4Ei2JTTmyEKfGbnVFj0tje+DQCw72oyVRdKKrJwju61jcNF11VIhaE0nbO+j9zPeRd+BgAwfDeNPVefL6awFRFq7zkWnm725V7P+waZ1Ypc9sl3fSAsW9dBdc5w+3/wwfvDspUr6Bn1D6io+cNk1rVRFD7ykQ+HZXWODD0zQ/ff3p4Ny9Ip+r1ziGw3hxx6eFh26inkhLF6H3pGGzaIyLSvl/p2uSqmFDGT8TgWkbZy0NP2HjNZFO0YVmayBqi9J1JU71pNidWnqc8P9BwGAPCV9vekN5AJdCYv4+n3r/8qAKBSlxxUFokovbfXv4bMZdk2EfRffu3H+Nr0PP2GmK9GxujdPvVpT6W/h3aFZaefSufKJAbDbRd//tsAgGXd9D5e8Yp3hGWFHLWBjm5q7wMD0mcvvuQ8AECpzFKOtPpWcb+3Aurubml79lu1a5fUywqmrYD6+uvlO2HlClY4rR02jjrqqKbz//CHPwzLrJi63KA+Yb+7AJBpo3GsJyv9EvydfP2r6dv7t3tkHPzLHXfSuTg8SLcyuSFF3/bnPIv7ggpT8ctf/wqAOBVVVPR8pOh5OTOZg4ODg4ODg8NjjEW51ltYpgMA+lkUVWNX1B07RRTbxgKq5x75HADAs488Oiyr+jSj3rRDxJtegmaCn/3MRQCAd79X3O4POoRWDbuGSOAWjUvV38DisF/ddmvT9QAgbYWZvJoNPBEVW+H0rmERqnV0W0EYTTfTbbKqf+GLX0b31U5izl3jItROZ0n0NpWjWf2ypmBSLKrk2f3YmMy2Ozutu6LMSx++jzJe/+IX5BZdr2tXXJq5f+QjH6E67JK69y+jd7FzJz2jnh6Zifdxzqlxj2bk0xNS9/Y4vYtSSVbU1/+AGImpAon0Rgrynp59zPMAAK99Ez33U894i9wpB4Xr5qBzNe0eyu7U8bgNDyACecPLbZ0TyONgXSrsHfY+GBhEUFb5ppJ8L/Z9aybFiuFt8EUbVBEQV/y6EkImOfCkXTVqF167grS5efR1Mhli9PLM1K1bs39YtosD+3VxO85XRcQ9MUxt881nny7n6idmY/soCeVvvvlm2X+M2tHQEK3c+zMDYVkPu8RWOfiiZbcAoCNF9UuziHlGBQsssuB8ZFRWwX++n1acT1+1BgCQy8mqvsp9vMGC+7gK2TDFrriG3YF7B2R1PzlMdU9z33vqU58als3kiEnVK/EMs7dnnvEmAMDl3xZWu8Tskg0UWi7J/dicjJOcr0/nSurhTO07dpAofZ81kpusmKdzGEXee3MC2Ok+4eHR5zV87BDAwIfXxExbt+d6g96HZ
oYCvzlw4QP3ifNJboYDfkKsBFF2colYplg9E+Nzf+HcZPZ4AGjUaFujQc8slRb2zjrabNpEzgJW/AsA//E1YhfbYtJ2shx6hJsXrrlKLBVnnU2Wiolx+hY+9KCwfVl2IojaR6PS4lm2w44lRRWUdmqKnonOc9jG3ybLApVVXsd6nZ6vNZLospUryWnJBn5VQ3R4zYAF1AcccGBYNjlOjNfYmLjbZ5JUh7/9jVjVQw46NCxrMGvc3U33vG2TOERY56CuNuobQ+MqFyI7/mTYClRWDhvmceRvHDPk4ODg4ODgsKSx6KCLngcUlft8hW3lnTxTTnXJbLvCq6OREZr1zRSElUl3kT28u0tmuqOTtGre/0Aq04GVHn6IZtc2HcGXVcbyU0+lVe/hHEyut09ceIeH6No2iGA6JeccHbfpJCQw1xRrcVJpqteLXvxPYdntvyP36WP+ieowPCrMxgCnF0kKQYY1AAAgAElEQVTwcVFfVjd1XrFG2GUw1SarjgizJBPKvfmaH1wDACgW7SpKVplpXp1u3UGrjlRKBS5kZifL76CsjvNY+9TbTbPtbYqVGRyk57V1o9j4c+NUPrCSymLtsjIb5mCaM5xKZcXqVWHZgw9RoKxvf4NcwSe3iLv3zT8mW/C7TiZN2PSMsvnHw2Dvso3ZENNC1rY3ckQLgTePRM9q3SpqqWZdYr/1LQo2Wi4Kk1SwQSgt81IU5qXM7EoyQW1hZkLKsglqo+Osc1v/dQl/cP5HyQW9EpFV6SSHXnjKukMAAKnjRGP3058SSzQ1wmExYlLmcXiFzRwocXmfuAWDmYGAgyAm49K+Erzy/8KXvhBu2/8Q0qVNbGJWq0v0ZqkqM2r8d1K56Zc5qF1bmoa6M99xVlj2zU/R+StlqvtMTvpENELHFQuyrbuLWK/BFfT/qaeJu/IA6/ysRuMd73xXWDY8QuzSAfvTqrkxLu/CMoUDy2nM275NVsFdHZbZnbtmDTNR6G1z9tpzMIigocQ/jRq3pwi132yH6GGSMWqjI9yGLrnkq2FZNMJal4iMZTbYYM8yla+FMT5CrEU8xsc1RCvzTzxu3/47Cn+ydfvfwrKVK+m7sHYtvYct7JIOAKtWUFiXsWHV94pUn9wUvZuUCn45MUrjWluGxvlbbvxVWLbpYWJQeno5wHBB7su2BasBWrFC9KCbuT4vOuYl4bZMhliZNg6bkc1IChjLMll3fR2wtlyib3aCwxu8+lWvDcuuuYa+Pcks3c+994oGaDnrU+sqZMuJp74BANDVQf1xeLukk/n6ZTRmvfud7wQArDrg4LAM3Ed3cHtPdci48crjKIzCf950A9+LCoD5OAYVdcyQg4ODg4ODw5KGmww5ODg4ODg4LGksOmu97wM6KazVI5crRHF6KieKFZJ2dhItn2oXimuahVqZDjFpVTkD8CuOIxdxXwnwBpiaPpldvjlIJwDge1d9EgBwysl03M6dQjWnOCqtb/2zE8q9l8WO2ayY6qamicI+653k0vn968Tt8OGNm+g5cN6agcHlYVmJaf9Kw7rky73GOau7dbtNJGUOWqrQc+hdJtE5x9lc2NtLVLLniSnsXe8hytEKpweWi2DVRrq1AsFSUSjYlSvJxXTLZjKvDXZLCISR7WRK3He5mLtsZOxN95GgMO/JuXr2p/3e/34yBWzcJc+7f4CeSUhlKzr8bW8h89iXLqC8Uh9WAvkRZeKZAyuqnif78Z5CgAANNJpcV61rfd7mY1Ku9Tbr8tgIvb8mQTRT2VoIbQW8r3kNRb/V+cc6O4kWt+JqnQvtJz/7BQCgj91Z779X3MbbOWp5N4sXz//QeWGZz3L14oyI6btZjD21g9pXf0aE+aPbqK2m4mRmGNougv4qC4WfcsjTAQAzY+J2H2WjTr1o6XsZVJLsPr5rTEzHEY482830/7QScWYb1McDdtedGpe6p1Jkckil23kfEZKmOYt8o0wmbe36G/E4n6DK21ZiF/edQyQkX7tWzH6jYyQgt1GAr7zy22GZjVh/5jveCwDoU9nc27LUNnIsLu9UUeAh1ognHQI0oCKcwERtNG0a72oq910QtIVHAcD990tbTXE4k4RyMMnyMyuW2N0+0GU0rlnvap0HbqCfxiYrHM5kRa4wNU1tezpP9WrPSF8aGiaJRntKsiUUWdC8al8at4875kypQwfV2Wf39FJRzN5rVq3m+6exwfekfgc+5SkAgJ1sEtu+dWtYtnZ/koIMLJPvpQ2XYseLqnLisE46dlwaHBTx9zhHtrbRpf26NDR7/lyV+tDaNfuGZR5nAjAZed7j7EAR89gZaVTMZO//MDn5fPT9NM5fcKFEbB95mKLsd3DmAa0fmGFHgyq/u759JDzC2IwIrR9r7H1fFwcHBwcHBweHJxCLd603gFF5qgxTQwEHSKo3ZHZarbJwmN25q8qlNJGg2b1lIADgvR+goFgpFjv7yp1y+w6aJZ/G+Zu+8bWvhWUvecmxAIDvXX0tAKBWlTq87oQTqF7MWI2pPF/LmcXYpkSLq1aTa+uvf02BIleultXfCOdOKzCr5Sv3eS+e5PunewxUMDTrAm2D5BWLIspMJGkJMzIiz8HO2ANeDc/MCGvyifPoGXUy42KDLwKyijrrLGKPurqF8bIMQ1ea7rmkjsvEaIW8faOsRFYeuAYAkMxRnctJudeT+R2UYnSP/b3CMiVZlPdOFqr+8OtXhWWFaVptDPTSaqpekPckLpN6fh7M+n/vm7sbGMRMrMm13opo7XvUucm2bydWoYtzk11xxRVh2Ws5iJrOMD2wjJ6tXc2+9KWS+8cKLRPMCN2ggq7lc8RGTNSJjRnsldVVnoNB2txhQ6Oy2prI0aqxf40wlfkytb9lWdq2YfvmsKyXA2fazN7/yu0TAGpF6v/bt7ITgyf3FTXUTiLcrBoypCDgP9auXRtuu/2euwAA69aRCHlfVWY20Sq4xgEckylxiLB5nh7gYIbPWyMizhEOQTGQpeO0u34hT+NENiOM1fr1FPLDjk85LbiO2rbJ2b4nxTmAuz8uuuhzAIB4XJxMPvqxCwAA+QKvgvuF6fVVBvAnFwKQi7+MGTacRoWD4JaUE06jQm2ho4PeqWVPAaDErJrvy7OIsbNFudicmR0A0sz823xbmmXN52n8sUxqVLFNUe5Dk9PE+CxbLf0l0kPva8tDEmahp4MC7h57DOWnzCSFsRndRe2qq4fG1X95jWSTv/IqCmra1k71nFKOLNbJZRkHRdW5xu6/626qpxIT11jIbNkfTwUIXreWGB0ryh5WQST7epn9sXkSVXDHcWZc61E6dyMqzz3CuSU9I/04zvUp5Ik1PfBA6V+FUbr2BZ8lRujzF3w6LPvAeR8HAASc76y7X0w9pV00Xtrv5eROqbsORfBYY+/7ujg4ODg4ODg4PIFYvGt9BKhUxd4bYUbIutYHOrs5uw0aQzNyLyI22gavGnp7ZQV66XoKbnXiyRTGv39QNCw2A3uC9Tf1urL9t9G1jzvuODq3KvvBD34AADj5jW+k+mKuHkPbU63r/l/vppn4ZuViObhS
7KcA0Nkts9nRcVpR2EBYcV9WlFu30jkOeyqtakdGpH4248g3vyVBu2ZydK5kkl6PTtXxlrcQK2PdKT/7WbHDWg3U15g1e9ObJBhiZydnzJ6kWXdfp6y+8ryKzabk/fi8Yimy7gVGNDGXXUqrmzd9kHRVk8rdu8bMmF3d2czKANAWJabqxH+h0O3XXnl1WHbsSSfikbH3ztkDBKgFtSbNkF2VWndW3ebsc6lVqA3YIJoAcNtttwEAjj322HDbpk2kU+vnAIbZrLAK9vxFfv5aM5T26F2WWZOzsl/0MF6F6rd9MzE23X0qXUgHHZcvy8rdrvA91qIN9qjAhRxeIhalNuqpIaVeoz7bwexRVDG25RmqVw8H/IR6DkEj33TvAPD0p5PuaPh+0rwN7RA2a22Er83pZBpKtzg9SczYCtb3Pfjgg2HZmjVrAAC1aWKG4yrVSSRqWW3pq2eccQYAIJuxTHde7U/t/v2sj9BhQSYnSUfR0UXb6orlqLJeo6OLmELLIgNA7NHFxN3jMDCIwkNFvW8/Su87FiN2wWo5AaDAjJ5lbCyDQxtp/4hKUVPgNDKZjrlBFwszOT4XPeOGohwT7Wxx4P6ZiMpxcQ7tEOdgvtPTokkr51mfOSABMQtT1Odu+jF9X176wteFZQP8PZmapvfe2yN91rK59Trdo84KX+e2MMJBUdtUCJbVq0lrpPuEzWBv2R+domLDBgpxYtu4dq23LLNlTfXztkxaknWFlrnR9Yuq8dhGAbHu76W8WDHst9BaAN7JLvYAcNEnSedrs2QMK/YnlqRztbdxiBiVjqYI+dY81th7vzIODg4ODg4ODk8A3GTIwcHBwcHBYUljUTxsBEBX3cOqiIiKKw2ioWfGieZKZlSEyDTRY3mfcwAp2i/gnESjYxvCbSuXU/6x71xGQsOz3v6JsKy3gyjKap6o7HNO+2JYdtUPJOcZABiVtfaNJxJ9WWbXVS8q9Gy6g2i8F/7TCeG2n95M+c0m2Ow1uFxMY0M7iY5MclTUeE3uNcLmiDYW+5ZVvqe1+9Lz2r6TcrM0Rchl13rr0g8AfX2WZiWq9pz3iQv6LhbnpVJEq5/7bx8Nyz7zGcrsHeEcYP/x9fVh2dlnU+bldMy6l0pusmiK6lOoCqU8EVLRHNU0KvcaKxFVeeX55CL/7nPeHZY1OOp4W5bOee45ks358m+Tu/HDU9RWhvuVO32MI6VWhBI1LCRuRGjOXlPZixvA4xiLdLFoYLokbuNtLGS3Qv72tLT7AouR0xypWUfXLrGJJKPyDwUsirR3PjEp1+nrX9Z03LHHSzu+6Rf/CwAo5ulZ7xiR41IcCT2SoP9jbWIe2jRBIvrYgFD7k5xRvsbi1E9/Rfpe+zKi+d/1VqLAy0Wh3JczzV1g2n96QO6rWmY6nk1vUWWyqPH7rqulWpWV1mUesb5/i4QYOOe1pwIAdnGbHY+JeaaYpjYdBFSvjk4xJU6VqT/3ePQuxnaJW7DNrVdvSF9N83u0GeNnip7an+71i5d8n849JWaWz32O+mWdIzLnclJ24YUkOL/oIhJnjysHD8RJJpDuUlHm2SQaidO1bU40AJR/cc8mrAdAuckaiCEVl/dtPGr3+RKZ5GOeiKv7OWPA9o3URiPqFnwbhV6JsQ2bxYxvx3mVm6zBOQ597l++tO1GhYTCdvytBfLtqXF4ha4sibjL0yLf8LjRVaIyXsXaqZ8UG38GAPzstx8Jy17y3PcBAAa7n0v3UJI296LnnwQAuP1PPwIADOUkwrM1W8UT1M4mJqTPxtlB5+CDRaD88MOS8wyQfGQA0NPT17St0VDjN5vTE/wc8gXlhJOm60yV6drtaTGhR9ikXZ5RMpkEmc6KUyy1yMi3t9Kgdp5g83OjLCbg976fvgs333ojACCoy3GTM2TG86wJvS7fJS9K5yiovG3dLAif4KwJEdWA2rMp5POy73xwzJCDg4ODg4PDksaiBdQxGEznZPVSqtGKK5lm98hOWcVM5mg22psh8WdJuR/HE7R/Ugm0imX6HYnSTPLqa68Jy844418BAB4L3bJqtW0z8lpXPJ3ZdudOElquW0tBq8YmRND7iuNeDgD4y5/vDrftGqXZbHs7rfQmJ4VBCV0ymXnS4mArbC3m6Hl0p2U1YAVx3T2cJTwvx1k3XRtAD1A5atpTTdcFREhr66WD9lkBte/PzXBuBb4Fzv9jWQUAGBqh4GWdSugXC+hcht3nh8dF4Jbl/QwLgz/3xc+HZR/9OK10R8ZtAEiZ1VsRn72HV79acuL8/Ec/BwAc88IXh9usuDjHrukl5aKezUYwnd87XI8DBOE9AcAMr1DsCqxXua5uYhdvP0IsQZvKlXf88ZSTR69QfvrTnwEQYe0rTzg+LLPntwHWIkpA/fxnUZ6+239FeZESgTyrDK8Ix3LUhqaHRIx84H7ERvzhYekTnWvp3ob4OiklAM6y4DIo0GqxOylOBYaFk708NowpcXAvv9vaGK22q2rlH2FX9/32Eff5//rLXwAA/7gvZZa3ATwB4NqvXgEAeOXJJMzvzcjY0Ma5+6ybckGxMo0qvYORUVoFL18hbu2WOQ1U5MNJDoa6zxpyu+6OyXu1ruJ2ta3F7M95znMAAHfdReEBiiqkhH2U93Nw04MOOkiuN8VhS8rKKYWfYTIi4QP2NkS9CDqTnaGwHwAiPrOkHIC00ZD3vXEj9Yk4yF09kZRnV61YF3kZA9vaeFzI2fAF0mMy7X1N+0ciUlav0buplGlbz3Lps5s2UR+oJKgtdCTFSaDKvGwkLnUwLLCfyef4b3mnv/sz9bnTT6BxcWxYvpf9y8hKMDlCz6ZzcO64b/Oq6Qz1mU5qa3/hfgAAa9as43vlPqi+EzYnmXUSOviQQ8IyG7rDlmmnDCuujmfZXV9FzpzhQJMJI+8nlrCBi+kc2pHEhhvJcwDXHuWolJvkHHL8zapB2vjAAPdDFttv2y7Z7lceMsjXkW+bFZV/5zsUxiUWlTrMzJQWbEFwzJCDg4ODg4PDkoabDDk4ODg4ODgsaSzKTOYDKKKBiici1zY2i0U5wuh0XsX4mKa5VpXjSOQLMveKMZ2cSgutnmf6ss4RLmuKor5/418BAKtXkbmrkBsKy7o62LTCwrqdw1J26D8cBQDY+jcSqgW+CLWWceTkZz7zWeG2TAfFRKmUiZ6NxkRw7UWozh1M9VZV7qAqU+42hs5NP/z3sKyvj6hbK4i2sZIAIMbPTQsnLQ1sqX1tqluxggTQlhrVcUls/IwYJ4+z+cEAyWPTzUL0ksoN1MaxZSYLEg01w4LQcTaJxjtUDCIWu9bYtJGIi8guX+bYOmzO7O+VPEw2JkUuV5hzXzbWhY6+bLd1s8AypoRwMzMN7B1GMsLUhNyLbeUxprutGQAAejnOUIKjMb/g+S8My/q4zNLXGjZuzS9+8Ytw2zHHHAMAWLmKzDbabLuS84cV2bypc9/5eXrPHTbmS7eY6u69i2j45z7vGeG2Ozb+DQDwo+vIbD2ocq295pWvonOwOHh6eHtYlqnTPXZ0UF2SUZXUsNpMXnsqFlP
co99bN0pMlXUrKc5Kntvxjm3bwrKXv5zM3ddzBO5nnnhcWDa8nWIIWXPZofvuF5bZ+E82v59RhPrMNPWdQw4Rs1VHls4xNUV9Qou+bZ+28aO0aN6+45e+mOJHvec97wnL2vvpWT78IIl5/3qnmCf/+YQ3AaA8XxbWrGBj0BglFm0EDQR7gVuB7/solfLwfRkgk2w2tLFwYjEVZX2AxuGEIVNQVcWVCbg3KZ+Y0AQkcXW8OWUet6EA8j3yG1SHJOfRm5yQiPvLB8l8FdRp3KvmxVnDPvNqVfpXpTTDZdTH29Niatr8ADm5bNtKbS+ZkP6Saac+7hkrgZAx2t5PH+cHm56S6/2N496tWyexjiY5FpyVh9RUfC0rxl7G59qs4hNZ+UQ7m5P1OBzGHuJMEjMqR2Ga49BFG9LmpriOkTpt27hhc1jWsYYdrXi8L+XkfjJW7sHmaBOTcxo+/yjnADzw4APDsp3T1O+1CXH//cnJ6c1vfjMAkcQAwOjIJH50w4+xEDhmyMHBwcHBwWFJY9FZ6+sA2jplhVdlt7exMZr1VWsqQ+86Em0ND9Nqrq//gLAsx7l4JqeE2fBY+ORxFNCZkjAVBx1GYkqPXTIj7bICmpqk2atlUlYul+zC9/2BXIy7Omll3KdEc2ARX7ZdZpk+C7rtakDngpngmXghV9SH0zmyJIRLMitTVYxNWz9de2aaZrzxmHI/ZFGwjZALAEVedVghdHensGfDQ8R6WXGxzl9TYbf2FIvT41Gp+/IBYmiGdtCqtq6EzesOpJn1yBbJpzQ9whFSV1Ddq4GsOsZ4hu+xi7YWi04wm7N65Sq+Z1lZWPdOw2Leg/d/Sli24gS6/x/96EfhtreeSZmghzl7uadEo54HqNRwexQGQF+3RPTeNUQrwzaOsqvF8TV2L2WSEcmU9CXLhOncTD5HUy/z+7KrQAC45ZZbAIhAd2C5CBSHJ2hVNbCCVttW/AsA0YDbDEeLjylWpqOLVn9/uON34barf0LvZIQF1x/8tw+HZXGf1lMre4hdyW+TrPWDabp2fpRXf2m5L8Nt2wouAy10ZaH9cvVMTSetvEtT9GxXr5D+gp3UL1/NoQW+9fOfhkVdzAjZVffkLqlfwNTu8AS1e7sqBoBsltiDXbuEZbbPy44N8bjU2eYiS7NY3Kg8UTY6r422a92eAWCU8zcddRSFB9Hv97rrrqNtSlDcliGW4cCDaSxNt4uQdG9BAB/VoII2FTk/nqR2nyvQ89c5FbPszj0+TCLyWFJFOGarQkONP9M5YlN6llnWXtrv+C5qaw3QuBqN62RWVIdag9qXrxg3jpaARpXH6KgwPQ0bXqEhTFciSn0tkaS+VKuJBWHVSuoLWe5LpYKw8JNcd3D4h5oahwvc7scmeIxWrvKHPJUcB3JFFeE5S23BCpV1BPVpFnZbB5b2DrmfHWw5SSYTfB25L4+Z+VKOntXggDgVlAtUvynFJC3ro/IyR572IO0+t5Ouk2lr5zoJM5Tgd55mtrWsvpelEp3rC1+gcBOf+OR5chwzsNbSAQBxFlPbPqsdlK6+5ioUi8qEMw8cM+Tg4ODg4OCwpLHo5DcNA5QaMqsPOGdVRw/pAsplmaWXyrSiecfZFBgwEpHZ89EvJPt5Qs1m3/QmygA8xczLRE60E8PjZPNMpmlm+JVLvxyWpQJabXTzSrJcErZk1QpiiXxelQ3tkOzw115LWe5tZmQAiHHAq2SKZt2jozID7ewm++trX0OBHE1EHp/PrMe1136PzuM9FJZtY33DctYmjE/I6rTCQQZPPFFyc13CQe2sdkhnXraMUJgZXa0yLQMxOkrnt66XADA8TPe9YhVpJjxls79/A+mpzj1PAofZQHxfuowCN07lZXXjtbM+IrRRywqmq6+X60wrhPPPvyAsq5Vo/5UraVW/6T55Rk9ZQwE3CwVZ+WzZQi6VpSrnsVsmrF4kAtT2vDyC8vUBTVnrrYurXZXVS7LqifDKya7GtDtsg1lW/Qysfd9mu2/LCItp28LyAVqlRhUTWO2k824tcl6sTp13i845NkLnLClX+fRq6kM9DWEoCkkqf/+/ngsAyJWkLawYoHf56z/+NwDgoBVrwrKPfZmyVJ/2+jdQnYpKf8c6sxjrbgo1eUYN/v3hd58TbnvHhynw6GGD1KZ3bBa9x75Z6gMTvKJ87rOOCMt+fS8FxUswY3v+++ScZobao+2XNVUH65qt9ToTE7Zfrea/hUm17sDWFdm2XUBySNl3rUNE1LkR93T3cZnUweZJ1HWos/Cuj3PNlVX+LxMGu9izCIgbCsc2AAgiHFCUma2ocn/u7aT2m2KN3Xe++42w7MTXnwIA8H1pc51d1AfqdWs5MKqM2m+1Rtf2AulLN/zsUgBAobIZAJDtFBYjyu7iG7YSq3DGqSeHZX3dpEH51rc+FW5Lxlg/NkbniqncjTX+njR81kGm5Dl87wc0tsdSHHpEseo2e7zVIU2qjPbTzMZcc42Emzn11NPp7vn7oEMZfP7z5NYfMkMqV97atWRlqXDA35tukgCmQ2x5iDBrpIN4WpZfM5vfuPxbAIBXHUesbFta+rhlqmyQ0gHl3j/8EH1zkjyGjapv4p133wkAiLGm6ay3ShiNz62ngMyWnQUQ9o4c5zvMVWV+UirVQ63q7uCYIQcHBwcHB4clDTcZcnBwcHBwcFjSWFwEag+Ip4BqQ8xKltbtsDScEXq9wcmFCjkyCfT2CcV/7z0klnvwYTGVFJh+9yJ0/r/+7Y6wrM50Z5SjgHZ2S9W9SXbJZxFWrSLUsXVZj7Jbb/+AuHpbGjKuRItW3Fyt0PWSCblOe5rFZUUSwaWS4j67c5hMejGO2JxUwutahcMOsKu7zkeV5PwrfZzjCQBWLifx8QMPEJX4hc9LhOcPfvCDAMSlfsvmzWFZMU80YYLp5lPeKFSvpQqHhynSas0XQWJPLwlHragNABoeVfK1J1KU6HhW6M8vfeUSuv92uv93nvWusCzF9z2yi2jgFcsGVRnd67YNZELoyohgdcPDDwMATjrpjeG2P99FdOkhhx1Ox6ekDrXa3pGbzINBOprCjm3iUt7P7qwx9gfepcImrGWTiRdQmXaHtabPfVZK7r8dO8g9d82aNQCaQynMsOnSmtx0tO9KH52/0k59Y8qT6yTYjdX0k6gyFxUeeahA7+2CL18YbpuoUV9Y/93LAQCjKu8WpwzDun5qs9f/5MawrI3NyN4g9ftMRUWNz1FbLdU4z5Sn8gmxqDrZKwLqZxxIYvudPG709UjbKTE93tNN22Ya0mdTbEKYGiYaPqYi3rfFqD1u3EBjUH+/0P8dWamrxSfPo3ZvBZrxuFznM58lE4oVdq5cIePMjq3U56ybc70idfjmNyhf39QEjUVazJrtJtNbQ/XVGRbZW7NsUeUmC/YKx3oyWkVNgExWTDNTOTLTFyscqTsufTmXo20TE9TOupXYN8M5DmcK0kbr7FRQqdl7l7aTiJIZznB7z7SJOaVYvx8A0DAcnkFJJtrTZII/5NB/pOMDqUN+itrj644/N9yWylIbuPrar9
B1MsIrnHDa26h+HFqgu0fqPl2iMT0eoX42MSrXsc4EHR0R/l++l3V2TFq/XvJN2sjzNvq9zjjw2c9+FoCY3LU5/kUvolAPPpvHzz1X7ssK+GMseh7aIdHpv3gRmfgmR8U8nOVMDRUexxJR7VxF2wxfe+Rv94ZlK9fQeDHBYSoynRJ+YIb7V22Gyr74ZcmFODZDZn/97Y2wyaydnQl8/9FxPI4ZcnBwcHBwcFjSMHrGuDt4xgQJD3jggVvDbfEozV7jMVqVTU/KyqZapVnmhRfRbHbjBlk9pzgo2eSMBJ1avopXZoZZjx0PhGWHP4PEVwccSKvmI4/6h7Ds6byCtKuyFcysAECJXc9LRTqnzTkGAMPsZrtunWSm3zVC57AC4PY2mbFu2kKr9Le9jVy+faXMauPzWnfYoC4CSjvj38lBpHrUqjbGTNSEcn0ucubvH/+YVtmaDfjwh8mt+cEHaYV8663yLoaGOMBeL7FMZ7JrOiDBsez9JNtlZVbgldZpZ7453FZjt/k6Z6uvR2X1lc7QqmGU2Z81q4XJGNpM93jl10hY55WErahN0wp+VT+tmutV5S7Lrpndyp16guscWIYhIquOw484AtP5OuoNf49qRqNeJMjG2nHrL24Jt9mVWpTbkBXvAtJGfWYAurKS3+rFL3oRgOZVnF1B3/Jzyt2mxdUnnECiRSugtIH4AODSX1I7HNpB76NfsSzVIq3Ee5fRtg1bJXv3VddRfp9qQv5LR2sAABmiSURBVNZJFY/qajiPns3HBAAlzmAdYTfgVd0icj/zVAoamE1Qe+kKhCUsbOd8fQExiVEliCx30D3/aWRzuC3ZS84B3/wCs5JTwk5/+kPkoDHMq8yrbrs5LPv5H28HAHz3yu9QHVR27GCEg092UPvSeZXsqntsXDLZX389PVPLrurgmKtW0Zhz3nnnAWhepdvciaeffjoA4MorvxuW2XEpzgyxZj/H85YpDzeFWesTHJJBC6ifceQzUIePINizqeuNMUEEUczMSFiC6TyNhz6oT/f2Cgu3fQu1p5WDTwMATI5pl3dqVyed/KpwW3cftc0rv3uZvWJYdtrJxFJbPe7V35WAe6bjjwCAFf00/u4YkXafjBFrV5mhsem0f/lMWBYDB7pV+f2KVXKKiSXofgoqYO2yPgqCWinTOP7Vr0sA3jWHURsr5oklSbc/OywrF2h/O1ZHFctSKdNYGVNjyaWXfhUAcNttt/H+wlR++9vEOFrh9FuVCLnAWep/+EP+VunxhpmhSWZudXtsYzbmk584L9z2iX+jXJRVDhny8pceG5bZoKY38Xesc0DyYR75dAoVMM599qHNEpi2UKD7r7MFyotKB/Bj9A5svwGAMrNS2Uwn34/s39e7FkGABfUJxww5ODg4ODg4LGksihmKGBO0R4AHH7g93Obzyq5c5GCFMWFe+gfIhe+uu4jh+cIXvhSWTfLsV9sKc3liico1mrle+Z2vh2X5Ms28DQc+++3tvw7LOqu0Wj78cNKWLF++MiwrcHoImwk4r1zEVzIj9OC994Xbutg104aBr9dlNZBM0GrvlFNOo3tQgcPsc/z+978PAEjFxZ3SMj02mF6xJIEIk0mbOkN0GKsPJFfOLawZuuyyy8IyO9O3wRa12/3ZZ9OqyLodToxLcCzLFlV5ZRlNiU4qxy6wqS6x8Z/0ltNpWzcxf1NFCWQV5VWsXYl88qPnhWVrOeDl+Fayxw90ClPgcQbqOms8EsoVvByn+2jS0LCLqc9LYx3Qc9+DDkcdC5vxP57wjAkSiOKyS+QdHX00BdCzgQW1K7UNQGbZvrhazVn33te97nXhtgJrRGJReuYN9Xws+2ADVWr3/pEoPePBftKdjI0Ii2HjG6ZSdL31X/9qWDY8RYxIvEMxG4auGWV3Wx34rTND/d1jZqgyJWXnnvMB+sHu45GqMFfZgO47zZoh/d4bnDF7xBP2p62PGLSxLcQ2fOtCGUsazE51csqRjTPipvuJL5IrbjsHIi3vkLK+ONU9iNDxk1PCzlo354hKd2FZvQsuOB9Ac9+zv21wx/EWfe9zn6MgcnaVDwgjNDpC19YZxEusAUu3CWNltRlxDsRYUu7rhz/7cJTqZTT8PcuWGmMCA2B0TMIf1Hy6v1Sa3rfVCQHA+q9cCQB411kU2iMWEbY0ymywFxUm4GWveD6dyzanQNb0JU759LOfUtBQvy7jnIkRmz7N+iUdbiLw6Tq5SWr3y7slQPCxLz6D6y7jlTH8fiPUP0sleQ83/5jY+qlJuse2rHxj69jB/xPjmCtIP7P92QZbTKdkPLaWimJRrmN1M5/+NIWw+N3vJFCqbUe2X1mmEwBuvOkGAMA+7GL/8INigbHhWdraqc3aMC2AjFVtKenHn/z4JwAA//5J6hMVNda96IUvbDpOa9/KnHLlLk7/oy1ExTKNIZ1dnBqlrlJO8avWLFiRx1LLouuwDV2dqxwz5ODg4ODg4OCwELjJkIODg4ODg8OSxqLMZDFjgs4IcNedQsdZk4k1IeVyKlKxzTXGWejrSnCc4XxgmnL/T6bvjj/+lbS/cuGPcnZ3GxlWi4qzETnHXMye78nfQcu5IG9jVi0wLfYJ5ptDcpZlVNW22c9YhcQME2z5c8vNAt5NE/v3yPdqEef8OtZ1HgA4AgKUthRV9pmu8TaVqBgNfia2ehFVhyjnEoqze3OiLnWIc2bjqG+PV7mdDLUbHVm0ynRxwFR5oISSBx92OMp1oOHvHWayu/8s2catqP1FLIjWpo86ZzUfGSH62eZwA4Aim8QintDxEX7WVjgdjchLSrLp11LG2kymTTgAYIK5bS7gF9jwpKxh7HuXbRWf6hzj0AtVZdLy+Dr2vS9TAvgH76Fs9/vtQ3T8mHIpT7GfRbJu6yfVKzMDXlC5v6p82zYKQEqqgBhf2za1ojw+1NjMFWlxXJp/16P83Fomu1vottnw5v4O27t6N+G2uX01z+LouArv0eDx2uYF1FnrD336oaj4NfjBnjWTeZ4JYlGDzVvE/BJP0jMrV8gckk6LCahconv64kUUeTriSdn730em1sBI267VyWSfTFGj0BYQG1UlYrL8v5h0DNjE5PH3whO5Agw3RJ+j/vsq51vAY5KRbw48MoF5bEKGL+Y4NLi/+1YCotqLR3UwhupQV+Y/tRNft0Ubatl2CBdeKOEw7PfxE58gM1ZT/jHuS21tdI82fAcAXH45hc8wHkepV+a1lctJSP6+974v3JabIvnKxRdfDAAYUFkCjns5RVDv7aMxwfelDtb8bPPXbdwkYvZ91pLUosrtv1qTd19p0PNOp+W9BsbODawpUfZfu+Zg1OuAv4DvhGOGHBwcHBwcHJY0Fs8MAbjjjt+H2xo8U7Or37oSSdmyeIKWag3FDFkBna9WsJ0dJJy74UbKlfLGk04Ny7Zto9mrzeGj3dpjEZm98m3Ncxe7mf/xbFvmkYtlhrgGyr1vzkqyifFpwQwtJh17y9XDI/0NJNmNXselssyAInFQs2yRZ9kDdcnw+dJGRSIg6tMqVhgizRpxGe8fUZXIt5NYN
JmUFVmBA3oWmE2pN+RCBx12xF4joI7Dw6aHNoXbrADS5vzRLM0/PJvyZh188MEAgL/9TcT7vV3U/stKJJlJ0+rS9lMbpBMAKswE2bJAif077co2zLGlwO3L5/816+dzW2hiDvloy85q934r6J8a55ABFanDwUdQ+IsH//AH2rBCWKNEvfn/iKpgjfN7VVQXsm3T4/Ye1d2F/7f3UYt4c46z+ycVM5TgqtZars5b9ME5/bJVP51vvFgIMyTHR1kkrbPWW9f6UqXINZA6HPbMw/Ya13oDYNPme8JtmQ4bGI+Yl80qWGw8xm2cRcxdHZIp3YYqSCSlMbznvWcBANo4R2K1qrKSs0NPe5raWm5GWALPMjaWEYqIU0i4jRmipiHaskZeWW0roQm+sFloUD/2aiwED5JqR3ti7meJHZiLR8cM6XHG5sj76Ecp7MQFF0iOSGtxiXA/0TnNbKDHCXaEWM5sEADs2Ep11cFduzttPjV6F5Nj4oRwww1k6XnzmynEhmZ4rIXH5gPsVuFmCiygtrntEiooMnhciiiGPM9MkCW/bJ5QANh330McM+Tg4ODg4ODgsBC4yZCDg4ODg4PDksaicpMFABoAOto61TbipqzmMxIIZVmr2zgZRKEVFB3Xt4zoUh07Jp1iE1WN6LGxUckds2KQ6DrP0LnGxyUuj0nPoqvnNWM15inT5d6svxcJXyJ3zhVC6/oGLfYxLfazmHVvi2TE62xLCJSZ0Zoqm1K6sLnE6se9pssETf9rITQ4tg7YpKXzxDRsGV+6oUS9NkpvpSJ2DLY2Ic2xeXQEas8sTF/+eMPAIGZiYQ48AEhzzrZjX/ZSAEBD5cP65S9/CQBo5/xLmQ6JsxVLEZ2unQqqAT0PG/w2DhWpmUWSMRZQJ1Iqt1yenz//3WSY5XcZ8HtrKPOP3earZl9jWj3Kgu22tPT/MY5C3sl0ebRD6OvNLKDuGaC+u8sX04K1wllmX5u9rAm8yTUgbEds0oWG13Sv2oQedi9r5226Dv1fD1r1s3CvOeeaX0Bty7wW2+x55uuzsu/MBEkJjDJZ1mzsITY3xWJKLb4XwYsC7Vlp2+McEjrFwun+AXEcMKB7iMeo32zcKNH7P/JxirjvqwZ5/r9/HIDEb/vMZyRadJr7wK6xnU3nBoCU/UjZb5Tfahy2in65ngnYGSZQ5tTZbcbXpjraz2eRdPO+ze8+8OdpSy3HN/OI5a3MZDYGkTU3AsC73/NOAMA++6yec5x1wohGeWxoyHjc1kZtTju52HaeY+ePbJeMDce+/GUAgPM/RSa6M844IywbHOQo/Qk618atEpOqv58iVRdZJhFRkbhrPObpPl5js1gmQyY+z8g79xZB9zhmyMHBwcHBwWFJY1EC6qgxQTuArSrTfIwzYE9N08y/XBN3xQavBHv7SBw1Pi7RLO3qOdMuwqmZaZpJl9gF99Zbfh2WHfU8iurbv4xWFL6v6h2RaK9PCBYicPbnZr2e103XtFiltL54858tRXaPPMdN8Co/aMH0NImkzSxmYZ7FrGaGPN7Rbov4UbWf4X3mHocst6MpiURaqdHqrKOHRPPVmqzW1ux/6F4hoI4YL0hHEvjNb34TbrOiw8kZcjvVq/c8h5544AFyOz7yyCPDsjLn90knhOGxme+rHNk1lRCBuXVrjXMkb32dRo0ZQP5bP6WQJGFqTTtit3LKtqtDK2S0kZgBYL+166gueaqLHk/GeL/BQcpJNlmUd2sF05xqCDHtfezbsBayzf5uhP8rxtE07+OrMivut8xTXDFetg6lWDPP1IxWzg6M+caBluz04lzrkzHLqEu0ZnvF/kFaPdfUyn1wn8G9RkAND6hWZVweGqLI4QXOv6VdoyW3HjFJNto60DweWNgccsv6KEry2896e1jWxg4H55zzfj6nGodDAa8NLaEag2WEQoZodyEVZpfrwdOOeZFZf7c4o1edu7Flm2gO+TK3XLISAMKcWKGxFUYDQC7H4wY7Nu3cKZnp99mH8kwWc/Qt1qxRsUDPz7LAgDBQOd5fhxExXL8KZ6+/9NKvhGU2598555wDAOjuFkapVGJGjccnLaD2mGUrlYSlC9hs0dlBonkd/X31qv0RwEWgdnBwcHBwcHDYLRalGTIGSMU8PLz5wXCbte+ls2QL7m7rDcvyvFqM8gw0kZYVmPV0m86rvCOsB8py1upsp8xm1+5LK9CJCVop6MB0JtA2zN1hflYn1KEsaNX3yAxO0LQynK0jUMfZmfd8rvgt0cr9stnlvblCtK1q5tFAadu2XSDNOnNTYYsQBtbt3k7EfS3ssbu3en7MfNgccgCQ4KBqdnVTUXnlBvp7sGt87qrxiYaPAOVGFf3LJSP78BgxoNa2XlPPtXeQWK4JXp394IbrwzLLBBz7Esn8PDbDuhF+VF0ZeWZejLpvPGHzlkn/qsSbmSHN+llWpWHmMjDW3V7JVBBLp/gcdFx7v7jI/+lBCjbZ1UUMbywhffGWP/4XAOCoo44CAHS2q1xLVjfGf+vWb6saVXXwg+a61pQvvq2X3eIpdsrGJIzwWXUsynCvUGOwGzYgPNhumy98RquF6CwKC1ADztyxxLb7pjxMrOnYto2yps/kxT08nUwjXyliT8PzgHS7wdCQ5MPr7KDvgrUEaNdoG6phZITYApvLDQCiMXpvNg8XIHnANrC26JPnfSoss27f69cTC2H1MQDgBdYywddWrLV1yQ/sJzGYq4tpHqMZ4ftT7KLH3zTrfm9aBeBl5r2uXPLDXR4dM6SZYcuIWdZHP1PL9ljW7SmHHBqWbdj4MACgu0OYmvD8HFqjqNpYlPU8UQ4MWlLBkG0YCPt+P84BIAHgnnsp7MLFX6Icg5d8+ZKwzAaMTHOQV81ER5kFiqq8lknWodlvjs5o39WVwfTMfEGZBY4ZcnBwcHBwcFjScJMhBwcHBwcHhyWNRZnJGgEwVfXxxjefEm7bvHViniMIlhjMZlSuoSpH4FRWG/vbb2F9unCS6OAPfviTAICoYiyDR+n9bvFYzgjDaLiP4TkfSzyygav5OcxjcJsXs125W8nCW5VZWXBctcgiM8/2Weo61zDbvXrPIYCPfdatDv9uT5OIsFAmmryu3IKtOcm6jTZZEZm+HhoR88L3r7oGADDOucyaorja0Ab8t6feVtWKRPnvhioLa7NQme2c/ihvLpuhey1xUigbURYALrrkywCAFxzzAgBAPS9C4NmNIa7qZ+WS2mnc7l7lSqtYwOp+vOadgTAQgT1XC0MHyuEwuFDRbKsW3Gp/i/kcG2a/BBWB2kZ4V/tYcXG5zs4pyvxRLBebIlLvKfg+kM8H2G+/g8JttRY6YYskR5e2oVi0T88HPnA2AODCC9eH29IpepvFogonzrDWNxus/tL1XwrLykVuKQsYnJqsUWGZ/lxGmguVKcy3RS308nPQ6nXN5z8zT5kO1MyWRFgLa73FYNnOkQ/yKkWb9ZqvtwjKbpUpyhIOGy2n1Te7jUPqdHTSmLdzx645+2QyZArTQu1LLvka/bBhN9RjT9gurhUW9tHztmRSDiiX6/M+
Tg3HDDk4ODg4ODgsaSzKtd4YMwpgy253dHB4YrBPEAR9e7ICrk847GVwfcLBoRkL6hOLmgw5ODg4ODg4OPy9wZnJHBwcHBwcHJY03GTIwcHBwcHBYUnjSTUZMsbkd7/XkxfGmC5jzA3GmL8aY/5ojDl0VnnEGPMXY8xP1LYrjDGbjDF38r+n8/aDjDH/bYypGGM+MM81v2mMOeTxuyuHxxOuTyyqTxhjzCXGmIf5fM94hGvebIyZG3XO4UkB1ycW1SfeyOe52xjze2PM0x7hmn/334lFudY7PO74NwB3BkHwKmPMQQDWA3iRKn8PgPsAZGcd98EgCH44a9sEgHcDOGG+CwZB8Nb/W5UdHB5XPJZ94mUA9ud//wDgMv6/CUEQvPwxqruDw+OBx7JPbAJwdBAEk8aYlwH4Olr3ib/778STihmyMMa8wBjzG2PMjcaYjcaYz/IM9488w92X99vXGPM/vO2CVisGY8waY8x9xphvGGPuNcbcYoxJcdnbjDH/a4y5yxhzvTEmzduvMMZcxufeyPW5nM9zhTr3S5md+bMx5jpjTIvY6004BMCvACAIgvsBrDHG9PO5VgJ4BYBvLuQZBUEwEgTB/4JC8sz3LH9tjHkW/84bYy7m5/BLY0wfb382rx7uNMZcZIy5ZyF1cHji4PrEgnA8gO8EhP8B0GmMGZy9kzFmszGml5/D/caY7/F9/FDd78u57A5DbNNPZp/HYc/C9YndIwiC3wdBYDOb/g+AlY/wLP/uvxNPyskQ42kA3g7gYACnADggCIIjQI3gXbzPlwF8OQiCwwBsn+dc+wNYHwTBUwBMAXgNb/9REATPDoLgaaCZ9lvUMV0AngPgHAA3AbgYwFMAHGaMeboxphfARwG8OAiCZwD4E4D3AYAx5nxjzCtb1OMuAK/mfY4AsA+kcX4JwL+idZiuT3EjvNgYk2hRvlC0AfgTP4ffALDJZL4N4MwgCJ6OvTeepIPrExqt+sQKANvUPtt523w4EMBXgyA4GMAMgLOMMUkAXwPwsiAInglgj7qyO8wL1ycEu/tOvAXAz+a5f4u/y+/Ek3ky9L9BEAwFQVABsAHALbz9bgBr+PdzAFzHv6+e51ybgiC4k3/foY4/1BjzW2PM3QDeCGrEFj8OKC7B3QB2BUFwdxAEPoB7+fgjQTP43xlj7gRwGqjRIgiCjwdBcFOLenwWtFq9E9RR/wKgYYw5DsBIEAR3tDjmXAAHAXg2gG4AH5rnPncHH8D3+fdVAI4ypJ3IBEHw37x9vufosGfh+gThsewT24Ig+B3/vgrAUXzujUEQbOLt1/wfzu/w+ML1CcK8fcIY80LQZGghfeXv8jvxZNYM6YDhvvrbx+LvS5+rASDFv68AcEIQBHcZY04H8IIWx+hr6+s3ANwaBMEbFlqJIAhmALwJILEnyJ67EcDrAbzSGPNyAEkAWWPMVUEQnBwEwZCtjzHm2wAeUSz9KOCCUD254PrE/H1iB4BV6vQredu8VdjN3w57N1yf2M13whjzVBBT9rIgCMYXWg9dpUdxzF6HJzMztBD8D4TKPPFRHJ8BMGSMiYFm/Iu99vOMMfsBgDGmzRhzwHwHGGM6jTE288tbAfxXEAQzQRCcGwTByiAI1oDu41dBEJzMxwzy/wYklv6/2Gk9AK/l3ycBuD0IgikAOWOMFdU9mufosPdgKfeJmwCcaghHAphWH4lHwmpjzHP490kAbgfwAIB1xpg1vP31C7h3h70XS7ZPGGNWA/gRgFOCIHhwgXX+u/xO/L1Pht4L4H3GmL8C2A/A9CKP/xiAPwD4HYD7F3NgEASjAE4HcA1f/79BNOV8tuCDAdxjjHkA5PnyngVc6ntMz94NoBfABXyNAWPMdpD9+aPGmO3GmCyX3WyMWa6ry/8XABzBwrdjAJzP298C4BtMy7Zh8c/RYe/Bku0TAG4GraAfBvANAGfZA7htN1WX/38AwNnGmPtA+o/LgiAo8bE/N8bcASAH1yeezFjKfeLjAHoAfNWQ8PlP9oCl9p34u07HYUjVXwqCIDDGnAjgDUEQHL+n67U3gTvIK4Mg2GSMyQdBMMeTwRjTHgRBnn9/GMBgEAQL6YAOexlcn5gfxpgIgBEAAyBx9U+CIDi0xX7tQRDkeaW9HsBDQRBc/MTW1uGxgOsTu8dS+E48mTVDC8EzAVzKA9YUgDfv4frsVTDG3ArgbiUEfSS8whhzLqi9bAGtZByenHB9Yn7cC+CbQRDU6BE9It5mjDkNQBwkYP3aE1E5h8cFrk/Mg6Xynfi7ZoYcHBwcHBwcHHaHv3fNkIODg4ODg4PDvHCTIQcHBwcHB4clDTcZcnBwcHBwcFjScJMhBwcHBwcHhyUNNxlycHBwcHBwWNJwkyEHBwcHBweHJY3/DzsqlFH7/CXaAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "import matplotlib.pyplot as plt\n",
+ "plt.figure(figsize=(10, 10))\n",
+ "sample_idxs = np.random.choice(50000, size=25, replace=False)\n",
+ "\n",
+ "for img_id, img_name in enumerate(os.listdir(INFER_DATA_PATH)):\n",
+ " plt.subplot(1, 3, img_id + 1)\n",
+ " plt.xticks([])\n",
+ " plt.yticks([])\n",
+ " im = Image.open(os.path.join(INFER_DATA_PATH, img_name))\n",
+ " plt.imshow(im, cmap=plt.cm.binary)\n",
+ " plt.xlabel(\"Img name: \" + img_name)\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 七、开始预测\n",
+ "> 飞桨2.2 CTC Decoder 相关API正在迁移中,本节暂时使用简易版解码器。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Predict begin...\n",
+ "step 1/1 [==============================] - 7ms/step\n",
+ "Predict samples: 3\n",
+ "文件名:9451.jpg,推理结果为:[3, 4, 6, 3]\n",
+ "文件名:9450.jpg,推理结果为:[8, 2, 0, 5]\n",
+ "文件名:9452.jpg,推理结果为:[0, 3, 0, 0]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# 编写简易版解码器\n",
+ "def ctc_decode(text, blank=10):\n",
+ " \"\"\"\n",
+ " 简易CTC解码器\n",
+ " :param text: 待解码数据\n",
+ " :param blank: 分隔符索引值\n",
+ " :return: 解码后数据\n",
+ " \"\"\"\n",
+ " result = []\n",
+ " cache_idx = -1\n",
+ " for char in text:\n",
+ " if char != blank and char != cache_idx:\n",
+ " result.append(char)\n",
+ " cache_idx = char\n",
+ " return result\n",
+ "\n",
+ "\n",
+ "# 实例化推理模型\n",
+ "model = paddle.Model(Net(is_infer=True), inputs=input_define)\n",
+ "# 加载训练好的参数模型\n",
+ "model.load(CHECKPOINT_PATH)\n",
+ "# 设置运行环境\n",
+ "model.prepare()\n",
+ "\n",
+ "# 加载预测Reader\n",
+ "infer_reader = InferReader(INFER_DATA_PATH)\n",
+ "img_names = infer_reader.get_names()\n",
+ "results = model.predict(infer_reader, batch_size=BATCH_SIZE)\n",
+ "index = 0\n",
+ "for text_batch in results[0]:\n",
+ " for prob in text_batch:\n",
+ " out = ctc_decode(prob, blank=10)\n",
+ " print(f\"文件名:{img_names[index]},推理结果为:{out}\")\n",
+ " index += 1"
+ ]
+ },
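+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# (可选)本数据集的标签均为数字,可将解码得到的索引直接拼接成最终的验证码字符串\n",
+ "# 以上一单元格中 9451.jpg 的推理结果为例\n",
+ "sample_result = [3, 4, 6, 3]\n",
+ "print(\"识别结果:\", \"\".join(str(c) for c in sample_result))  # 3463"
+ ]
+ },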
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "py35-paddle1.2.0"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}
diff --git a/docs/practices/cv/image_ocr/image_ocr.ipynb b/docs/practices/cv/image_ocr/image_ocr.ipynb
deleted file mode 100644
index 95f6699855b..00000000000
--- a/docs/practices/cv/image_ocr/image_ocr.ipynb
+++ /dev/null
@@ -1,739 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "# 通过OCR实现验证码识别\n",
- "\n",
- "**作者:** [GT_老张](https://github.com/GT-ZhangAcer) \n",
- "\n",
- "**时间:** 2021.11\n",
- "\n",
- "**摘要:** 本篇将介绍如何通过飞桨实现简单的CRNN+CTC自定义数据集OCR识别模型,数据集采用[CaptchaDataset](https://github.com/GT-ZhangAcer/CaptchaDataset)中OCR部分的9453张图像,其中前8453张图像在本案例中作为训练集,后1000张则作为测试集。 \n",
- "在更复杂的场景中推荐使用[PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)产出工业级模型,模型轻量且精度大幅提升。 \n",
- "同样也可以在[PaddleHub](https://www.paddlepaddle.org.cn/hubdetail?name=chinese_ocr_db_crnn_mobile&en_category=TextRecognition)中快速使用PaddleOCR。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "## 一、环境配置\n",
- "\n",
- "本教程基于Paddle 2.2.0 编写,如果你的环境不是本版本,请先参考官网[安装](https://www.paddlepaddle.org.cn/install/quick) PaddlePaddle 2.2 。"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "2.2.0\n"
- ]
- }
- ],
- "source": [
- "import paddle\n",
- "print(paddle.__version__)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "## 二、自定义数据集读取器\n",
- "\n",
- "常见的开发任务中,我们并不一定会拿到标准的数据格式,好在我们可以通过自定义Reader的形式来随心所欲读取自己想要数据。 \n",
- "\n",
- "设计合理的Reader往往可以带来更好的性能,我们可以将读取标签文件列表、制作图像文件列表等必要操作在`__init__`特殊方法中实现。这样就可以在实例化`Reader`时装入内存,避免使用时频繁读取导致增加额外开销。同样我们可以在`__getitem__`特殊方法中实现如图像增强、归一化等个性操作,完成数据读取后即可释放该部分内存。 \n",
- "需要我们注意的是,如果不能保证自己数据十分纯净,可以通过`try`和`expect`来捕获异常并指出该数据的位置。当然也可以制定一个策略,使其在发生数据读取异常后依旧可以正常进行训练。 "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "### 2.1 数据展示\n",
- "\n",
- "

\n",
- "
\n",
- "\n",
- "点此[快速获取本节数据集](https://aistudio.baidu.com/aistudio/datasetdetail/57285),待数据集下载完毕后可使用`!unzip OCR_Dataset.zip -d data/`命令或熟悉的解压软件进行解压,待数据准备工作完成后修改本文“训练准备”中的`DATA_PATH = 解压后数据集路径`。"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "# 下载数据集 \n",
- "!wget -O OCR_Dataset.zip https://bj.bcebos.com/v1/ai-studio-online/c91f50ef72de43b090298a38281e9c59a2d741eadd334f1cba7c710c5496e342?responseContentDisposition=attachment%3B%20filename%3DOCR_Dataset.zip&authorization=bce-auth-v1%2F0ef6765c1e494918bc0d4c3ca3e5c6d1%2F2020-10-27T09%3A50%3A21Z%2F-1%2F%2Fddc4aebed803af6c57dac46abba42d207961b78e7bc81744e8388395979b66fa"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "# 解压数据集\n",
- "!unzip OCR_Dataset.zip -d data/"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "import os\n",
- "\n",
- "import PIL.Image as Image\n",
- "import numpy as np\n",
- "from paddle.io import Dataset\n",
- "\n",
- "# 图片信息配置 - 通道数、高度、宽度\n",
- "IMAGE_SHAPE_C = 3\n",
- "IMAGE_SHAPE_H = 30\n",
- "IMAGE_SHAPE_W = 70\n",
- "# 数据集图片中标签长度最大值设置 - 因图片中均为4个字符,故该处填写为4即可\n",
- "LABEL_MAX_LEN = 4\n",
- "\n",
- "\n",
- "class Reader(Dataset):\n",
- " def __init__(self, data_path: str, is_val: bool = False):\n",
- " \"\"\"\n",
- " 数据读取Reader\n",
- " :param data_path: Dataset路径\n",
- " :param is_val: 是否为验证集\n",
- " \"\"\"\n",
- " super().__init__()\n",
- " self.data_path = data_path\n",
- " # 读取Label字典\n",
- " with open(os.path.join(self.data_path, \"label_dict.txt\"), \"r\", encoding=\"utf-8\") as f:\n",
- " self.info = eval(f.read())\n",
- " # 获取文件名列表\n",
- " self.img_paths = [img_name for img_name in self.info]\n",
- " # 将数据集后1024张图片设置为验证集,当is_val为真时img_path切换为后1024张\n",
- " self.img_paths = self.img_paths[-1024:] if is_val else self.img_paths[:-1024]\n",
- "\n",
- " def __getitem__(self, index):\n",
- " # 获取第index个文件的文件名以及其所在路径\n",
- " file_name = self.img_paths[index]\n",
- " file_path = os.path.join(self.data_path, file_name)\n",
- " # 捕获异常 - 在发生异常时终止训练\n",
- " try:\n",
- " # 使用Pillow来读取图像数据\n",
- " img = Image.open(file_path)\n",
- " # 转为Numpy的array格式并整体除以255进行归一化\n",
- " img = np.array(img, dtype=\"float32\").reshape((IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W)) / 255\n",
- " except Exception as e:\n",
- " raise Exception(file_name + \"\\t文件打开失败,请检查路径是否准确以及图像文件完整性,报错信息如下:\\n\" + str(e))\n",
- " # 读取该图像文件对应的Label字符串,并进行处理\n",
- " label = self.info[file_name]\n",
- " label = list(label)\n",
- " # 将label转化为Numpy的array格式\n",
- " label = np.array(label, dtype=\"int32\")\n",
- "\n",
- " return img, label\n",
- "\n",
- " def __len__(self):\n",
- " # 返回每个Epoch中图片数量\n",
- " return len(self.img_paths)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "## 三、模型配置"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "### 3.1 定义模型结构以及模型输入\n",
- "\n",
- "模型方面使用的简单的CRNN-CTC结构,输入形为CHW的图像在经过CNN->Flatten->Linear->RNN->Linear后输出图像中每个位置所对应的字符概率。考虑到CTC解码器在面对图像中元素数量不一、相邻元素重复时会存在无法正确对齐等情况,故额外添加一个类别代表“分隔符”进行改善。\n",
- "\n",
- "CTC相关论文:[Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neu](http://people.idsia.ch/~santiago/papers/icml2006.pdf) \n",
- "\n",
- "\n",
- "

\n",
- "
\n",
- "\n",
- "网络部分,因本篇采用数据集较为简单且图像尺寸较小并不适合较深层次网络。若在对尺寸较大的图像进行模型构建,可以考虑使用更深层次网络/注意力机制来完成。当然也可以通过目标检测形式先检出文本位置,然后进行OCR部分模型构建。\n",
- "\n",
- "\n",
- "

\n",
- "
\n",
- "\n",
- "PaddleOCR效果图\n",
- ""
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "import paddle\n",
- "\n",
- "# 分类数量设置 - 因数据集中共包含0~9共10种数字+分隔符,所以是11分类任务\n",
- "CLASSIFY_NUM = 11\n",
- "\n",
- "# 定义输入层,shape中第0维使用-1则可以在预测时自由调节batch size\n",
- "input_define = paddle.static.InputSpec(shape=[-1, IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W],\n",
- " dtype=\"float32\",\n",
- " name=\"img\")\n",
- "\n",
- "# 定义网络结构\n",
- "class Net(paddle.nn.Layer):\n",
- " def __init__(self, is_infer: bool = False):\n",
- " super().__init__()\n",
- " self.is_infer = is_infer\n",
- "\n",
- " # 定义一层3x3卷积+BatchNorm\n",
- " self.conv1 = paddle.nn.Conv2D(in_channels=IMAGE_SHAPE_C,\n",
- " out_channels=32,\n",
- " kernel_size=3)\n",
- " self.bn1 = paddle.nn.BatchNorm2D(32)\n",
- " # 定义一层步长为2的3x3卷积进行下采样+BatchNorm\n",
- " self.conv2 = paddle.nn.Conv2D(in_channels=32,\n",
- " out_channels=64,\n",
- " kernel_size=3,\n",
- " stride=2)\n",
- " self.bn2 = paddle.nn.BatchNorm2D(64)\n",
- " # 定义一层1x1卷积压缩通道数,输出通道数设置为比LABEL_MAX_LEN稍大的定值可获取更优效果,当然也可设置为LABEL_MAX_LEN\n",
- " self.conv3 = paddle.nn.Conv2D(in_channels=64,\n",
- " out_channels=LABEL_MAX_LEN + 4,\n",
- " kernel_size=1)\n",
- " # 定义全连接层,压缩并提取特征(可选)\n",
- " self.linear = paddle.nn.Linear(in_features=429,\n",
- " out_features=128)\n",
- " # 定义RNN层来更好提取序列特征,此处为双向LSTM输出为2 x hidden_size,可尝试换成GRU等RNN结构\n",
- " self.lstm = paddle.nn.LSTM(input_size=128,\n",
- " hidden_size=64,\n",
- " direction=\"bidirectional\")\n",
- " # 定义输出层,输出大小为分类数\n",
- " self.linear2 = paddle.nn.Linear(in_features=64 * 2,\n",
- " out_features=CLASSIFY_NUM)\n",
- "\n",
- " def forward(self, ipt):\n",
- " # 卷积 + ReLU + BN\n",
- " x = self.conv1(ipt)\n",
- " x = paddle.nn.functional.relu(x)\n",
- " x = self.bn1(x)\n",
- " # 卷积 + ReLU + BN\n",
- " x = self.conv2(x)\n",
- " x = paddle.nn.functional.relu(x)\n",
- " x = self.bn2(x)\n",
- " # 卷积 + ReLU\n",
- " x = self.conv3(x)\n",
- " x = paddle.nn.functional.relu(x)\n",
- " # 将3维特征转换为2维特征 - 此处可以使用reshape代替\n",
- " x = paddle.tensor.flatten(x, 2)\n",
- " # 全连接 + ReLU\n",
- " x = self.linear(x)\n",
- " x = paddle.nn.functional.relu(x)\n",
- " # 双向LSTM - [0]代表取双向结果,[1][0]代表forward结果,[1][1]代表backward结果,详细说明可在官方文档中搜索'LSTM'\n",
- " x = self.lstm(x)[0]\n",
- " # 输出层 - Shape = (Batch Size, Max label len, Signal) \n",
- " x = self.linear2(x)\n",
- "\n",
- " # 在计算损失时ctc-loss会自动进行softmax,所以在预测模式中需额外做softmax获取标签概率\n",
- " if self.is_infer:\n",
- " # 输出层 - Shape = (Batch Size, Max label len, Prob) \n",
- " x = paddle.nn.functional.softmax(x)\n",
- " # 转换为标签\n",
- " x = paddle.argmax(x, axis=-1)\n",
- " return x"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "## 四、训练准备"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "### 4.1 定义label输入以及超参数\n",
- "监督训练需要定义label,预测则不需要该步骤。"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "# 数据集路径设置\n",
- "DATA_PATH = \"./data/OCR_Dataset\"\n",
- "# 训练轮数\n",
- "EPOCH = 10\n",
- "# 每批次数据大小\n",
- "BATCH_SIZE = 16\n",
- "\n",
- "label_define = paddle.static.InputSpec(shape=[-1, LABEL_MAX_LEN],\n",
- " dtype=\"int32\",\n",
- " name=\"label\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "### 4.2 定义CTC Loss\n",
- "\n",
- "了解CTC解码器效果后,我们需要在训练中让模型尽可能接近这种类型输出形式,那么我们需要定义一个CTC Loss来计算模型损失。不必担心,在飞桨框架中内置了多种Loss,无需手动复现即可完成损失计算。\n",
- " \n",
- "使用文档:[CTCLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-beta/api/paddle/nn/functional/loss/ctc_loss_cn.html#ctc-loss)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "class CTCLoss(paddle.nn.Layer):\n",
- " def __init__(self):\n",
- " \"\"\"\n",
- " 定义CTCLoss\n",
- " \"\"\"\n",
- " super().__init__()\n",
- "\n",
- " def forward(self, ipt, label):\n",
- " input_lengths = paddle.full(shape=[BATCH_SIZE],fill_value=LABEL_MAX_LEN + 4,dtype= \"int64\")\n",
- " label_lengths = paddle.full(shape=[BATCH_SIZE],fill_value=LABEL_MAX_LEN,dtype= \"int64\")\n",
- " # 按文档要求进行转换dim顺序\n",
- " ipt = paddle.tensor.transpose(ipt, [1, 0, 2])\n",
- " # 计算loss\n",
- " loss = paddle.nn.functional.ctc_loss(ipt, label, input_lengths, label_lengths, blank=10)\n",
- " return loss"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "### 4.3 实例化模型并配置优化策略"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "# 实例化模型\n",
- "model = paddle.Model(Net(), inputs=input_define, labels=label_define)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "# 定义优化器\n",
- "optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters())\n",
- "\n",
- "# 为模型配置运行环境并设置该优化策略\n",
- "model.prepare(optimizer=optimizer,\n",
- " loss=CTCLoss())"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "## 五、开始训练\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "The loss value printed in the log is the current step, and the metric is the average value of previous steps.\n",
- "Epoch 1/10\n",
- "step 529/529 [==============================] - loss: 0.0891 - 9ms/step \n",
- "save checkpoint at /home/aistudio/output/0\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0830 - 6ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 2/10\n",
- "step 529/529 [==============================] - loss: 0.0199 - 10ms/step \n",
- "save checkpoint at /home/aistudio/output/1\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0353 - 6ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 3/10\n",
- "step 529/529 [==============================] - loss: 0.2133 - 10ms/step \n",
- "save checkpoint at /home/aistudio/output/2\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0259 - 6ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 4/10\n",
- "step 529/529 [==============================] - loss: 0.0133 - 9ms/step \n",
- "save checkpoint at /home/aistudio/output/3\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0210 - 6ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 5/10\n",
- "step 529/529 [==============================] - loss: 0.0110 - 10ms/step \n",
- "save checkpoint at /home/aistudio/output/4\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0130 - 5ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 6/10\n",
- "step 529/529 [==============================] - loss: 0.0150 - 9ms/step \n",
- "save checkpoint at /home/aistudio/output/5\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0111 - 6ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 7/10\n",
- "step 529/529 [==============================] - loss: 0.0039 - 9ms/step \n",
- "save checkpoint at /home/aistudio/output/6\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0093 - 6ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 8/10\n",
- "step 529/529 [==============================] - loss: 0.0100 - 9ms/step \n",
- "save checkpoint at /home/aistudio/output/7\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0059 - 5ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 9/10\n",
- "step 529/529 [==============================] - loss: 0.0096 - 9ms/step \n",
- "save checkpoint at /home/aistudio/output/8\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0061 - 5ms/step \n",
- "Eval samples: 1000\n",
- "Epoch 10/10\n",
- "step 529/529 [==============================] - loss: 0.0066 - 10ms/step \n",
- "save checkpoint at /home/aistudio/output/9\n",
- "Eval begin...\n",
- "step 63/63 [==============================] - loss: 0.0054 - 6ms/step \n",
- "Eval samples: 1000\n",
- "save checkpoint at /home/aistudio/output/final\n"
- ]
- }
- ],
- "source": [
- "# 执行训练\n",
- "model.fit(train_data=Reader(DATA_PATH),\n",
- " eval_data=Reader(DATA_PATH, is_val=True),\n",
- " batch_size=BATCH_SIZE,\n",
- " epochs=EPOCH,\n",
- " save_dir=\"output/\",\n",
- " save_freq=1,\n",
- " verbose=1,\n",
- " drop_last=True)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "## 六、预测前准备"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "### 6.1 像定义训练Reader一样定义预测Reader"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 15,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "# 与训练近似,但不包含Label\n",
- "class InferReader(Dataset):\n",
- " def __init__(self, dir_path=None, img_path=None):\n",
- " \"\"\"\n",
- " 数据读取Reader(预测)\n",
- " :param dir_path: 预测对应文件夹(二选一)\n",
- " :param img_path: 预测单张图片(二选一)\n",
- " \"\"\"\n",
- " super().__init__()\n",
- " if dir_path:\n",
- " # 获取文件夹中所有图片路径\n",
- " self.img_names = [i for i in os.listdir(dir_path) if os.path.splitext(i)[1] == \".jpg\"]\n",
- " self.img_paths = [os.path.join(dir_path, i) for i in self.img_names]\n",
- " elif img_path:\n",
- " self.img_names = [os.path.split(img_path)[1]]\n",
- " self.img_paths = [img_path]\n",
- " else:\n",
- " raise Exception(\"请指定需要预测的文件夹或对应图片路径\")\n",
- "\n",
- " def get_names(self):\n",
- " \"\"\"\n",
- " 获取预测文件名顺序 \n",
- " \"\"\"\n",
- " return self.img_names\n",
- "\n",
- " def __getitem__(self, index):\n",
- " # 获取图像路径\n",
- " file_path = self.img_paths[index]\n",
- " # 使用Pillow来读取图像数据并转成Numpy格式\n",
- " img = Image.open(file_path)\n",
- " img = np.array(img, dtype=\"float32\").reshape((IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W)) / 255\n",
- " return img\n",
- "\n",
- " def __len__(self):\n",
- " return len(self.img_paths)"
- ]
- },
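下面补充一个使用上述 InferReader 的最小示意(仅为草图,假设前文的依赖与 IMAGE_SHAPE_* 常量已定义,且 `./sample_img` 目录中存放了若干 jpg 图片):

```python
# 文件夹模式:读取目录下全部 jpg 图片
reader = InferReader(dir_path="./sample_img")
print(reader.get_names())              # 按读取顺序返回文件名列表
print(len(reader), reader[0].shape)    # 样本数与单个样本形状 (C, H, W)

# 单图模式:只读取一张指定图片
single = InferReader(img_path="./sample_img/9450.jpg")
print(single.get_names())              # ['9450.jpg']
```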
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "### 6.2 参数设置"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "# 待预测目录 - 可在测试数据集中挑出\\b3张图像放在该目录中进行推理\n",
- "INFER_DATA_PATH = \"./sample_img\"\n",
- "# 训练后存档点路径 - final 代表最终训练所得模型\n",
- "CHECKPOINT_PATH = \"./output/final.pdparams\"\n",
- "# 每批次处理数量\n",
- "BATCH_SIZE = 32"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "### 6.3 展示待预测数据"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 18,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAkMAAABmCAYAAADIx5U3AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvIxREBQAAIABJREFUeJztfXeYZVWV/Tr35Vf1XuWuqk50NxkBxYCojCiijMoIphGRZAIFE4ZRxgQMBkBFkZYxISgCisAAihL0pyM6oyMKAgJN51Q5vhzv74+9z937VT2qqxDoxjrr++qrqntuOPfec849Z+291za+78PBwcHBwcHBYbHC290VcHBwcHBwcHDYnXCTIQcHBwcHB4dFDTcZcnBwcHBwcFjUcJMhBwcHBwcHh0UNNxlycHBwcHBwWNRwkyEHBwcHBweHRQ03GXJwcHBwcHBY1HCTIQcHBwcHB4dFDTcZcnBwcHBwcFjUCC9kZ2OMDwBhLxRsq9VrVMb/69lVfdbx8neNha/7+/uCbYODgwAAK4rtqf3tthBvqy9QOHuhOttm17vMedzMe99TYN9Ps+cx32c089k0e1b+jN8NZU0OCPmz9/e4srV6szKDet2H7zc729MH2ydCIWn5Na6wxw2+rlTebR+wm0yTNu7goKGaVtAX5irbE/qEgUHIqO+ET9+JsAnz/1V9BP/2+T+pvt9028wyT5XZB0T7L1+2PCjbNrB15gkCeFxXjx+dr0bw5uNb/fFONauezc9gZu01F8yM382us9DxW2ow+0j7vP1dbHuiQ5adQ1Tr1Ya66LOaps/IzCgT1Jt8dT3PQ71eR30efWJBkyE6IIyu1rbg/6npMQCAbfZJdcZi1VaSEI9Jo50q0tYzz3hHsO2iiy6i40rUcRJROVetRL/TvK1YUmXzqPdCJyfNKLOZT3OufcoLvN7TBftIdfOyz08/o5mNXN97aMa2Zs/Bnquit/GBVW/29Vr5n4p6makWOmAyU2uoJwAk41Fki3vGUw4BaG9NBv9PTmUBALFwHABQqhSCsmiEfpf5wYQjcp6q/T48CaPNQr6GDcPEHAfW5xq/57Ntt36iHx+hOcr0s5k5hix4yjHHuzRNymy9WltkWzbbeKpUq/S+bKaO6h4woTYwiCKOVKQ12JYtU8U74+0AgMnCZFDmBR9ZqnwE0ikqPILobTV+ExUeESJGPhQlv8znonN+5P0fD8o+9Lmzeafg5AFao1TXaJmuU/OLQVmoyYSsDCqvozrzVEE7qQcjo25h9gNJZdU5v0xSFpnxW84g1y6p/f0ZfVXXwJ7DHl9XDdOeIcxH6NqF+Miaeg7VYFJoZu0/E7q7tLemAQCTPH9onIjQe43yxLnq63E+xPcgT8LwiqBQy8+6n9aWJKZz+TlqJVjQZCgSjqCvqxs3/+SGYNvK5f0AgFKOGrenZvz9/b0AgLGp6YbfABBtTQEAUh2dwbZSmT4a738/NVq/Kh+RyZEBAEBvOzXaWlVmQ75H2/x5Wf0WZhk0fpPZ5hyv3PCLqJpmI2WzazfZ5j9J1kszu56RGp27YTLEl9Pb6jPq76lCy+IYpjLCetVmd2Rar6qmMCXutmU7mKlnmyxRI+9VTOHA0AjXi87VuaQ/KOvsXrpHsG8hz6A1GcYvf3V3sG2fvfcFABR5slYo5IKypct6AAATU8SClsvSxiMR28HV0OXzNp8HfP0FDr6g9Rm/AWOPw8x9ABjuo6bGp9RP0v49ezUme+j2GZ5Rr7naeLVJ2e5HqOFT1ohmE55m7W72fvPr12bWgbJPa4J+T0xMBNvicZp0x+I05u3YsTMoe9lRx2B0Itukdk8vQiaEVCyFW//rlmDb8uXE0JTy1N4nJ2UytGqvvaisRGN6tSQfv0g8BgCYnpZvRyhMbc6L0O9YIi77R6mfFIs8mVHU2SCoz538lpMBAOmwTNbGNg8DADrjHQCApBcLyqplOpfvqf7F3bEa4oWaKpNFg/28Sn82dbuNdiqFZXyc2RT0mBtl2i+qVoS2+1f59CU1bFRD9YZ9wqrR2nPYsoYJv7E15slQXQ6s8Y41NW5XqvR32TJ/IZmkRPm9VKvUv+pV6f+jo/S8ly6h8TAWkalIsUCTl1RLYlYdctPUfiIxmQBXmF2KxOgc04VMUPayo4/CfOF8hhwcHBwcHBwWNdxkyMHBwcHBwWFRY0Fmskq1goGhAXR2ic+QFyKKKp4gemxqTCjd0TE+js0pqXQiKDNhouEmxoaCbX193XwdoiUzE2NB2fJlZCKpFokCCxuZxxVKu6bf52dCEzSz4Vs0M5PN3L/W7HJzmr+eaJnGrg1HzOqirp6frXrTOtt91P1Z5/cYU9CVsjIzsMnFC1ubs/C6NaYzo3FqKx1paUfVDFHjGzY8FmxLsl3Z82j/iUlpD9GoQbGy+x0kanUf2VwFq9csC7Z5/JDzBeoL4Yg82IlJoodLZaKCOzrlGWQDhxB1XzON/7ot2E2mmWlrpiedLuP+4lt7/3yfI11bm3bqZj4ee43H72nw5/Bcbyzh5xW8kibvIoB+Lt6MbbrvzbRVyHGFAvWJqjIvhNlEVC5TWVubtJ+Rkel5+U8+1aj5NWSKGaxcuTLYFouR2clw5MuyZdJf7PPP56lP6LHd3ntLi3KcYhN+hYN3RkZGgqLWVjJ9pdI0dmzevDko27t/FQBg26NbAAAH7XNgUNaVpm9PYZxM2q3tcr3WFjpXJi/ml1KZTHo1ayZTpmbf1t+ncdGo9uWxHcpw26lHyuo4NEBbr61pSvtUBsE6NqhIW9dtoMaM4wGgwn/O5esZjtG3Wre9GjukecqkFY+TKSzGbhFGuVfYb4DHtYjGxfS4YtmhAICd27cDAH76i5/Lcfy8DNsJX3XMK4My+35Dqg6ZPI2bUTaRtkWlT2SyuXn3iT1zdHJwcHBwcHBweJqwsNB6ULhzskUOGx0lZmfNcnJ8jYU7gjI70y8VyemppU2cpaey5EDnh8QR6q0nnQgAuOaa7wMAXnvsMUFZkR1NIzzzzGTFUTDVIk63gHJgC2q9APCMfe5Z4lPBDDXD3ztXnV3PGIdyNXOaax4dM8e9VmnOHVLsgF25Wue5clVWPuUsR2DUaP+6crZLJHmFkZPVg111gMMwky3i8Fguz5/PeCrheUAiIWwQAIyO7wAAJDnCrK0tJWWjtIqt1WnVWKsJq+Y1bTMz1jXzjL83M52CGxpoI8PR8NqDNqq3eo1lZlaJWsZqR+3GOMW6r1b3exC8OR5pY5+YMTY0KWvOzXozrtPkRTeTm+B239PTM+tc2Uka/zxvrli43QMPHuKheAMbsWHDBgBAGwfOdHTId8I6U/s8noRVGHGRHa67e5fM2t8GHHS0tQdl45PExlpH6lWrVgVlLd3Edtz3f3+h86jvxsolxGIV6/TNmp6S70uUHXMRlu9eJEIvzPOIOQk19C8u4/4SqqnIJ4/OEapzWXh247PtRBFkYH/ohm22z9kzhDVROZNl0jICfuO2ZsN+ld/F2Mh4sM2+s5VKrmD
9po0AgDvvvovqEJF7Peoocl62Y113d3dQ9uB99A5WrlgBAHjT618flBUL9M6t0/xvf/2boGznIFkH0u0ypkbi9K4rNXoX2hrR092FUeWsPxccM+Tg4ODg4OCwqGHmspfPhGeMH/GADY/9Jdh29x23AwCy4+wLUZQZdZpXAV6EZm41IzPrPM88W9u6gm1jk1P8F814+5YIk3TkC18AACiz3Xb5MpnV56dnrIIXzMAsVDBk1+efy+dod8KGWDYNGW4Sit8M1qYb4ZVSrSYz8YjXeOKhIfEJe+SxdQ3bqnU57ujXvAYAsHLlXsG2DRs3AwCS7FuUy4ucwgtedCSKZaBW3/0CcyEP2LnzgWCbzyuTECuEVhU7ZvtbOs16K2rVEoloFRGLGW2toW1bSi88a19jctDw53y3ekk5Oxw42OY3YSEs+xOE62v/MXvfdM/1enqOOuw+zBVaP38xofn7/M13aPDr9Px0/7JyDX39tKKuK/XZ9rZlqGH3iy56xvOjiGP9w+uDbaUCscIxFtayPk8AEA4xA9ZJ34LR0dGgzN679kUZn6I+0790KQAgoYSY7LH2CWSVBSHCLIxhVqa3U74hNda9i4aIjdafxZFxOmfvMmHoRjLE8Na92eJg1qcuxGH04Zr0mzBv89jBpxATDZxAlaSJT8985ByafXP8OVh/M4MhAkQ2JZqmZ5pMin6aZWoGhgeCba1p+sZ3LyHmbuu2zUHZ737/ewAimdDTLd/6o48+GgAwOUbMk20DAJBK0rVzORrD2tuF+QtF6P1oRr1UobY0nmF5n7Dc7K9++/9wyVe+jK3btu2yTzhmyMHBwcHBwWFRw02GHBwcHBwcHBY1FmQmM8b4YQBDAw8H2zJMISY4fDgZE1NYOMJUIOfOmMyK2m7PUnJY27JTwiJbUkSjW8XVbhV2fMtNPwYAvPP0U+lcY0KlRprR9wtCM7XYx1eQbhpaO3PPJsrVzRShm2MhNrYnkxFvdt3Hr3NFqYBbJGNEYyb4t1YPteqzPm+zoZAAcP3tbG5VtHb/Ugq/3baTaNmTT5PULfsdcCAyeR/V2u41CYRCxm9tAQYGNgXb6izlX+UcMlNTU0FZkingnm6SihgZFikKcYZV78HMIRsxS51a9YMQm9+atrkgJp//lT5r6vacMbX7jPM3nJNNOB63BaPahP2bzWV+vRN7IsTZfBdrQ25qwV5PWCpDUJ+j9VonXd2HpqbITaBnCZmIRkZkHFyz+pA9ykz26IOPBttC7PnrcZTAuKp3lM3D3ZyNYMuWLUHZXqxOrRWo4y1iugGAR9etC/5uYbPNgQdS2PzouJLjYEfeMqd7CiuV6VKew+A5R1ksLtfIsQxGxRPTXijJ0iHe7GRG1jnamseiNelf0Srdq8f7FKPyTbQmqlCTLlvh5lTV1usZx2mV6Znq0jWdFzRIUOnNOs6eK8cphPT8IBajumvF73KNnkmeVfY9Ze5qTdEztIEwDz74YFD2wP1/BSBO1S99yZFBWYklJWzZY48puRV2L9D1Ctm5R4rG1um8tJXDX3Q4sqUiavWaM5M5ODg4ODg4OMyFBYXWe4YSTdrVCQCEeSbtMxNSq8mMbZpF8rwwrSzTKVkZDg+R45RRq9koJ/br6qSZZFHldIpEaUa4YwetpKNhYRUioTkcIBlNp4XByk5PxRvzuvvNjgyOk7mk5KOxq8e/hwV6olm3dj23lZxjzXIqN7uuDcOenXvHho7bkEYAyLGMwtQktxGVZjvCq8MYh5dWy1KH1x1HDtR5lXzVrkCs0NiVV34nKPv0pz+NSy/7dpP6Pr2o14FcFti5U3JE2UVlWxu1554ecRwssaPm0CC1/5pa6glT1iQBUcAQNYmftfmOfOWA7c1YGjZjiHwrAKdyy9n+WFfnssyQ3abPxQ7TnpUAUKtniSVnB+qZMgF7CPzAg1TqJ06l8rxn3A4a+u4slmieEghzsMWTHD5vheaAmWH2jexqPG5QKO3+yA3PhNAabQ2kVQCgVqH2m0oRc6PFIm3rswJ/vb29QVl2msaRSkXG+HYOyNm6gwT7Nm3cKGUc/m1FHvXz6eomy8NYcZLrJG21o5u+Lxm2XlTDUta1hPrv9nHp47UwS2OwA7Ue7/2alWCgfqmNBPVAnYK/m/rzMkeou92voppZzbMMFP2v0qMhwueqB/tKmZxj9pUs0ZVSMiYWVWbW9HfZfvcTnIHaKJ0KG+wU4nylvV3iCH3AG48HICxQqSiJcet8zp3bSaJk79VrgrJMnvarqOCbCrNTVtZF95d8MT/vr6ljhhwcHBwcHBwWNdxkyMHBwcHBwWFRY0FmMh+kv1BVOaHarAbMJDtSK1XheIzoKmsmK5WE2kolWYFU5RWrVoi2K5WJQmxNCV36T/90LADg1/+P1Chf/rKXSsXmMJPNJAIb1WYtgTZb0aEe5GFSeYRMo3msrs4uuc8sRdqkTs3o9aCimsxroub7uGimRtHMIZy2lZoqntJxBs3qQKgpDtaaFYayrPYaEyo6HKd3F46x86Ayg0Ys18vtoJQXarTGeWysNg8gFPckJ7l7zxlnBmXdffti1xnpnib4wJIeybUUjrLjdIa0t8JKZR0+qeAWC/QMO9uljZfLTbR6wA6WgUekfrf2vOzQqJ2ewX83U4a2fxvbxvUwMOOcqs5e4LCtdrd1DfSlZjtX29xMdbNb/XofH4FNTOk02S3KjmF4q5mpyK2PnMOpupmOlz/HM+nqIvPM2Jg4AVs3BLY6IRKJzjpud6Pu15ApZQNTFQAYdqItsLpwXNXbaszYQINkXHJYWtOH1hkq5Mj81sL7vfa1rw3KrG7Xf//2twAac2vVy3T+173uBNqgnHAnpuj7FWdtvIm8BD1MjpO5px6X/a0ztc/ma/0WazbQxrZ/5ToC1oXyWZ9Lj2HWedl2Ie1cX+G+mlcO1FU+wD7lSFUOCM8wk1XUuYozzGTajGcfc6VoTU+qPXM/8ZXulTWLRdhkp81kFT5xLkvPsqdTzGT2PcVj5FNgc70BQEcHtftShMaW4eFhqUOI7jYSl3lGiOcc4+P0PSpV5FzRcBz1JoE+zeCYIQcHBwcHB4dFjQXnJqNVkRw2OUGz5t5Oduyryqo2X6QZYZi9uaJRUQrNsSPp2Wd9JNgWilF5mT1mYzFZnRYydJ2rrvwuAOC2W24Oyv75FXsDANqZpdKriDyrWNqQzj6V42bjenK80w571jl8SR+t9McnZIVQ5ay9ZV5tnHa6hHrnSzSTvvWW22iDWgxY578k598aHBQFzxUr6TpaUTOb5dBAXkkmErJSCvEqysoP+Gr58J3vkINxnD14PZXs6sQTT6JtPcTI2TxAABCK0vvMZ8UxviXB6w1eibz7zDOCslQnSyDkaP9MURwlOzlP0Hf/k+oypXLbJA2tBuPsQJ1QIbI2oD6mVJhtXiK7ArSryj0RIU/aap4dByNheoY60tn6mifjtPqpV9UqmIMRPOW8+Wpe9SZaZq9bCjna9vOf/o7PJSvxuqH2UWBnRxvmCgCVKjFytu14Rp55ncOBjXKgfvfbzwIAjA7Tub
QqbSJOdZjMkDPrLbf9ICgrlWj/GGerrqk2Z9u0VafV4eM2FFf3Y7uStNnL3/CGN8j9VBpZ2LDKIXXzzTRO2LbTrC+FeP/xcWmrLQlitTXzYv2//TIHi6gU4i8/mvIotnO29I3KqXe/Aw4AAFx3/bV0vYi8y7YOCh/euplyd+mxKJPnd6ccQu1jCjOTqp9Rseg/4dCLpwSqbvb9NlNZt+8hyEWoYN+bzmt1xBFH8Onp/Ev65JkNjhCLYNvorbfeGpT5dWpDv/glZUi3/RMAnv2851NdONw+1CIUjB+jwTxXlXHuqh9eCQDYPrANANCrgiTiPM6d9qZTAACdXZLLqzRO57Ci9GVI+4qGmAXj/yuqv/jMhORUgEKaFZ3Ht5Cj8Ze/cGlQhinq46kOGo93VkWy5MyPfxgAsKSvnyshH6vcBO2XDjUGRgFAlr8PCZU7LplkR3XucpNTwmKed95nAIgz+8iIyOjEYtQPk630zS4VpQ8nEsTOffzfzuVryLyhxh+kUlHYnljSjrP0vCIxaWPlahn1eQYzOGbIwcHBwcHBYVHjCYkuDmwTUaywTzO63AStqsoFYRc624mFSLXS79EpWdmf+s73AgC6elcG2wbGeEUdpVnjhGJl9t9vHwDAEAvwfePyy4Oyv95LWe4PP/xwAMD0pAjZ7b03sUbjbHcPKeuutSeHdZg+rxYmp2lVFgrLaqV3GeUDyuVpdn7Cm/41KBsZpfu/665f0j37IvxUKlsBK5q5dqscLaNjgwBkBQ8AnV00W27l0HXtM7B27VoAsoqKK3GwUfated/ZHwCgfFAArOTszQ8M0fP733vuCcqmp+h59XaJ9EGG/YF+cecv6FwqM32J7eTL9lkNANiyc3tQFqzAeXX0za/Ie0pyPp6lKVrlDW7ZFpTF2qiNhNS7sOGTNWYwQoop3Gv1oagAqO92gbmQH0EcA0OShylfomecaqO1RjQqbERumpZQ9TLdb7mgV8O0/0knHx9s6eyh/a/+wRWzrn3aKWcDADgtIK69RlbBufpDAID9+B3t2Lk5KOtZQu1rcJDaXjQs9etoI6byVS97Y7DNr9HKLBaiOre2SsboSrBapr5rwtL/L1v7HwCANXvTOTMl6f+WzbGZsPWq0QrsWcE9ve1f/uVfADTmtvrTn/8MANi6me7xlFNOwUzccsstABoZKJv7yLCDhWYf6swCD2wVFrcjTf3DdoVjj3lVUJaM0zO67dafAQBSfRICfyDfRyvne7rj7juDMptHKfCVC8n6tMRja0gJ2dnq19kXT2et712yZo8QXQyZkJ9AK/74xz8G22wew5YEjVflgvgLZpn1t6xyPitj4bve8U4AwKPrRMDxuYc9FwBw1fevBtAowHfCGyj7uWUjbBsHgDt+/RP+i55xPCnt+NrrfgQASPfQOx5RDMdzXvw8AMDNd/5XsO2xrST0uO9B+9J1BnYEZeUMtfP+dmJePnj62UFZVwuN/QVmbhJpsVRkd1BHTrEXkBeXsTDXQdv+NCTf3gTLAXzvksuo7lMyRn/yPfQNCDGrft63vhaUDdWJVfn0eZ+l8xTkuFZ2LkoF+RSlvyxbTQzX+of+GmzzQpwpntvqV75ycVCWzxHLFGHx5aQaN0aGyUcrlaL+n5lWofVsebr4oi8BAFpUmH+JWdlieTYzVGbfIF/lJjvw0ANQQX1e3wnHDDk4ODg4ODgsarjJkIODg4ODg8OixoLNZCEAWx5TJgGmEyMcRrhSObN57Kz4+//33wCA//z29+U4Dp//0tf/M9jW0kHHnnLKu+j4sDhCjQ4RrdbDJqZvfkPMBjf++OMAgLe+9S0AgLByHPbZY9Xmw6qVxVHLOtlZZy4AyGWJritVbDiwCg2PE1135MvIWXL1mv2CssFheg4/vZ1o8nhe6PVIlHO0sSkslRbT1iRLEmgH1zamah+6714AwM033xiU1Tis0Tp69/dJSPdJJ53M90V0vFU5BcQ5tL6EaNmEclassgN0IiKOp6ecfCIAIM354Y4/8fVB2WODRNX+8KYbAABrnnVAUFZgirhWoOd81de/FZRFOZw8zHmAlqTFXDjJYfYh5fxase/MOo1G5RmtXHPwHmMmC6MVg8pMVgOZfGIJus9KRTkhTtH99XUdAgCoK9/fk95K1PZ0VtrOj278BgCgVJVcThaxMPWXt7yRaPh0izhqXnn9p/na9D7qNTFfDY+SefLQZx9K/w8MBWWnn0rnSsX6g22Xful7AIAlnURHv/a17w3KchlySm3rpLbW1yft6tLLzgMAFIpstk6qfsnjjnWg7uwUE63tl0NDUi/rMG0dqG+8UfqENc1ax2ntiHvkkUc2nP8nP/lJUGadqYs1av92jAGAVAtR+l1pMZ2Bx4S3vIHGmb89KDka/3LvfXQuDvvuVCY3JGgce9HzD6P/VfjxL3/9KwASQFFSqshI0PN6ppnJPOP5EcTx8F8fVtuoStOT5PrQpkwmbex0Pj5E/ebUU08Nyqo8Xuvv1A030LhjzZw6V5Y1jx111FEAgK4eeQ85n96vNZlOZ8SpuLuH+tIkB4XE1Rh9+ZVr+XhxoC6H6D0V2EysHdnBppzuFJlKp3dOBkXvftu7AQB7ryT3jVxBKdBn6F5bWC66qmRGBpJ0zupecj/nXfwFAMDgAzT2XHuBmMKWhaifZLh/ba7LvZ73bTKr5bns/Pd/NChb00Z1TvE3Yd26R4Ky5cvoGfX2qUwSgzSWWBWFT37yE0FZlZWhp6fp/ltb00FZMkF/7xwg0+BBBx8WlJ16CgUmrdyLntGGDRKM0NNN37tiWUzuYibjwTQkbeWAZzszmYODg4ODg4PDvLCg0HoLy3QAQC87ClY47G7HTnGKbWGnwhcf8SIAwAuOOCooK9dpNrdphziqeTGa2X3xC5cAAD7wIQm7P+AgWkkPDZBDXDgqVX8rO0z+6u67Gq4HAEnrhMYzd98Tp2LrOD00KM6bbZ3WSZJm4skWWcG8/JhX0321kuPa0Jg4aifT5Ag2maHZ9pIGgTV2IOMV7+iorEDb220Ir8xL1z9M2X3vuINCQKtVHXZIq9lPfvKTVIchqXvvEnoXO3fSM+rqklVED+fXGfNolTo1LnVvjdK7KBRk9XDjj2n1NZkjx9XhnLynFxz9EgDAm95Oz/3UM94pd8oCWJ0ssFXRIdMcOhqNWnkAcZA3vLTQebI8FrBTEl/Y0+DDoA6vgYWzYc/VGq1cNTPk1xuFCx99WBztM9Ms5AZhRMPs0B+yrJgS9TP1KJ8z3HA8ANQqtK3GyZoTSVmV2aCCTZvICdQ6/wLAf36TVo0tEWGG0iyzwJHeuO4aYWXPOptY2fEx6vePrZNVXJqdQ8P20ah0R5btKHJOorwS4JycpGei81e1cD+0LFBR5bCrVun5WkJYly1fTgEaVtBPNcfgmj47UO+33/5B2cQYrWZHRyXcPhWnOvztb7RaPuiAg4OyGrMBnZ10z9s2iaOrDYToaKGxZWBM5bjiIIcUM95F5YhrnrFrVQODUPBuASDO/duOgZpJsQEiVnzRiioCEopfVQ7zcRZjt
WyRlnqwTKPN4aavk0pRe8oye71m1b5B2RAL+3Xw2J4tixP3+CCN1+84+3Q5Vy/1p+0jFDxy++23y/6jNLYODBDD25vqC8q6WDqhzOKLlt0CgLYE1S/JTszTSiwwzw7nwyPClv75EWLenrNiFQAgkxH2t8zfvRoHoUSVjMkkd2TDshHdfdLXJwap7kn+Hh166KFB2XSG+oJmbFNs0TjzjLcDAK78nlh6CswuWfHcYkHux+YpneAcljqnXlc31WfHDuoLe62S3GT5LJ3DKIOWN0voVH8nPMw31+cztbc5ODg4ODg4ODwpWLDooucBeRU+X2K7YDuvHhMdsgIt8UxweJhWQtM5YWWSHWT76+yQ1d/IBK0Q9t2fyrTY2PrHaMVppde/pjKWn3oqzfAPY+Gs7h4JVxwcoGtbEcFkQs45MmbTSYj41iT74iSSVK9XHPPPQdk9v6NQ0aP/meowOCLMRh/pvA4VAAAgAElEQVSnF4nxceG6rPirPDsPcchfokVW4iFmScZVKOd1P74OAJDPW2ZBZtRJnolv3UEr8URCCRcys5Pmd1BUx3ns+9TdSSvQbYqV6e+n57V1o/i9ZMaovG85lUVaha0YZDHNaU6lsmzliqBs3WMkHve9b1PY68QWCW29/Tbyj3j/yeQTNjWt/GCiQQIE2cYrP9PErW1P4ogMQqgp559ahVfEIVoZpdvExh6P0LsZHqZ2dtll3wjKwiH2fQjJe7Nig11LlA4/Y2yYVmrRCB9XE9+Jf+Y2es/vSOph6/a/BWXLl1MfWL2a+tkWDkkHgBXLSMJidFBW57k81SczSe8moUTNxkfoHbakqE3fecuvgrJN64lB6epmMdWc3JdlA6wP0LJl4vu2mevziqNfGWyz2c5bOBw6nRJpf8sy2XB9Lc5Z5KzYMZY3eMPr3xSUXXcd9bN4mu7noYfEx2Up++JVlTzFiae+FQDQ0UZ9aHC7pAn41hUkBvuB970PALBivwODMvAqfcc2Wukm2sRH8XXHkYzCf916M9+LEsCcp1jcMx3eHLdp/T9LitKz0gnf/S4982JetVX77i3zkhfmpcjsSjxGfXB6XMrSMRq3x9j3c+23RBLkgk9RCHopJEzXBMuRPGvNQQCAxHHyTn/2M2KJJrmPRyJS5rHkyGYWSlzaI/IRYAbZZxHEeFTaQowZ4i9/9cvBtn0PIl/N8U3ManWID2aizIwa/x9XYfpFFnNsSdLn/8z3nhWUfedzdP5Skeo+nZHvRDhEx+Vzsq2zg1iv/mX0+9TTRNaij31frS/fe9/3/qBscJjYpf32JXa1Nibvwo4NfUupD27fJmxpR5u1dszmcYKMRXrbrL0eH44ZcnBwcHBwcFjUcJMhBwcHBwcHh0WNBWetr9cBnSjZ+iMXS0SreSpPkHWaa28nCjLRKgTWFDsvptrEpFXmrNivPY5CxOvKKbWPabiTOeSbhWsBAD+85nwAwCkn03E7dwqtlmAFzrqNz46pUEZ27EqnxVQ3OUV03VnvozDnH90gobjrN26i58C5nPr6lwZlBaY4SzUbki/3GuWs7jbEMBaXOWihRM+he4ko1o6xubC7m2hGzxNT2Ps/SDS8dZzuWyrOeVbV0zrNFvJilli+nMKut2wm81p/p0ggDG8nU+LeS8XcZZWxNz1MTrZZT87VtS/t95GPEO25cUied28fPZPAvKNMRO9+J5nHvnoh5dD5hHKQH1Z09ixYp+o5MoLvTvioQak5wIStSiq924rKaeT7LcFRAPDIIxK6mmDphphypk+nySyUL3C4va/L6B3a6Gqd36evl96DdRxOpcU0O8kZuqeyVK/WlNDxA4Nkjm5NiDJ8nh2aV+xNbfS4o8+UOrRRnescnl7Iizlj1YqVfP+c9duT+u3/rGcBAHaySWz71q1B2ep9yezdt0TGBisNYR1iy8o51wYk2JD6/n5xCB1jZWurLl1XWczt+TNlMi+vXrV3UOax6rlJyfMeY8fYiMeBFyNiJvvIJyig4VMfoTZ94cWixDu8ntST21hlXduFptmBtMzvrmcvkUcYnRZH62cSfPioodYgcWBD67M2b58Kre/qIvPO6DC9qwaHaDZ7aUdo68D7xjeSSrrOP9beTuZT61ytc6H99Od3AAB6uC088pD0vVZW8u9kJ/cLPn5eUFbnEI78tASYdLIz9uQOanu9KQlWGdlG43ciSn1jYLsEuZTZUfhZBz0HADA9KmH3YTbqVPPWzCsf2jiHjw+NijtFiFWfO9lMPKWc/dM1+u75LOswOSZ1TyRoLEgkW3kfcU5Pchb5WpH6vJaICHH/Taq8bQUOcd85QI7kq1eL2W9klMYsqxZ/9dXfC8psFocz3/shAEBPj5jJW3jMy7BzebvKjADpvk869syvi4ODg4ODg4PD04SFh9YbwKg8VYapIZ9Fw6o1WbGVy+w4zOHcZRU+F4vR7M8yEADwoY+SUFyCnZ3rKsR4+w5aOZ7GuWq+/c1vBmWvfOWxAIAfXns9AKBSljq8+YQTqF7MWI2qPF9LmcXYphy0VqykML5f/5qEIpevlJnuMOdOyzGrVVfh8140zvdP9+gr4Scb7mkFwfJ5cUCLxWmWPTwsz8GuYn2e+U9PC2vy2fPoGbUz42LFFwFhFs46i9ijjk5hvOxqqiNJ91xQx6UitBrYvlFW58v3XwUAiGeozsW43OvJ/A4KEbrH3m5hmeLsqPo+dsr7ybeuCcpyU7Q66esmhqGak/ckYcR6fu7P+L0nzt19UOimPB8rHVBiwc+CCjiolej5tLVRzjC7KgaAAq+W6nXpXxF2LC/mGzOzA0CSWU6bb0uvnrNZetZ2hRxWbFOYV8sTU7T6W7JS2IhQFzmXbnlMwme72khc9NijKRdfKi6MzcgQtduOLmpD//pGySZ/9TUkVtfSSvWcVE771qF/CYvd6Vxjj9z/ANVTORNX2JHZsg2eEkNds5oYHet4OahEJHu6mf2x+a+UuOMYr6SrYTp3LSzPPcR59DwjQ2SU65PL0mp4//3FSTo3Qte+8IvECH3pws8HZR/l7N0+5zvr7BVauzBEfcCODRM7pe5aiuCZBAODiIk0hNZbJ1o7tuncZNu3E6vQwbnJrrrqqqDsTSy2GVZirH1LqM1Y1vNVr5IccdYhP8Zt/GYlzpnNUBsbrxIb098t7T7LYpA2d9jAiLBy4xlidnpXCXufLdKYvCRN2zZs3xyUdbOYbL1GbejfeMwGgEqe2tj2rRzY48l9hQ2NDSEeSmrSHOHzP6tXrw623fPg/QCANWvICXlvVWY2EWNVYQHHeEKChGw+wEdZzPAlq6QdD7MsS1+ajtPh+rkstfF0ShirtWtJBsd+szPa4Tpsxxy6x/EJCZjhTyIuueQiAEA0KoFXn/r0hQCAbI7Z0l6xftQr6qE8ydgTvy4ODg4ODg4ODk8bFh5aHwJKZfGBCDEjZEPrfZ3dnENpjaFVqhcSv4Uar6S7u2W2fflaEnw78WSSLO/tFx8Wm4E9xv431aqyc7bQtY877jg6tyr78Y9/DAA4+W1vo/pitu1Z+xjY0P2/PkCr080q
7Lh/ufgUAEB7p6zwRsZolW3F4aJ1mT1v3UrnOORQmsEPD0v9bMaR73xXhOymM3SueJxej07V8c53EitjQ4y/+EXxTbA+UN9k1uztbxcxxPZ2zg48QSuynnZhJLI8Y08n5P3UeRWfZxs/jNj/r7icVvxv/xj5VU2o0NYKM2OW8bDZxgGgJUxM1Yn/SukMrr/62qDs2JNOxONjz52zGxiE4aGk2Mh6mJgDm63Z+q0BQI5XapaxsQwObaT9Qyr1QI7TA6TaZosu5qYzfC7qczW1lIy1MrvKrFEsLMdFOWQ3ysKlU1Pia1DMsi9anwid5Sbp/d56G/WlV738zUFZH/edySnyn+nukhWeXaVXq3SPOit8lX13hlnsrkXJTaxcSb5GmzZtCrbZDPaW/dEpKjZsIDmHVatWAWgMrbfsgV0N6+dtmbQ4+4tY5kbXL6zano3utuHvBZVd3fZ7y3a+j0PsAeCS88mn0WYEGFTsTyRO52ptYTkMlWYgD+lXzyT48FHxKw0+Q7Yd2nejx2E7VlRKNC5aYVkAuPvuuwEAxx57bLDNtoteFjBMp6XN2fPneUzSPkNJj9pYkX1ylvdKe/RKVL/tm4mx6exR6ULa6LhsUVgSywR77J/Z36WEC1lyJRKmduWpz2y1Qu2qjdmjsBo3itNUry4WwYV6Dn4t23DvAPCc55Df0eAj5Ac6sEPYrNUhvjanWKopX96pCWLGlrHP67p164Iy24cqU2QtiapUJ6GwtfTI9+uMM84AAKRT1vqTVftTf/oI+9FpqZyJCer3bR20rarY8DL3vbYOYgqtZQUAIk9MJ3pe2HO/Mg4ODg4ODg4OTwPcZMjBwcHBwcFhUWNBnFMIQEfVw4qQOBWXakS5TY8R9RtPKf3HJFGA2TrnO1FUuM/5V0ZGNwTbli+l/GPfv4Kcqs56z2eDsu42ou3LWaLtzjntK0HZNT+WnGcAYFQm57edSJR+kcP0vLBQj8k2okJf/s8nBNt+djvlNxtns1f/UjGNDewkij7OSsHRitxriKnXFnb2LarcNqv3pue1fSflK2pQA+XQehvSDwA9Pdb0QOaLcz4sIehD7LCaSBCFeO6/fyoo+8IXKItxiHOA/ee31gZlZ59N2ciTERtyLbnJwgmqT64sVOV4YJ5hpd+w3GukQBT01RdQiPwHzvlAUFZj1fGWNJ3z3HMkw/mV36PQyvWT1FYGe1U4fYTVg0tiJjDsNFkL0Zy9ojJ614A9Qp/Xh0ENESSi4qxuPLqvbIHMjxFPnKt7WR19+0bqNyoxNepWcVs5Yxs2i5m6bdMqN1mN87nVE/xbKO1aiRyFbVur+NLPKhw225Emh8vilJiqvSI981JY3k2klRzr87U/AwB+/ttPBmWvfPGHAQD9nS+meyiIWeIVLz0JAHDPn24CAAxkROHZmq2iMRoTxsclxDjKwQgHHiiOnevXS84zQPKRAUBXV0/DtlpNtVU2k8T4OWRzKuAgSdeZLHJW7aSYRkJsqihOK5eAGNH3+Uk2K6dknCnVqK/aDOq1olD7H/oI9YHb76Js6X5VjpuYJpOKZ00jVemDXpjOkVN52zrZIXycFeJDqgG1phPIZmXf3YsapgryTls4uMMGt7Qm5VuQY2fkJCs1a8X5AptIUipPnc/O83Y0GJ+Q6/T0Lmk47tjjZWy/9Y7/AwDkszT+7BiW4xKcHSAUo9+RFulLm8ap/Uf6xBw3wc+9wkEMn/+6fI9al5D57f3vIlNpMS+mo6VsDs2xeXiqT+6rXGSzLZvewsq0XeExsKroizJ7WnOXxY/uFImBc950KgBgiNv7WETaRT5Jbcz3qV5t7dJnJ4v0jevy6F2MDol8hM03Wa3J9yvJ79FmjJ/Oe2p/utevXPYjOvekmOMvuoi+VVVW7s9kpOzii8nh/JJLyDl7TAU9IUquM8kOlXmBTaKhKF3b5kQDQDlJd52wHoBjhhwcHBwcHBwWORbsQB2BwVRGZmqFCs0u40kOGW6XGdtEhqb43SlydCuoUMtojPaPK6fFfJH+DoVpVn7t9dcFZWec8W8AAI+dP9NqZWGzVNvwVJ3teedOcipbs5qE3EbHxaH3tce9BgDwlz8/EGwbGqEZamsrzWonJoRBCcKUmXnSzsHWiS+foefRmZTZtnUS7ezijMhZOc6GJFqxMEDlbWpNNFwXEKdBWy8tUGYdqOv12dmcrTNjjvPl2BUUAAwMkzhWu3J+jfh0LsPh84Nj4vSZ5v0MO0Fe9JUvBWWf+gzN6ofHrACkrHStc6O9hze8QfJE/eKmXwAAjn75McE260iZ4TDcggrHTadDmMo+dWGW80XYC6E93h44bAJAqM6rXxaWq9WExdi4kRiOKChcPRaXdlIu2RB5ed8tLfwMMjYsVdp2qrWnYf9QSMqqFXp/pSJt61oq8gebNlGfKMVoZdwWF+fPMq+3Q1Gpg2HHyelshv+X9/C7P1MustNPoDYwOihjQ+8SYkQnhunZtPfPbuM2r5rOUJ9qJ1brL3/5S7BtFWeutk7Puk/YnGQ2IOLAgw4KymxIti3TzrbWuTqa5nB9pZw5zUKTMSPvJxKzIq10Du0gbMPIsyzM16WCMjITnEOO+2cF0m77+jhsmJ3tt22XbPfLD+rn60g/tg603/8+SVZEwlKH6enCHsGWAuREbfs5AEwzk2WZum4lcbCJQ7zrIXqPLSp/5PHHU+42vWr/2c9+DkAca193wvFBmT2/FeIMKQfqlz6fclfe8ytqszFf3kOKmcPRDI2rUwPijLz/PsRG/GG9fCfaV9O9DfB1EsoBOG3baI7afWdcAm0MO9h38/dyVDkHd/N4VxklpqysGOIQh7rvs5eEz/83949/2psyy1tRWwC4/htXAQBedzIFq3Sn5HvZwvksrZxFTrEytTK9g+ERGhuWLpOwdmtN8JXy4QQLBO+1imQKOiPyXq2kiGVltTP7i170IgDA/feTPEBeyazYR/kIC/4ecMABcr1JlvIpqkAtfobxkMgHPBE4ZsjBwcHBwcFhUcNNhhwcHBwcHBwWNRZkJqsDyKOGkidOri1sFguz6u5UVukZTNFcq8zaKtmczL0iTJ0lkkIhZpnSr7Lqa0XRcY9s/CsAYOUKMnflMgNBWUcbm1bY2XTnoJQd/MIjAQBb/0bOm35dnBeXsHLy8573/GBbqo30H0pFoijDEXG49kJU5zY2f5RVnpQy04tWQ+fWn/xHUNbTQ+YM6xBttZIAIMLPTTuJWdOIpTG1qW7ZMnKAtuYCrcFgtTwinDzO5gcDJLdTJzuiF1S+rBbW0ZjIiXpoip3fxtgkGm1TGkTs2FdhGjcWFcfTbJF1RNic2dstOWesTksmk5t1X1b/RSvN2m2d7HQcUc6h09M17H4jGT3zQiGLel0aQ5zpYKuFE4ko9dw+anMxQ3RyWenK+Lw2Uf7/gQlIdHW8WWWex7Q8pO/Va1SHOOdHmhgXdfGl/WS+8qv0jstZcUy3ppxyWUy5pcI0l5FJpjUppqbNj5JD/7atpEsSj0lbSLVSX/KMNfdKe7T308P5waYm5Xp/Y42vNWtE62i
Cda+sKbyidFOsM/YSPtdmpcViTcWtbCbQbS7QHmLV/GmVeyrJmlvhmjhfTnIdQ1XatnHD5qCsbRUHlXDbLmTkflLWtM1mBhORcxo+/wjndtr/wP2Dsp1TpB+jTYj77ksBHe94xzsAiPkfAEaGJ3DTzbdhT8HkuDxr22ojbBa15mIA6GadoRirMb/spS8Pynq4zJo5NaxuzR133BFsO/roowEAy1eQ2Ua7Mizn/GF5NvnrfJD1LLWvNqsN1immuofuJ3PUi1/y3GDbvRv/BgC46QZy5ehXudbe+LrX0znYOXhqcHtQlqrSPba1UV3iYZXos9xo5PSUFlOU+/jWjdK21ywnPa4sj+07tm0Lyl7zGnIBuZEVuJ934nFB2eB26qvWXHbw3vsEZVb/yea8NMrwOj1F/fegg8Rs1Zamc0xO0ndCO33b75zVj9JO8/Ydv+oY0o/64Ac/GJS19tKzXL+Ogj7+ep+YJ//lhLcDoHyQFnbMslplRgUV1Pwa/Hkajx0z5ODg4ODg4LCoseCs9VUALe0ymy1zKOjoKM3AyxWVtXoNOTIODtLMtad3v6Asw3lHJiaF2fDYGdBjZdzpgjAVBxxCjmMehymHWmW2NzlBKzrLpCxfKhm3H/4DhVN2tNMqoEc5koIdW9OtsvKqs0O3nW3q/EjjvDrNZfL6cDpHmpxD48zKlBVj09JL156eohlrNKJCctkp2KqBAkCeV+LWEbqzXdizwQFivaxzsc7pVOKw9gQ7p0fDUvelfcTQDOygGXxVOTav2Z9Wm8NbJHfM1DCrBi+jupd9WYmP8qrX43BU7Rg3zmzOyuUr+J5ltW1Dng07Lh6477OCsmUn0P3fdNNNwbZ3nUnZ0Qc5U7OnHOQ8D1Cp4XYbfNRR9ktoUSrh0Ti9h0yO1Jx1/rg0h3OPDZJzYCSuFI6ZQa2pZz2VodVY1xLLUMqqZ2yIVt41UBsKR3UyK6pDpUbMU12tpDgKFrUyt8ewMD01GzZbE6YrFiZH3lic2lqlImzpiuW0gkx30KqskBPGcYLrDg7rrag2l2Nn+tFxbo8qVP6gQ8khNJNXCs+cydo6Kmtl3Cl27LbO+q1tcj87mCWOx2N8Hbkvj1nIQoaeVX+fOIsWc1S/ScUkLemh8iIrT3uQfpzZSddJtbRynYSRiPE7T/IquqjGhkKBzvXlL1MY8WfPP0+O45W1ZXUBIMrO1Ok0XUcHY1x73TXI55/CtN4LgAHQ0ykq90MDxCC2sBq7DhipsAwBE++IJ+T7YtlhncOvzhkGivy+LVsIAHfeeScAcdDtWyqO7IPjxND0LSNW1jr/AkDY53GUMyhEVD9r47b9h3t/F2y79qc0Tg2zw/XH/v0TQVm0Tv14eRe1l+w2yVrfn6RrZ0eYJUzKfRke761jvq8DIri/LFXP1LRTOy9M0rNduUy+IdhJ36o3sLTAd3/xs6Cogxkhy85ODEn9fDZ3DI7Tt8Cyp4C0uaEhsbzY52W/l9Go1NnmIkuys7hR+QStirtVZbfyGAAwwnn+jjySJHP0+73hhhtomwo8aUnR2LD/gTS/SLbKWLwQOGbIwcHBwcHBYVFjwYk+agYo1GSl63POqrYusoEWizKjLhRp9vbes0kYMBSSFeVRLydbYUyt8N7+dsqKPcnMy3hG7MSDY2QrjSdpBvr1y78WlCV8mtl28qy5WBC2ZMUyYonqPAMd2CHZ4a+/nrLc22zhABBhEbh4gmabIyOyKmvvJJ+EN72RhBxNSB5fnVmP66//IZ3Heywo28a23KVshx0bl5l4iUUGTzxRcnNdxgJe1ndIZyO3jFCQBVrNqO1qa2SEzm/DkQFgcJDue9kKsg97yo/lkQ3kT3XueSKmZ0XHvnoFCTdOZmXF77WyLTjw25BVfUdPN9eZVgEXXHBhUFYp0P7Ll9MKZtPD8oyetYoEN3M5YQO2bKEw40KZ89gtEVYvFAIqe0AcsU/cUPAeAcAPsVAcr1jCKvy5u51Wqgn2nfj+D74dlJ34llMAAPW6POv2DmItq1XLkhpVRu29XKFre748u5t/fjkAIFfaDABIt8uqLMzh4hu2EqtwxqknB2U9neSD8t3vfi7YFo+wX8AonSui8tRVuO/U6uzzlZDn8MMfUzuOJFhmQTGINnu89UOaUBntp5iNue46kdY49dTT6e65L2gpgy99icL6A2ZI5UCyWb5LLG56660iTDfALGuIWSMtzmYZTb1i/faV3wUAvP44Wm23JGU8s0yVFZ/rU+H9g49R/4pz3x1R/f++B+4DAETYp+msd0l49EVrbUZvYUost5bhPFaZsozFhUI18MvbnTCgVbbOWm+lEOw7qhaEHQsxw2ZZOy2bUGPLgx4XrB+YzXbfkhJm346PS/uon4UVO15up/NuzXNerHadd4vOOTpM5yyoUPnkSupnXTVhKHJxKv/Iv50LAMgUpD0u66Px7dd//B8AwAHLVgVln/7a5wEAp73lrVSnvPJJZd/LCPvd5CryjGr89yc+cE6w7b2fIDHeQ/ppnN+xWfwC905Tux1n5vHFzz88KPv1QySeGmMrxgUflnOaaWq/9ltVUXWwEh7aX2d83H5rVvL/Yl2wshFWssKO54DkGrTvWsumVHlg7+rs4TKpg80dqutQ5Ubfw7nmiirfmwkEYHYNxww5ODg4ODg4LGq4yZCDg4ODg4PDosbCFKg9IJoAyjUxK1kKq81S00aoxBonUslliP7s7hE686EHyYF03XoxleSYavRCdP6//u3eoKzKJoAwK+O2d0rVvQkOyWfHxEpJaDIbsh7mEMbePgn1ttR8VDloWefmcomuF4/JdVqT7HCZJ8fQRFxCBXcOkkkvworNceV4XSmx7ACHuuvcO3HOSdTD+WwAYPlScj5+9FGi17/8JVF4/tjHPgZAQuq3bN4clOWzRJ3H2ARzytvE/GHp88FBUlat1MVJt6ubnOSsoycA1Dyq5JtOJJXoaFro3K9+/TK6/1a6//ed9f6gLMH3PTxEdOmyJf2qjO512waiSztS4py3Yf16AMBJJ70t2Pbn+8mEcNAhh9HxCalDpbJn5CYzAMLGRyotppnJDJkk8yVWYI1KvTMZ2jY+Tu2rUzn7pjif23RObB1VdhYtVSwNL6RvLEy0uvHoXaZaxJySrz4CAKgZDrtV5uHWJJkbDzr4n+h4X+qQnaQ28Objzw22JdJkTrv2+q/TdVKyhjrhtHdT/VhaoLNL6j5VoPYbDVFbGB+R61gn0ba2EP+WsaHKQRhr10puPasobFWNtbr6F7/4RQBiStFmlle8gkJ462z2OPdcuS/rmBlhp+eBHaI6/JVLyMQ3MSK0f5pV6UtsHo6FdSAJbTN87eG/PRSULV9F/Xmcw49T7RKGPc0O0JVpKvvK1yTH1eg0mXP0OBNik1krO4nW63veetaDQTKcwI5tElLey7IHEdaNGFJSIqvZZOL5VKZlE6w7wF7LJR/mjh0UGr5q1SoAjfIi02zOtyY3rYBf6qHzl1rpmU16cp0Yyx2YXnrHmbC044EctYELv3ZxsG28Qm1t7Q+uBACMqLxbnDIMa3
rpvd/401uCshZ2rfD6qb2nSiqTQobG70KF8xF6Ku8c95d4tzhQP3d/CkDZyd/Sni4ZTwtsRu3qpG3TNfmOJdjUPDlIJq6IygLREqE+sXEDfZd7e8VM3JaWulqcfx59C6wjfzQq1/nCF8nUbgMAli+Tb++OrdTXrBxGtSR1+M63KYfl5Dh9n3XQQ7qTTG819f2aZid7a5bNq9xk/rwD6x0z5ODg4ODg4LDIYfQqalfwjPFjHvDoo3cF26JhmuFGIzQDnZqQWVy5TLPMiy+hFd7GDbJSSLAA08S0CLEtXcGzUMOsx45Hg7LDnksOifvtTyuEI458YVD2HJ4t2xnoMmZWAKDAoeeFPJ3T5hwDgEEOKVyzRjLTDw3TOawDcGuLrOI2baEVybvfTSHfdeWt2MLntaF/flWcxewqeCcLq3WpGXyEmahxFeaZ5yzHt91GKwq98vnEJyiEc906Wg3cdZe8i4EBFhPrJpbpTA5NB0Qwzt5PvFXYihyzD6ed+Y5gW4XD5qucrb4allVKMkUr6RFmf1atlFXbwGa6x6u/Sc6mXkFWZpUpWq2s6KUVQrWsQsg5XLlThY6Oc519u5oKyUr8sMMPx1S2imqtPl//uKcExhg/hDCmpyXcdCpL774Oqn93t6yutm8hZmh5/7MBABOjOuSd2tNJJ78+2NbZQ+uVq39whb1iUHbaycTIWX/ca38ggnum7Y8AgGW91NZ2DEvW+niEVmOlaXoPp/3rF4uMrvsAABmiSURBVIKyCFjUU+VtypcpACASo/vJKXHOJT0kblcqUpv9xrdEbHTVIdTu81liSZKtLwjKijna37bLsGJZSkVqF1Y8FAAuv/wbAIC7776b95cV6Pe+RytJ6zj9LuWEnOOs3T/5CfdLNd5ZZmiCV+SaeWxhNub8z54XbPvsv1PevTKHgr/mVccGZVas7lbus+19kvvviOeQVMAYM0OPbRbBwVyO7r/KbLsXFkf3eoTegR27AKDIrEk61c73I/v3dK+G7wO+P8803U8Rwl7IT0dacdcddwbbLKMX5nE1pt6tHbfrzAB0pCW/1TGveAWAGe+NmdY7f0H5DLVz9QknkHO7dbS3QnwAcPkvqQ0M7KAxqlexLOU8tZPuJbRtw1bpL9fcQHngyjHhDkoe1dVwbkmbtw8ACtOcRZ3lIlZ0SuDHmaeSaGA6Rm21wxfmPLedc1j6xK6HleN8sY3u+U/Dm4Nt8W5qA9/5MjP1k2Kx+fzHKWhpkNvcNXffHpT94o/3AAB+cPX3qQ5VaUP+MItPtlH/0vn3LDs7OiaZ7G+8kZ6ptThoccwVK+g7fN555wFoZHNtPtHTTz8dAHD11T8Iymx7j7LVRPfLsay1HgWbgqz1MZZk0A7Uzz3iuaiiPq8+4ZghBwcHBwcHh0WNBTFDIWP81hCw7tF7gm11nsUW8yxWGBHmpbePwlrvv58Yni9/+atB2QSvCLX9PJMllqhYoVn21d//VlCWLdKM1bDI02/v+XVQ1l6mlcFhh5FvydKly4OyHKeHsNmxsypEfDkzQuseejjY1sHhyjY1QrUqK+R4jGa2p5xyGt2DEtOzz/FHP/oRACARlRBjy/RY4bB8QYQI43GbOkNsziv3p/DmLewzdMUVVwRldvVrxRZ12P3ZZxNTYENxx8dEMM6yRWWeRYcT4ieV4bDwRIf4vZz0ztNpWycxf5N5EXcL84zdrs7P/9R5QdlqFrwc20o+Kn3tsiryOCt7le3ZMRX2WozSfTT4C3DYdZ2XAVrQc+8DDkMVu38VbIzxDYCRUQlrrdRpNZZIUhuyfkIAsPbrVwMA3n8WyRhEQrIKDjPz5YWFCXj1a19K57ILXF/WLwVOb/Pzn5EYXL0q79REiDmcYv8lHUbs1+k6mQlqz0s7RQz12GPO4LrLuzGG21GI2m2hIG379tuImZycoHtsSct4UsUO/k0ryUxOVoZ2lWjFFpMqU7llZfN5uY71m/n85yk0+Xe/EwE8m0Xeth27ggWAW269GQCwF4fYr18nbLOVomhppb5hJSkAIMptuyUhzML5n/ksAOA/zr8AgAimAsArXv7yhuO0T0ORU67cz2kdNBueL9LY1d7BqVGqKr0Ov2rNguWZJbbsiJZt6GhfsUcwQ54xfgxhXHGZjFtHHUUCelZYUIdSW6FKy4BH1f1aGYg3v/nNwbYc+4hEwtTea2rMsO3Kirfq8P7hML2H/l7yOxkdFhbD6hsmEnS9td/6RlA2OEntKdqmmA1D1wyzLIMWCG1P0bv0uG2XJqXs3HM+Sn9w+HioLO0r7dN9J9lnSI+FtTS952FP2J+WHho7RrcQK/3di+X7WmN2qp1Tjmyclrb92a+QZEMri/MWd0hZT5Tq7ofo+IlJsVhYOYyQSndhWb0LL6Q+ob9H9m8r7jjW5Ht00UUkNmrZYEAYoZFhurbt3wBQYB+wZIu0e+vDF2UhxoKSOTnsBYehUC2iVt+1BcExQw4ODg4ODg6LGm4y5ODg4ODg4LCosSAzWcQYvz0E3H+fUNTWZGJNSJmMUiq2ucY4C31VORynOB+Yphf/iynt449/He2vQvjDnN3dqmBqp+J0SM4xGzPne/K/33QuyNuYafZNk338ueaQnHkcZbVt5jNWMrFBgq367HIzj3fTwIg//r1aRDnnlA2dBwBWQIDyo0OZ40MrvE0l70aNn4mtXkjVIcz5taIcyhmrSh2inO07XLfHqzw2htqNVtstM83ss/nIV87DBx5yGIpVoFbfzSYBz/iRsMHmLWJ+icap3sUSmUOSSTEBFQv00L5yCSlPhzwp+8iHiUL3jVD7lSqZJ+MJehHaAmIVJEImzb+FcjdgqtjjvuGJaRaGgxzqrHBeV7l8fH7+RvoXPDKBeWwaQF3McagxhV235m7Vjj2qgzFUh6oy/6md+Lre7G1aO3ZGn7v4YglztmPBZz9LZqyG/GPcflta6B5tWDYAXHklhUUbj9WHlXlt+VJyJP/whz4cbMtMkqn+0ksvBQD0KUX0415DyrjdPWQKr9elDtasYPPXbdwkzrl7rSazcpmdPssVefelGj3vpMr27Rs7DlpTouy/etWBqFaB+u7uE2wme+DPkm3cBnq8gh2itemjylnNh4fJXGPzGgJAnk1iIU/MtiEef6zjdDgkA1ec3SGsaVGbybQJBwCMP3sc9nlQq3lSVjN2LJRtpTrVOcJyJGVl0vL4OnYsXKKCQtY9SNnu99mLzLajKqQ8wU0mXrX1k+oV2XKYU7m/ynzbVgUgIVVAhK9th9+8PD5UuD2GmhyX5L+rYX5uTRNAznfbTDTp4/5cfX329yvL/SSqJG9qPIexuTJ11vqDn3MwSvUK6r4zkzk4ODg4ODg4zImFM0MA7r3398G2Gq9e7Ey/qhwHbVk0RtPSmmKGrFNpXc3W29vIQevmWyh/0NtOOjUo27aNVnQ2X4kOa4+EZEXHtzXHXexi/sezUllbLZQZ4hqokNdZs+YGxqcJM7SQdOxNV9SP9z8Q5zB6rdVmV0GKxEHFskWeXSmpSwbPlzaqBRPCdZqxC0OkWSMu4/1DqhLZV
nKMi8eFpcixoGeOV47VmlzogEMO36McqDdtfjDYlmqzwnjEvGxWwpjRCDEo1om5o00ypdsQ1Fhc2s4HP3QWAKCF88GVyyorOQcvtHLm68y0rII9y9hYRigkDvDBNmaIGpqjZY28otpWQAPqwmahRn3Wq7AjuB9XO9oT03urxXZgNp4YM6RX+Tb30ac+ReHEF14o+fAsuxxiD1md08wKPY6zg+vSpZL1e8dWqqsW7etst/nU6F1MjIpz6c03E6v9jndQ6LRmeCybbfM8dSppjRw7UNvcdjpDN7jvhRTzkWUmyJJfNiciAOy990F7DDMUhYdNj20KtllHeZsbTr+/F76A8mYdeOCBAIC//U0CWro76JkXlTN9Ksl9iL9dVrgWAErMBNkyXwXAtFsGNMixpcBjbp1/aya8zuNjA5vOR1uLhQ7vt0Euk2MsGVCSOhx4OEnCrPvDH2jDMmGNYtXG3yFVwQq38ZL6rNjx2uO+EdafEP5t76MS8mYdZ/ePK2YoxlWtNGVxm3yXZn2rmn275vqGzocZkuPD7CSts9bb0PpCKc81kDoc8rxDXGi9g4ODg4ODg8N84CZDDg4ODg4ODosaC8pN5gOoAWhraVfbiNOz/m0hX2j8StVqAhCtnFMUdc8SMiFo7Zhkgk1UFaJER0ckn9KyfqKwPUPnGhsTXR6TnEHNzWnGqs1Rpsu9Gf8vEHVRzZztCK3r6zfZxzTZz2LGvS2QEa8yb+orM6M1VTakOWJq2PqPew2X8Rt+a0dosI4I2KSlcyfVbBlfuqYcGK0iaakknC0z60iyDolWoPbM/PzLnw54YaA1LXpZYywJnWDH6d4+cQg1oI4SjZBT7MaNolT+yc+QurjNowUAF/zHZwCIVtUXviBq0ckEmaSGRnc2nBsAErZD2v5Yb9bmrKemXM/47PjvK5rcn9EO69pUR/vV2Um6cd/GtunX5zD/Nn2X5nHLm5nJrAaRNTcCwAc++D4AwF57rZx1nHWuDYfpeGvWB4CWFuq/2qHf9r0MO/WmO2QcPPY1rwYAXPA5MtGdccYZQVl/P6svx+hcG7eKJlVvLylV59kkHFJK3BV2DNcuARU2i6VSZOLzjLxzbw9Z2hoYREwkyAsJAEnOY3jsq18FAKipfFi//OUvAQCtnKcv1SZ9KcJtXAfalH16T1YkPQql1MzPLMIO1LGEyreY5TGJ/29wVuBm4fNYVlPmH7tNdUtU2PwaZoftlqS0hVFW5m9ns2q4TWxbm9mBuquPvmdDdTH/WSucbaLa7GXbQEO4TDC2spsDNLyGe9VtKOj+1veh4Tr0uzqzzzdg9rnmdqC2ZV6TbfY8c33HZN/pcXKvMcpkWbHaQ+yWEIkob/EFYA/pPg4ODg4ODg4OuwcLcqAOG+O3AtiqMs1HONvv5BSthosVCeGt8ay3u4ccBsfGROnSrhRSreJMOD1Fs/8Chxvedeevg7IjX0IKpr1LaJVdr6t6h0TZ8mnBfByc67Mz/M4ZkmiarNybX7zx36aOp48/x43xisZvwvQ0OEmbGauoOSbumhnyeEe7LVQPq/0M7zP7OKS5HU2KOm+pQjP+ti5ymi9XZGm2at+D9xgHanhAuSxtcGCAFGFznH9Lh0ZLziRa/VoVXaDx3i1sbqAlPaTY+p6z3hOUtbAj6TnnfITPqdpc4MBrQ4bVstYyQsEScVehsjPLdUOx7zc04/8mZ/TKszc2DaNtlLeYXS4K7ICseq2jsXWMBoBMhsLhbRDHzp2SmX4vzpaez9C4o1mjfI6en13dA8JAZXh/HR5uuH4lzl5/+eVfD8psLqdzzjkHANDZKSxCocCMB9Oc2oHa49V5oSAsnc8UbXsbOd5qVd+VK/aFj93fJ0LG85OhGH7zm98E26xz+sQ0vQ+9es+yHMujj5I8xRFHHBGUFTkPXDImDI/NfF9mFetETIIurPxBlNXt9XVqFWbF+X/9lAKShN+DDsRuFpRtWUT7vqwSMwDss3oN1SVLddHf2FHer7+fcpJN5KXPW4dpTkmHiFapqFupF9lm/64FvxULbxr3qasyG/BimaeoGhpsHQqRRp6pEc0CgBhzfRubWmwWFlofj1grk6j62yv29hPLWlEMb/9e/c6B2sHBwcHBwcFhPliQz5AxQCLiYf3mdcE2a/NOpsk/orOlOyjL8sw4zKuyWFJmmzb6cyqrcvGwP1CaM/Sm22WFt3pvmm2Pj9NMWotwGV/b9XeFuVmdwA9lXjPcx2dwfL+ZfbTJcXY1OlcoflM0C0luDHlvrBBtK5s5fKD8JnblGWduKGwiYWDD7u1EvK4de+zuzZ4fr/JsDjkAiLHQoF3xl1Reub7eLgyNzWZSnm54HpBsNRgYkDxH7W3UByzrqUOjbQju8DCxBTZHDwCEI/RcbB4uQPKAbWDfovPP+1xQZsO+164lFsL6xwCA51sWlq+tGDobku/b7u/P9otpbI+M4F2qVaPH/deG35tmYqPMMlZVSH6wyxNjhvSK3zJilvXRz9SyPZZ1e9ZBBwdlGzauBwB0tglTE5yfQ6bzJWlzYfbnCbPgW0EJv9rwXvt+P8MCkADw4EMku3DpVyl31GVfuywos4KRSRbv0wxDmFmgsMrhF2c/NNu/dEb7jo4UpqbnEqB9elCHj2KtjN6lkpF9cJTao/XBqqixprufmN9xZvF+fPONQZllAo595bHBttFp9hvh5tWRknHEi9C7icZs3jJpq6VoIzOkmXDLqtTMbAbGhtsrNxVEkgk+Bx3X2ish8n9aR2KTHR3U/yMx6V93/vG/AQBHHnkkAKC9VeXks76U/L/ugbaqYVWHut9Y14qKxbf1sls8xU5ZTcIQn1VrUQZ7Bb5ou2CNg4PttrkkZZqRMzMoLECNM7O/r3YsbcjXx75/27ZtAwBMZ0VGJBlPIqv68FxwzJCDg4ODg4PDooabDDk4ODg4ODgsaizITFbzgclyHW97xynBts1bx+c4gmDJrnRK5VUps6qnstrYv+tNrE8XTxD19bFPnA8ACCs2zn+C0e8WT+aMMFD+fBLP+WTi8Q1cjc9hDoPbnJgZttrMLbxZmXWBjKoWmWeG2z5LXecKZoaS7h7U60A262OffQ4ItlWa+AlbxFld2spO6PiFj370bADAxRevDbYlE0RX5/NKJpZhrW9WmPvytV8Nyop5fmrzeBEN1qigTA8NocZCZQqr26ImfpCz0IxxnytWYI4yLdTMlkRY5rzapGG0crR2VqVos1Hz1SZiu9YKrywcsMogzcanFpYPaWsn08jOHUOz9kmlyBSmHbUvu+yb9IcNp1aPPWathdqabB89b4vH5YBisTrn43w64aOOvdasDP5vTZKzea5IL6uq4tStOcnKCzRY1vlZDQzL8/zRNdcBAMY4l1mD2reV++D/PTWClW0wAf9fU2VBbebrej7rQUvjTqfoXgucPNAqjwPAJZd9DQDwsqNfBgCoZsUReGa/jKr62eaug8bt7mWutNKMV/fjNe4MBEIE9lxNDOIoBlOD+QZXNBtomu1vMVewz8yXoBSobdYDtY8NQilWOWBLmcnzxXyD
IvVccMyQg4ODg4ODw6LGgkLrjTEjALbsckcHh6cHe/m+37M7K+D6hMMeBtcnHBwaMa8+saDJkIODg4ODg4PDPxqcmczBwcHBwcFhUcNNhhwcHBwcHBwWNZ5RkyFjTHbXez1zYYzpMMbcbIz5qzHmj8aYg2eUh4wxfzHG/FRtu8oYs8kYcx//PIe3H2CM+R9jTMkY89E5rvkdY8xBT91dOTyVcH1iQX3ibXyeB4wxvzfGPPtxrun6xDMYrk8sqE8YY8xlxpj1fL7nPs41bzfGzFYn/QfCgkLrHZ5y/DuA+3zff70x5gAAawG8QpV/EMDDANIzjvuY7/s/mbFtHMAHAJww1wV933/X31dlB4enFE9mn9gE4Cjf9yeMMa8G8C0AL5x5QdcnHPZwPJl94tUA9uWfFwK4As37xGuepLrvsXhGMUMWxpiXGWN+Y4y5xRiz0RjzRV71/ZFXfXvzfnsbY/6Xt13YbMVgjFlljHnYGPNtY8xDxpg7jTEJLnu3Meb/jDH3G2NuNMYkeftVxpgr+NwbuT5X8nmuUud+FbMzfzbG3GCMaZKPoAEHAfgVAPi+/wiAVcaYXj7XcgCvBfCd+Twj3/eHfd//P5Akz1zP8tfGmOfz31ljzKX8HH5pjOnh7S/gVcN9xphLjDEPzqcODk8fXJ/YNXzf/73v+zaz6f8CWP44z9L1iX8AuD4xLxwP4Ps+4X8BtBtj+mfuZIzZbIzp5ufwiDHmh3wfP1H3+xouu9cQ2/TTmefZk/GMnAwxng3gPQAOBHAKgP183z8c1Ajez/t8DcDXfN8/BMD2Oc61L4C1vu8/C8AkgDfy9pt833+B7/vPBs2036mO6QDwIgDnALgVwKUAngXgEGPMc4wx3QA+BeAY3/efC+BPAD4MAMaYC4wxr2tSj/sBvIH3ORzAXpAB+6sA/g3NFaw+xwPzpcaYWJPy+aIFwJ/4OfwGgE2w9D0AZ/q+/xzsuXqSDq5PaOyqT7wTwM/nuH8L1yee2XB9QtCsTywDsE3ts523zYX9AXzD9/0DAUwDOMsYEwfwTQCv9n3/eQB2q7zDE8EzeTL0f77vD/i+XwKwAcCdvP0BAKv47xcBuIH/vnaOc23yff8+/vtedfzBxpjfGmMeAPA2UCO2uM0nXYIHAAz5vv+A7/t1AA/x8UeAZvC/M8bcB+A0UKOF7/uf8X3/1ib1+CJoZn4fqKP+BUDNGHMcgGHf9+9tcsy5AA4A8AIAnQA+Psd97gp1AD/iv68BcKQhO3HK9/3/4e1zPUeH3QvXJwhz9gljzMtBH6z59BXXJ57ZcH2C8GR+J7b5vv87/vsaAEfyuTf6vr+Jt1/3d5x/t+CZ7DOkRfTr6v86Fn5f+lw1AAn++yoAJ/i+f78x5nQAL2tyjL62vn4NwF2+7791vpXwfX8awNsBcmwD+ThsBPAWAK8zxrwGQBxA2hhzje/7J/u+P2DrY4z5HoDHdZZ+AnAiVM8suD6xiz5hjDkUxAq82vf9sfnWQ1fpCRzjsPvg+sTcfWIHgBXq9Mt525xV2MX/z0g8k5mh+eB/IVTmiU/g+BSAAWNMBDTjX+i1X2KM2QcAjDEtxpj95jrAGNNujLGpY94F4L9935/2ff9c3/eX+76/CnQfv/J9/2Q+pp9/G5Cz9N/ju+ABeBP/fRKAe3zfnwSQMcZYp7on8hwd9hws2j5hjFkJ4CYAp/i+v26edXZ94h8fi7ZPgEx3pxrCEQCm1MTp8bDSGPMi/vskAPcAeBTAGmPMKt7+lnnc+x6Ff/TJ0IcAfNgY81cA+wCYWuDxnwbwBwC/A/DIQg70fX8EwOkAruPr/w+ISpzLFnwggAeNMY+CvPw/OI9L/ZDp2QcAdAO4kK/RZ4zZDrI/f8oYs90Yk+ay240xS3V1+XcOwOGGnEGPBnABb38ngG8zLduChT9Hhz0Hi7ZPAPgMgC4A3zDk+Pwne4DrE4sai7lP3A5ildYD+DaAs+wB3LYbqsu/HwVwtjHmYZBP1BW+7xf42F8YY+4FkMEzrE/8Q6fjMOTlXvB93zfGnAjgrb7vH7+767UngTvI63zf32SMyfq+PyuSwRjT6vt+lv/+BIB+3/fn0wEd9jC4PrFruD6xuOD6xNwwxoQADAPoAzlX/9T3/YOb7Nfq+36W2ae1AB7zff/Sp7e2TxzPZJ+h+eB5AC7nlzMJ4B27uT57FIwxdwF4QDm9PR5ea4w5F9RetoBWMg7PTLg+MQdcn1iUcH1ibjwE4Du+71foET0u3m2MOQ1AFOTU/c2no3JPFv6hmSEHBwcHBwcHh13hH91nyMHBwcHBwcFhTrjJkIODg4ODg8OihpsMOTg4ODg4OCxquMmQg4ODg4ODw6KGmww5ODg4ODg4LGq4yZCDg4ODg4PDosb/B4eLlFHS9zI4AAAAAElFTkSuQmCC",
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
- "source": [
- "import matplotlib.pyplot as plt\n",
- "plt.figure(figsize=(10, 10))\n",
- "sample_idxs = np.random.choice(50000, size=25, replace=False)\n",
- "\n",
- "for img_id, img_name in enumerate(os.listdir(INFER_DATA_PATH)):\n",
- " plt.subplot(1, 3, img_id + 1)\n",
- " plt.xticks([])\n",
- " plt.yticks([])\n",
- " im = Image.open(os.path.join(INFER_DATA_PATH, img_name))\n",
- " plt.imshow(im, cmap=plt.cm.binary)\n",
- " plt.xlabel(\"Img name: \" + img_name)\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "collapsed": false
- },
- "source": [
- "## 七、开始预测\n",
- "> 飞桨2.1 CTC Decoder 相关API正在迁移中,本节暂时使用简易版解码器。"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 19,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "WARNING: Detect dataset only contains single fileds, return format changed since Paddle 2.1. In Paddle <= 2.0, DataLoader add a list surround output data(e.g. return [data]), and in Paddle >= 2.1, DataLoader return the single filed directly (e.g. return data). For example, in following code: \n",
- "\n",
- "import numpy as np\n",
- "from paddle.io import DataLoader, Dataset\n",
- "\n",
- "class RandomDataset(Dataset):\n",
- " def __getitem__(self, idx):\n",
- " data = np.random.random((2, 3)).astype('float32')\n",
- "\n",
- " return data\n",
- "\n",
- " def __len__(self):\n",
- " return 10\n",
- "\n",
- "dataset = RandomDataset()\n",
- "loader = DataLoader(dataset, batch_size=1)\n",
- "data = next(loader())\n",
- "\n",
- "In Paddle <= 2.0, data is in format '[Tensor(shape=(1, 2, 3), dtype=float32)]', and in Paddle >= 2.1, data is in format 'Tensor(shape=(1, 2, 3), dtype=float32)'\n",
- "\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Predict begin...\n",
- "step 1/1 [==============================] - 10ms/step\n",
- "Predict samples: 3\n",
- "文件名:9451.jpg,推理结果为:[3, 4, 6, 3]\n",
- "文件名:9452.jpg,推理结果为:[0, 3, 0, 0]\n",
- "文件名:9450.jpg,推理结果为:[8, 2, 0, 5]\n"
- ]
- }
- ],
- "source": [
- "# 编写简易版解码器\n",
- "def ctc_decode(text, blank=10):\n",
- " \"\"\"\n",
- " 简易CTC解码器\n",
- " :param text: 待解码数据\n",
- " :param blank: 分隔符索引值\n",
- " :return: 解码后数据\n",
- " \"\"\"\n",
- " result = []\n",
- " cache_idx = -1\n",
- " for char in text:\n",
- " if char != blank and char != cache_idx:\n",
- " result.append(char)\n",
- " cache_idx = char\n",
- " return result\n",
- "\n",
- "\n",
- "# 实例化推理模型\n",
- "model = paddle.Model(Net(is_infer=True), inputs=input_define)\n",
- "# 加载训练好的参数模型\n",
- "model.load(CHECKPOINT_PATH)\n",
- "# 设置运行环境\n",
- "model.prepare()\n",
- "\n",
- "# 加载预测Reader\n",
- "infer_reader = InferReader(INFER_DATA_PATH)\n",
- "img_names = infer_reader.get_names()\n",
- "results = model.predict(infer_reader, batch_size=BATCH_SIZE)\n",
- "index = 0\n",
- "for text_batch in results[0]:\n",
- " for prob in text_batch:\n",
- " out = ctc_decode(prob, blank=10)\n",
- " print(f\"文件名:{img_names[index]},推理结果为:{out}\")\n",
- " index += 1"
- ]
- }
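作为补充,下面用一个独立的小例子演示上述简易解码器的折叠规则(仅为示意):连续重复的非 blank 字符只保留一个;blank 本身被跳过,但该实现不会重置缓存,因此被 blank 隔开的相同字符仍会被折叠,这是它与标准 CTC 解码的差别。

```python
# 示意:简易 ctc_decode 的行为(blank=10 表示分隔符)
print(ctc_decode([3, 3, 10, 4, 4, 6, 10, 3], blank=10))  # 输出 [3, 4, 6, 3]
print(ctc_decode([1, 10, 1], blank=10))                   # 输出 [1],被 blank 隔开的相同字符仍被折叠
```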
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "py35-paddle1.2.0"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.4"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
\ No newline at end of file
diff --git a/docs/practices/cv/image_ocr/images/image1.png b/docs/practices/cv/image_ocr/images/image1.png
deleted file mode 100644
index 8163e6d5df9..00000000000
Binary files a/docs/practices/cv/image_ocr/images/image1.png and /dev/null differ
diff --git a/docs/practices/cv/image_ocr/images/image2.png b/docs/practices/cv/image_ocr/images/image2.png
deleted file mode 100644
index 715085e1b99..00000000000
Binary files a/docs/practices/cv/image_ocr/images/image2.png and /dev/null differ
diff --git a/docs/practices/cv/image_ocr/images/image3.png b/docs/practices/cv/image_ocr/images/image3.png
deleted file mode 100644
index 66c7d3758f6..00000000000
Binary files a/docs/practices/cv/image_ocr/images/image3.png and /dev/null differ
diff --git a/docs/practices/cv/index_cn.rst b/docs/practices/cv/index_cn.rst
index 2a7aa441493..a432fec5e74 100644
--- a/docs/practices/cv/index_cn.rst
+++ b/docs/practices/cv/index_cn.rst
@@ -9,7 +9,7 @@
- `图像分类 <./convnet_image_classification.html>`_ :介绍使用 PaddlePaddle 在Cifar10数据集上完成图像分类。
- `以图搜图 <./image_search.html>`_ : 介绍使用 PaddlePaddle 实现以图搜图。
- `图像分割 <./image_segmentation.html>`_ : 介绍使用 PaddlePaddle 实现U-Net模型完成图像分割。
- - `OCR <./image_ocr/image_ocr.html>`_ : 介绍使用 PaddlePaddle 实现 OCR。
+ - `OCR <./image_ocr.html>`_ : 介绍使用 PaddlePaddle 实现 OCR。
- `图像超分 <./super_resolution_sub_pixel.html>`_ : 介绍使用 PaddlePaddle 完成图像超分。
- `人脸关键点检测 <./landmark_detection.html>`_ : 介绍使用 PaddlePaddle 完成人脸关键点检测。
- `点云分类 <./pointnet.html>`_ :介绍使用 PaddlePaddle 完成点云分类。
@@ -23,7 +23,7 @@
convnet_image_classification.ipynb
image_search.ipynb
image_segmentation.ipynb
- image_ocr/image_ocr.ipynb
+ image_ocr.ipynb
super_resolution_sub_pixel.ipynb
landmark_detection.ipynb
- pointnet.ipynb
\ No newline at end of file
+ pointnet.ipynb
diff --git a/docs/practices/cv/image_ocr/sample_img/9450.jpg b/docs/practices/cv/sample_img/9450.jpg
similarity index 100%
rename from docs/practices/cv/image_ocr/sample_img/9450.jpg
rename to docs/practices/cv/sample_img/9450.jpg
diff --git a/docs/practices/cv/image_ocr/sample_img/9451.jpg b/docs/practices/cv/sample_img/9451.jpg
similarity index 100%
rename from docs/practices/cv/image_ocr/sample_img/9451.jpg
rename to docs/practices/cv/sample_img/9451.jpg
diff --git a/docs/practices/cv/image_ocr/sample_img/9452.jpg b/docs/practices/cv/sample_img/9452.jpg
similarity index 100%
rename from docs/practices/cv/image_ocr/sample_img/9452.jpg
rename to docs/practices/cv/sample_img/9452.jpg
diff --git a/docs/practices/index_cn.rst b/docs/practices/index_cn.rst
index 08f43970778..fe8a731775c 100644
--- a/docs/practices/index_cn.rst
+++ b/docs/practices/index_cn.rst
@@ -19,7 +19,7 @@
- `图像分类 <./cv/convnet_image_classification.html>`_ :介绍使用 PaddlePaddle 在Cifar10数据集上完成图像分类。
- `以图搜图 <./cv/image_search.html>`_ : 介绍使用 PaddlePaddle 实现以图搜图。
- `图像分割 <./cv/image_segmentation.html>`_ : 介绍使用 PaddlePaddle 实现U-Net模型完成图像分割。
- - `OCR <./cv/image_ocr/image_ocr.html>`_ : 介绍使用 PaddlePaddle 实现 OCR。
+ - `OCR <./cv/image_ocr.html>`_ : 介绍使用 PaddlePaddle 实现 OCR。
- `图像超分 <./cv/super_resolution_sub_pixel.html>`_ : 介绍使用 PaddlePaddle 完成图像超分。
- `人脸关键点检测 <./cv/landmark_detection.html>`_ : 介绍使用 PaddlePaddle 完成人脸关键点检测。
- `点云分类 <./cv/pointnet.html>`_ :介绍使用 PaddlePaddle 完成点云分类。
diff --git a/docs/release_note_cn.md b/docs/release_note_cn.md
index fc56944eac8..781b1b76957 100644
--- a/docs/release_note_cn.md
+++ b/docs/release_note_cn.md
@@ -1,13 +1,13 @@
-# 2.2.0 rc0 Release Note
+# Release Note
## 1. 重要更新
-我们很高兴的发布飞桨框架2.2.0-rc0版本,本版本包含如下重要更新。
+我们很高兴的发布飞桨框架2.2.0版本,本版本包含如下重要更新。
### API
-- 新增100+个API,包含24个傅里叶变换API、14个线性代数计算 API 等,更好地支持科学计算类、信号处理类模型。
+- 新增100+个API,包含24个傅里叶变换API、17个线性代数计算 API 等,更好地支持科学计算类、信号处理类模型。
- 新增多种索引类型的支持,新增的索引类型包括:省略号(…)、维度扩增(None)、布尔类型数组(Bool Mask)、整数数组(list)以及张量(Tensor),可以更加方便地对张量(Tensor)进行操作。
- 新增 `paddle.einsum` API,可以以更加简洁的方式来表达多维张量(Tensor)的计算。
- 动态图混合精度功能增强,新增整个任务使用半精度(float16)训练的方式,主要任务下的计算效率提升20%左右。
@@ -290,7 +290,9 @@ paddle.int64
- 新增 ``paddle.linalg.multi_dot``,支持多个矩阵连乘的计算。([#35224](https://github.com/PaddlePaddle/Paddle/pull/35224))
- 新增 ``paddle.linalg.solve``,支持计算线性方程组的解。([#35715](https://github.com/PaddlePaddle/Paddle/pull/35715))
- 新增``paddle.linalg.matrix_power``,支持矩阵的幂运算操作。([#34667](https://github.com/PaddlePaddle/Paddle/pull/34667))
-
+ - 新增`paddle.linalg.eigvalsh`,用于计算厄米特矩阵或者实数对称矩阵的特征值。([#36680](https://github.com/PaddlePaddle/Paddle/pull/36680))
+ - 新增`paddle.linalg.eig`,用于计算一般方阵的特征值和特征向量。([#35674](https://github.com/PaddlePaddle/Paddle/pull/35674))
+ - 新增`paddle.linalg.qr`,用于计算矩阵的QR分解(暂不支持反向)。([#36627](https://github.com/PaddlePaddle/Paddle/pull/36627))
- 新增傅里叶变换相关API ([#35665](https://github.com/PaddlePaddle/Paddle/pull/35665))
- 新增快速傅立叶变换系列函数
- 可微分的 1d 到 nd 复数到复数快速傅里叶变换。(``paddle.fft.fft``, ``paddle.fft.fft2``, ``paddle.fft.fftn``, ``paddle.fft.ifft``, ``paddle.fft.ifft2``, ``paddle.fft.ifftn``)
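(补充示意)上面列出的部分 linalg API 的基本用法大致如下,参数细节以官方 API 文档为准:

```python
import paddle

# 示意:发布说明中列出的部分线性代数 API
a = paddle.to_tensor([[2., 1.], [1., 3.]])
b = paddle.to_tensor([[1.], [2.]])

x = paddle.linalg.solve(a, b)       # 求解线性方程组 a @ x = b
w = paddle.linalg.eigvalsh(a)       # 实对称矩阵的特征值
q, r = paddle.linalg.qr(a)          # QR 分解(发布说明注明暂不支持反向)
print(x.numpy(), w.numpy(), q.shape, r.shape)
```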
@@ -303,19 +305,21 @@ paddle.int64
- 短时傅里叶逆变换。(``paddle.signal.istft``)
- 新增高层API
- - 新增 ``paddle.vision.ops.roi_pool`` 和 ``paddle.vision.ops.RoIPool``,支持检测任务中 RoI 区域池化操作。 ([#36154](https://github.com/PaddlePaddle/Paddle/pull/36154))
- - 新增 ``paddle.vision.ops.roi_align`` 和 ``paddle.vision.ops.RoIAlign``,支持检测任务中 RoI 区域 Align 操作。([#36207](https://github.com/PaddlePaddle/Paddle/pull/36207))
- - 新增 ``paddle.vision.ops.psroi_pool`` 和 ``paddle.vision.ops.PSRoIPool``,支持检测任务中位置敏感的 RoI 区域池化操作。 ([#36111](https://github.com/PaddlePaddle/Paddle/pull/36111))
- - 新增 ``paddle.vision.models.vgg19`` 预训练权重。 ([#35788](https://github.com/PaddlePaddle/Paddle/pull/35788))
- - 新增 ``paddle.vision.datasets.*`` 中数据集 API 下载进度条。([#33302](https://github.com/PaddlePaddle/Paddle/pull/33302))
- - 新增 ``paddle.Model.predict`` 参数 ``verbose``,支持是否显示日志。([#33405](https://github.com/PaddlePaddle/Paddle/pull/33405))
- - 新增 ``paddle.hub`` 下载选项 `wget` 方式。([#33379](https://github.com/PaddlePaddle/Paddle/pull/33379))
- - 新增 ``paddle.Model`` 动态图模式下梯度累加功能。([#32702](https://github.com/PaddlePaddle/Paddle/pull/32702))
- - 新增 ``paddle.Model.fit`` 和 ``paddle.Model.evaluate`` 动态图模式下 ``num_iters`` 参数,控制训练迭代轮数。([#33986](https://github.com/PaddlePaddle/Paddle/pull/33986))
- - 新增 ``paddle.vision.ops.yolo_box`` 参数 ``iou_aware`` 和 ``iou_aware_factor``,支持 YoloBox 使用预测的 IOU 作为置信度的因子。([#33400](https://github.com/PaddlePaddle/Paddle/pull/33400))
- - 新增 ``paddle.summary`` 参数``input``,支持给定输入。([#34165](https://github.com/PaddlePaddle/Paddle/pull/34165))
+ - 新增 ``paddle.vision.ops.roi_pool`` 和 ``paddle.vision.ops.RoIPool``,支持检测任务中 RoI 区域池化操作。 ([#36154](https://github.com/PaddlePaddle/Paddle/pull/36154))
+ - 新增 ``paddle.vision.ops.roi_align`` 和 ``paddle.vision.ops.RoIAlign``,支持检测任务中 RoI 区域 Align 操作。([#36207](https://github.com/PaddlePaddle/Paddle/pull/36207))
+ - 新增 ``paddle.vision.ops.psroi_pool`` 和 ``paddle.vision.ops.PSRoIPool``,支持检测任务中位置敏感的 RoI 区域池化操作。 ([#36111](https://github.com/PaddlePaddle/Paddle/pull/36111))
+ - 新增 ``paddle.vision.models.vgg19`` 预训练权重。 ([#35788](https://github.com/PaddlePaddle/Paddle/pull/35788))
+ - 新增 ``paddle.vision.datasets.*`` 中数据集 API 下载进度条。([#33302](https://github.com/PaddlePaddle/Paddle/pull/33302))
+ - 新增 ``paddle.Model.predict`` 参数 ``verbose``,支持是否显示日志。([#33405](https://github.com/PaddlePaddle/Paddle/pull/33405))
+ - 新增 ``paddle.hub`` 下载选项 `wget` 方式。([#33379](https://github.com/PaddlePaddle/Paddle/pull/33379))
+ - 新增 ``paddle.Model`` 动态图模式下梯度累加功能。([#32702](https://github.com/PaddlePaddle/Paddle/pull/32702))
+ - 新增 ``paddle.Model.fit`` 和 ``paddle.Model.evaluate`` 动态图模式下 ``num_iters`` 参数,控制训练迭代轮数。([#33986](https://github.com/PaddlePaddle/Paddle/pull/33986))
+ - 新增 ``paddle.vision.ops.yolo_box`` 参数 ``iou_aware`` 和 ``iou_aware_factor``,支持 YoloBox 使用预测的 IOU 作为置信度的因子。([#33400](https://github.com/PaddlePaddle/Paddle/pull/33400))
+ - 新增 ``paddle.summary`` 参数``input``,支持给定输入。([#34165](https://github.com/PaddlePaddle/Paddle/pull/34165))
+ - 新增`paddle.text.viterbi_decode`,支持动态图下CPU、GPU的Viterbi解码功能。([#35778](https://github.com/PaddlePaddle/Paddle/pull/35778))
- 新增组网类 API
+ - 新增`paddle.nn.functional.sparse_attention`,用于计算稀疏的Transformer Attention模块。([#35757](https://github.com/PaddlePaddle/Paddle/pull/35757))
- 新增 ``paddle.nn.MaxUnPool2D`` 和 ``paddle.nn.functional.max_unpool2d``,支持根据输入的input和最大值位置计算出池化的逆结果。([#35056](https://github.com/PaddlePaddle/Paddle/pull/35056))
- 新增 ``paddle.nn.functional.gumbel_softmax``,支持 ``gumbel softmax`` 采样。([#35506](https://github.com/PaddlePaddle/Paddle/pull/35506), [#36065](https://github.com/PaddlePaddle/Paddle/pull/36065), [#36094](https://github.com/PaddlePaddle/Paddle/pull/36094))
- 新增 ``paddle.nn.functional.class_center_sample``,支持 PartialFC 类中心采样功能。([#34106](https://github.com/PaddlePaddle/Paddle/pull/34106))
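(补充示意)以本段提到的 `gumbel_softmax` 与 `max_unpool2d` 为例,给出一个最小用法草图;`return_mask` 等参数名以官方文档为准:

```python
import paddle
import paddle.nn.functional as F

# gumbel_softmax:对 logits 做可微分的类别采样;hard=True 时输出 one-hot(直通梯度)
logits = paddle.rand([4, 6])
y_hard = F.gumbel_softmax(logits, temperature=0.5, hard=True)

# max_unpool2d:利用最大池化返回的位置索引,把池化结果还原到池化前的空间尺寸
x = paddle.rand([1, 1, 4, 4])
pooled, indices = F.max_pool2d(x, kernel_size=2, return_mask=True)
restored = F.max_unpool2d(pooled, indices, kernel_size=2)
print(y_hard.shape, restored.shape)   # [4, 6] 与 [1, 1, 4, 4]
```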
@@ -332,9 +336,13 @@ paddle.int64
- 新增 ``paddle.device.cuda.empty_cache``,支持清理空闲的显存。([#35427](https://github.com/PaddlePaddle/Paddle/pull/35427))
- 新增 ``paddle.device.cuda.get_device_properties``,支持返回给定的设备属性。([#35875](https://github.com/PaddlePaddle/Paddle/pull/35875))
- 新增 ``paddle.device.cuda.stream_guard``,用于动态图下 CUDA Stream的灵活切换。([#35623](https://github.com/PaddlePaddle/Paddle/pull/35623))
-
+ - 新增`paddle.device.cuda.get_device_name`,支持返回给定设备的名称。([#36172](https://github.com/PaddlePaddle/Paddle/pull/36172))
+ - 新增`paddle.device.cuda.get_device_capability`,支持返回给定设备计算能力的版本号。([#36172](https://github.com/PaddlePaddle/Paddle/pull/36172))
+ - 新增`paddle.framework.core.async_read`和`paddle.framework.core.async_write`,可支持非默认 CUDA `Stream`下`CUDAPinnedPlace` 和 `CUDAPlace` 的 `Tensor` 数据异步读写。([#36501](https://github.com/PaddlePaddle/Paddle/pull/36501))
- 新增Tensor操作API
+ - 新增`paddle.tensordot`,支持对高维张量做缩并(Tensor Contraction)运算。([#36454](https://github.com/PaddlePaddle/Paddle/pull/36454))
+ - 新增`paddle.bincount`,支持对一维张量内元素进行计数。([#36709](https://github.com/PaddlePaddle/Paddle/pull/36709))
- 新增 `paddle.broadcast_tensors` ,支持对一组 `Tensor` 进行广播操作。([#33294](https://github.com/PaddlePaddle/Paddle/pull/33294), [#34874](https://github.com/PaddlePaddle/Paddle/pull/34874))
- 新增 `paddle.einsum` 。([#33821](https://github.com/PaddlePaddle/Paddle/pull/34874))
- 增强``paddle.tensor.gradient``接口,支持sigmoid_op的二阶求导算子。([#32971](https://github.com/PaddlePaddle/Paddle/pull/32971))
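(补充示意)本段新增的 Tensor 操作 API 的基本用法草图如下,输出注释仅为说明用途:

```python
import paddle

a = paddle.rand([2, 3])
b = paddle.rand([3, 4])
print(paddle.einsum('ij,jk->ik', a, b).shape)  # 爱因斯坦求和记法表达矩阵乘,输出形状 [2, 4]
print(paddle.tensordot(a, b, axes=1).shape)    # 沿 a 的最后一维与 b 的第一维缩并,输出 [2, 4]

x = paddle.to_tensor([1, 2, 1, 5])
print(paddle.bincount(x))                      # 统计各整数出现次数:[0, 2, 1, 0, 0, 1]
```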
@@ -373,6 +381,7 @@ paddle.int64
- 新增 ``paddle.static.ExponentialMovingAverage``,支持用指数衰减计算参数的滑动平均值。([#35673](https://github.com/PaddlePaddle/Paddle/pull/35673))
- 新增 `` paddle::Tensor::slice`` C++ API, 支持 slice 操作,允许用户对外部 Tensor 切片操作。([#34227](https://github.com/PaddlePaddle/Paddle/pull/34227))
- 新增``paddle.incubate.segment_*``系列API,包含 ``paddle.incubate.segment_sum, paddle.incubate.segment_mean, paddle.incubate.segment_max, paddle.incubate.segment_min``。支持对`Tensor`按照分段求和、求均值、求最大值、求最小值。 ([#35759](https://github.com/PaddlePaddle/Paddle/pull/35759))
+ - 新增`paddle.version.cuda`和`paddle.version.cudnn`,用于获取 paddle 安装包所使用的 `CUDA`和 `cuDNN`的版本号。([#36556](https://github.com/PaddlePaddle/Paddle/pull/36556))
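(补充示意)`segment_*` 系列与 `paddle.version.*` 的预期用法大致如下,具体参数以官方文档为准:

```python
import paddle

# 按 segment_ids 对行分组求和 / 求均值
data = paddle.to_tensor([[1., 2.], [3., 4.], [5., 6.]])
seg_ids = paddle.to_tensor([0, 0, 1], dtype='int32')
print(paddle.incubate.segment_sum(data, seg_ids))    # [[4., 6.], [5., 6.]]
print(paddle.incubate.segment_mean(data, seg_ids))   # [[2., 3.], [5., 6.]]

# 查询安装包编译时所用的 CUDA / cuDNN 版本号
print(paddle.version.cuda(), paddle.version.cudnn())
```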
#### IR(Intermediate Representation)
- 动态图转静态图
@@ -388,13 +397,15 @@ paddle.int64
- 提供分析 `Program` 中控制流需要的依赖辅助函数。 ([#33439](https://github.com/PaddlePaddle/Paddle/pull/33439))
- `Program` 和 `Graph` 相互转换后保留训练所需要的 `stop_gradient` , `persistable` 属性值。([#33771](https://github.com/PaddlePaddle/Paddle/pull/33771))
- 原 `Pass` 只处理主`Graph`,忽略子图,现`Pass` 支持处理主 `Graph`及其所有子图。 ([#34158](https://github.com/PaddlePaddle/Paddle/pull/34158))
- - 处理了在预测情况下 `Program` 和 `Graph` 互转的一些拓扑排序问题。([#34121](https://github.com/PaddlePaddle/Paddle/pull/34121), [#34521](https://github.com/PaddlePaddle/Paddle/pull/34521)). **《== **
+ - 处理了在预测情况下 `Program` 和 `Graph` 互转的一些拓扑排序问题。([#34121](https://github.com/PaddlePaddle/Paddle/pull/34121), [#34521](https://github.com/PaddlePaddle/Paddle/pull/34521))
- Pass开发
- 新增 Python 侧针对 fusion 等子图替换场景下的 Pass 开发方式。([#35708](https://github.com/PaddlePaddle/Paddle/pull/35708), [#35602](https://github.com/PaddlePaddle/Paddle/pull/35602))
- Kernel Primitive API
- 对算子 Kernel 实现中的底层代码进行了抽象与功能封装,提供高性能的 Block 级 IO 运算和 Compute 运算。使用 Kernel Primitive API 进行 Kernel 开发可以更加专注计算逻辑的实现,在保证性能的同时大幅减少代码量,同时实现了算子计算与硬件解耦。([#34672](https://github.com/PaddlePaddle/Paddle/pull/34672), [#35075](https://github.com/PaddlePaddle/Paddle/pull/35075), [#34456](https://github.com/PaddlePaddle/Paddle/pull/34456), [#35282](https://github.com/PaddlePaddle/Paddle/pull/35282), [#35743](https://github.com/PaddlePaddle/Paddle/pull/35743), [#34208](https://github.com/PaddlePaddle/Paddle/pull/34208))
+ - 在 Kernel Primitive API中添加一元和二元计算Functor共13个。 ([#36418](https://github.com/PaddlePaddle/Paddle/pull/36418))
+ - 修改 Kernel Primitive API 中 ReadData 实现方式,修复`NX !=1`访存越界的问题。 ([#36373](https://github.com/PaddlePaddle/Paddle/pull/36373))
#### 混合精度训练
- 动态图混合精度功能增强,新增整个任务使用半精度(float16)训练的方式,主要任务下的计算效率提升20%左右。 ([#35521](https://github.com/PaddlePaddle/Paddle/pull/35521))
@@ -512,7 +523,13 @@ paddle.int64
- 优化``l2_normalize``,``p_norm``,``elementwise_max``,``prelu``,``clip_by_norm``,``lars optimizer``算子支持float16计算。 ([#35576](https://github.com/PaddlePaddle/Paddle/pull/35576), [#35888](https://github.com/PaddlePaddle/Paddle/pull/35888), [#35888](https://github.com/PaddlePaddle/Paddle/pull/35888), [35532](https://github.com/PaddlePaddle/Paddle/pull/35532), [#35446](https://github.com/PaddlePaddle/Paddle/pull/35446), [#33280](https://github.com/PaddlePaddle/Paddle/pull/33280))
- 优化flowers数据集的读取速度,从每批次数分钟优化至1~3秒。([#31408](https://github.com/PaddlePaddle/Paddle/pull/31408))
- 支持`paddle.distributed.fleet.DistributedStrategy` 中 `without_graph_optimize` 开关打开后的fuse allreduce sum功能。FP32下性能提升3%,AMP下性能提升8%。([#34446](https://github.com/PaddlePaddle/Paddle/pull/34446))
-
+- `paddle.matmul` 将底层Op算子由matmul op 切换到 matmul_v2 op。 ([#36374](https://github.com/PaddlePaddle/Paddle/pull/36374))
+- `paddle.fft` 模块添加了 mkl_cdft 和 hipfft 两个计算后端。 ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+- `paddle.roll` 的参数 `shifts` 支持 `Tensor` 作为输入。 ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+- `paddle.shape` 支持复数类型的输入。([#36835](https://github.com/PaddlePaddle/Paddle/pull/36835))
+- matmul_v2 支持量化。([#36469](https://github.com/PaddlePaddle/Paddle/pull/36469))
+- 新增 `clip_op` 对 `float16` 的支持。 ([#36672](https://github.com/PaddlePaddle/Paddle/pull/36672))
+- `paddle.fft` 模块为 cufft 后端添加了缓存 plan 的功能,优化性能。([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
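(补充示意)其中 `paddle.roll` 对 `Tensor` 类型 `shifts` 的支持,大致对应如下用法:

```python
import paddle

x = paddle.to_tensor([1, 2, 3, 4, 5])
shifts = paddle.to_tensor(2)              # shifts 现在可以直接传入 Tensor
print(paddle.roll(x, shifts=shifts))      # [4, 5, 1, 2, 3]
```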
#### IR(Intermediate Representation)
- 动态图转静态图
@@ -521,6 +538,9 @@ paddle.int64
- 优化了动转静训练代码逻辑,升级内部 ``Program`` 缓存机制,新增输入 ``Tensor`` 的提前 copy 策略,提升训练性能。 ([#34181](https://github.com/PaddlePaddle/Paddle/pull/34181), [#33796](https://github.com/PaddlePaddle/Paddle/pull/33796))
- 优化动转静内部执行器显存回收策略,减少训练时显存占用量。 ([#34177](https://github.com/PaddlePaddle/Paddle/pull/34177))
- 集成了 ``Gast`` 三方依赖库的源码,解耦了版本依赖。 ([#34556](https://github.com/PaddlePaddle/Paddle/pull/34556))
+ - 动转静报错时显示部分框架层报错信息,使得定位问题更加容易。([#36765](https://github.com/PaddlePaddle/Paddle/pull/36765))
+ - 移除动转静报错模块中重复的临时文件删除函数`remove_static_file()`。([#36375](https://github.com/PaddlePaddle/Paddle/pull/36375))
+ - 优化对RegisterPass中`input_specs`参数处理,支持图优化时作为匹配子图条件。([#36453](https://github.com/PaddlePaddle/Paddle/pull/36453))
#### 分布式训练
@@ -534,7 +554,13 @@ paddle.int64
- `paddle.io.Dataset` 支持动态库解析数据。 ([#33969](https://github.com/PaddlePaddle/Paddle/pull/33969))
- 新增 `paddle.distributed.fleet.dataset.DatasetBase` 中对`use_var_list`和 `pipe_command` 生成数据的一致性检查函数。 ([#34463](https://github.com/PaddlePaddle/Paddle/pull/34463))
- 新增 `paddle.fluid.layers.embedding` 的 `emd` 维度与 `fleet` 中` sparse table` 的 `emb` 维度的一致性检查。 ([#34249](https://github.com/PaddlePaddle/Paddle/pull/34249))
-
+ - 动态图混合并行支持Pure FP16训练。([#36707](https://github.com/PaddlePaddle/Paddle/pull/36707))
+ - 静态图混合并行支持dropout使用固定随机种子生成器,以确保模型并行中全局变量的一致性与局部变量的随机性。([#36682](https://github.com/PaddlePaddle/Paddle/pull/36682))
+ ‘
+ - 实现了CPU并行,并支持调用 spawn 或 launch 时可以添加自定义的backend参数。可用的backend选择为 "gloo", "nccl", "bkcl", "auto" ,分别表示CPU并行,GPU并行,XPU并行和按照Paddle版本自动选择。([#35745](https://github.com/PaddlePaddle/Paddle/pull/35745))
+ - 优化动态图混合并行 HybridParallelClipGrad 策略,支持4D混合并行+Pure FP16训练。([#36707](https://github.com/PaddlePaddle/Paddle/pull/36707))
+ - 添加 SlotRecordDataset 类支持GPU参数服务器训练。([#36710](https://github.com/PaddlePaddle/Paddle/pull/36710))
+ - GPU参数服务器构建阶段支持使用SlotRecordDataset。([#36723](https://github.com/PaddlePaddle/Paddle/pull/36723))
- 静态图混合并行
- 优化混合并行 loss scale,减少 scale op 插入个数。([#35775](https://github.com/PaddlePaddle/Paddle/pull/35775))
@@ -555,6 +581,14 @@ paddle.int64
- 修正 ``paddle.jit.save`` 接口和模型裁剪的逻辑,不再为输出变量增加一个关联的 ``scale_op``,可以正确导出含有 ``bool``,``float16`` 类型输出的模型。([#35730](https://github.com/PaddlePaddle/Paddle/pull/35730), [#36132](https://github.com/PaddlePaddle/Paddle/pull/36132))
- 自定义OP
- 移除 ``paddle::Tensor`` 的 ``copy`` 方法中不必要的 ``cudaStreamSynchronize`` 操作,以提升性能。([#35802](https://github.com/PaddlePaddle/Paddle/pull/35802))
+- 新增C++对GeneratePass开发注册的支持,开发方式与Python侧对齐。([#36302](https://github.com/PaddlePaddle/Paddle/pull/36302))
+- 自动稀疏化训练(Automatic SParsity,ASP)
+ - 新增`paddle.static.sparsity`,支持生成`n:m`稀疏模式的稀疏参数,目前只支持静态图ASP训练。A100上FP32、FP16分别设置`1:2`、`2:4`的稀疏模式,训练保存的稀疏模型,可通过调用TensorRT 8利用Ampere架构的稀疏Tensor Core加速推理任务。当前版本共提供了5个API:([#32995](https://github.com/PaddlePaddle/Paddle/pull/32995)、[#33132](https://github.com/PaddlePaddle/Paddle/pull/33132)、[#33558](https://github.com/PaddlePaddle/Paddle/pull/33558)、[#36525](https://github.com/PaddlePaddle/Paddle/pull/36525))
+ - `paddle.static.sparsity.calculate_density`,计算输入Tensor的密度。
+ - `paddle.static.sparsity.decorate`,将给定的优化器包装为`OptimizerWithSparsityGuarantee`,在调用 `optimizer.minimize()`时自动为ASP工作流插入必要的操作。
+ - `paddle.static.sparsity.prune_model`,依据`mask_algo`指定的掩码生成函数裁剪`main_program`中支持的层的参数。
+ - `paddle.static.sparsity.set_excluded_layers`,设置不会被裁剪的层的参数名称。
+ - `paddle.static.sparsity.reset_excluded_layers`,重置与`main_program`相对应的`excluded_layers`设置。
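(补充示意)结合上述 API,静态图 ASP 训练的大致流程如下;这只是按发布说明整理的草图,各接口的确切签名(尤其是 `prune_model` 的参数)与运行环境要求请以官方文档为准:

```python
import paddle
from paddle.static import sparsity

paddle.enable_static()
main_prog, startup_prog = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data(name='x', shape=[None, 128], dtype='float32')
    y = paddle.static.data(name='y', shape=[None, 1], dtype='int64')
    out = paddle.static.nn.fc(x, size=10)
    loss = paddle.nn.functional.cross_entropy(out, y)

    opt = paddle.optimizer.SGD(learning_rate=0.01)
    opt = sparsity.decorate(opt)          # 包装为 OptimizerWithSparsityGuarantee
    opt.minimize(loss, startup_prog)

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(startup_prog)
sparsity.prune_model(main_prog)           # 依据掩码生成函数裁剪支持层的参数
```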
@@ -594,6 +628,18 @@ paddle.int64
- 优化动态图性能,将只在静态图执行的逻辑从动态图的执行路径中剥离。([#34024](https://github.com/PaddlePaddle/Paddle/pull/34024))
- IR Pass优化能力作为通用能力露出,同时支持单机和分布式优化。在GPT混合并行场景性能提升3%-5%。([#34955](https://github.com/PaddlePaddle/Paddle/pull/34955), [#35704](https://github.com/PaddlePaddle/Paddle/pull/35704), [#34730](https://github.com/PaddlePaddle/Paddle/pull/34730), [#34524](https://github.com/PaddlePaddle/Paddle/pull/34524))
- 优化 ctc loss grad 计算速度,提速~3x,但相应增加了GPU显存占用。([#34729](https://github.com/PaddlePaddle/Paddle/pull/34729))
+- transformer encoder 性能优化
+ - 优化思路:通过新增 `paddle.incubate.nn.FusedMultiHeadAttention` 和 `paddle.incubate.nn.FusedFeedForward` 的方式,在实现中采用 q, k, v gemm融合及多种kernel融合优化技术,提升transformer encoder的性能。
+ - FusedAttention
+    - 新增 `paddle.incubate.nn.functional.fused_multi_head_attention` ,支持multi-head attention的融合计算。([#35905](https://github.com/PaddlePaddle/Paddle/pull/35905), [#35903](https://github.com/PaddlePaddle/Paddle/pull/35903), [#36803](https://github.com/PaddlePaddle/Paddle/pull/36803), [#36793](https://github.com/PaddlePaddle/Paddle/pull/36793), [#36185](https://github.com/PaddlePaddle/Paddle/pull/36185))
+ - 新增 `paddle.incubate.nn.FusedMultiHeadAttention` ,用于融合multi-head attention的layer层组网。 ([#36498](https://github.com/PaddlePaddle/Paddle/pull/36498) )
+ - 该模块使用q, k, v gemm融合和bias add + dropout + residual add + layer_norm kernel融合优化技术,可带来1.08x-1.45x加速。
+
+ - FusedFeedForward
+ - 新增 `paddle.incubate.nn.functional.fused_feedforward` ,支持 feedforward的融合计算。([#36729](https://github.com/PaddlePaddle/Paddle/pull/36729) [#36730](https://github.com/PaddlePaddle/Paddle/pull/36730))
+ - 新增 `paddle.incubate.nn.FusedFeedForward` ,用于融合feedforward的layer层组网。 ([#36776](https://github.com/PaddlePaddle/Paddle/pull/36776))
+ - 性能较优化前有1.04x~1.22x左右的提升。
+ - 新增 `paddle.incubate.nn.FusedTransformerEncoderLayer`,支持使用融合multi-head attention和融合feedforward计算的layer层组网。 ([#36776](https://github.com/PaddlePaddle/Paddle/pull/36776))
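(补充示意)两个融合层的组网方式大致如下;融合算子需要在受支持的 GPU 环境下运行,构造参数名(如 `embed_dim`、`dim_feedforward`)以官方文档为准:

```python
import paddle
from paddle.incubate.nn import FusedMultiHeadAttention, FusedFeedForward

x = paddle.rand([2, 16, 128])                        # [batch, seq_len, embed_dim]
attn = FusedMultiHeadAttention(embed_dim=128, num_heads=8)
ffn = FusedFeedForward(d_model=128, dim_feedforward=512)
out = ffn(attn(x))                                   # 对应 encoder 中 attention + FFN 的前向计算
print(out.shape)                                     # [2, 16, 128]
```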
### (4)问题修复
@@ -687,12 +733,27 @@ paddle.int64
- Migrate the `one_hot` operator used in the ``paddle.nn.functional.dice_loss`` API to the `one_hot_v2` operator. ([#35734](https://github.com/PaddlePaddle/Paddle/pull/35734))
- Fix a usage bug of ``paddle.summary`` in static graph mode. ([#35303](https://github.com/PaddlePaddle/Paddle/pull/35303))
- Fix a multi-card launch bug of ``paddle.Model.prepare`` in static graph mode. ([#34311](https://github.com/PaddlePaddle/Paddle/pull/34311))
+- Fix an error in `paddle.nn.functional.cross_entropy` when `weight` is given and `axis` is specified as a valid dimension other than -1. ([#36647](https://github.com/PaddlePaddle/Paddle/pull/36647))
+- Fix a bug where `paddle.utils.dlpack.to_dlpack` could not encode multi-dimensional `Tensor`, and fix a bug where the DLPack objects it generated could not be shared across deep learning frameworks. ([#36177](https://github.com/PaddlePaddle/Paddle/pull/36177))
+- Fix an error in the `sample` method of `paddle.distribution.Categorical`, caused by an out-of-bounds array access in the multinomial op's CUDA kernel. ([#36511](https://github.com/PaddlePaddle/Paddle/pull/36511))
+- Fix a bug where the dynamic graph `_BatchNormBase` base class modified default_dtype, causing subsequent networking parameters to be created with the wrong type; affected APIs are `paddle.nn.BatchNorm1D`, `paddle.nn.BatchNorm2D`, `paddle.nn.BatchNorm3D`, and `paddle.nn.SyncBatchNorm`. The root cause is that when `get_default_dtype() == 'float16'`, the default parameter data type was changed via `set_default_dtype('float32')`; since dynamic graph networking parameters are created from default_dtype, subsequently created parameters got the wrong type. ([#36376](https://github.com/PaddlePaddle/Paddle/pull/36376))
+- Fix an exception in `paddle.nn.functional.grid_sample` caused by special inputs. ([#36625](https://github.com/PaddlePaddle/Paddle/pull/36625))
+- Fix incorrect results of `paddle.fft.fft`, `paddle.fft.ifft`, `paddle.fft.rfft`, `paddle.fft.irfft`, `paddle.fft.hfft`, and `paddle.fft.ihfft` when the input `axis=0`. ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+- Fix errors of `paddle.fft.fftshift` and `paddle.fft.ifftshift` under static graphs. ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+- Fix incorrect computation results of `paddle.fft.ifftshift`. ([#36835](https://github.com/PaddlePaddle/Paddle/pull/36835))
+- Fix the error message of `paddle.nn.functional.pad` in `replicate` mode. ([#36531](https://github.com/PaddlePaddle/Paddle/pull/36531))
+
#### IR(Intermediate Representation)
- Dynamic graph to static graph
  - Fix abnormal GPU memory growth under ``paddle.no_grad`` semantics after dynamic-to-static conversion. ([#35725](https://github.com/PaddlePaddle/Paddle/pull/35725))
  - Fix incorrect recognition and conversion of the ``paddle.no_grad`` interface. ([#34136](https://github.com/PaddlePaddle/Paddle/pull/34136))
+  - Fix an error in dynamic-to-static training when stop_gradient=True is set in the middle of the model in some scenarios. ([#36353](https://github.com/PaddlePaddle/Paddle/pull/36353))
+  - Fix an error in the return-value check when converting some control-flow `if` scenarios. ([#36830](https://github.com/PaddlePaddle/Paddle/pull/36830))
+  - Fix a bug where the return type changed unexpectedly because dynamic-to-static conversion padded return values to equal length when `if`/`else` branches returned results of different lengths. ([#36565](https://github.com/PaddlePaddle/Paddle/pull/36565))
+  - Fix a bug where GPU memory kept growing in train mode and under the no_grad context after loading a model via the jit.save/load interfaces. ([#36463](https://github.com/PaddlePaddle/Paddle/pull/36463))
+
#### Distributed Training
@@ -727,6 +788,10 @@ paddle.int64
- Fix an error when training with the GPU parameter server on non-0 cards. ([#33078](https://github.com/PaddlePaddle/Paddle/pull/33078))
- Fix delta score and scale show issues in the GPU parameter server. ([#33492](https://github.com/PaddlePaddle/Paddle/pull/33078), [#33492](https://github.com/PaddlePaddle/Paddle/pull/33492))
- Fix GPU parameter server issues where dense parameters were not merged after training, g2sum was computed incorrectly, and data norm mistakenly added an optimize op. ([#35029](https://github.com/PaddlePaddle/Paddle/pull/35029))
+- Fix an error when a gradient is empty while the fuse all reduce ops switch is enabled. ([#36231](https://github.com/PaddlePaddle/Paddle/pull/36231))
+- Fix an undefined-variable issue in the dist_transformer file. ([#36211](https://github.com/PaddlePaddle/Paddle/pull/36211))
+
+
- Dynamic graph hybrid parallelism
  - Fix a computation error in pipeline parallelism. ([#35556](https://github.com/PaddlePaddle/Paddle/pull/35556))
@@ -767,6 +832,8 @@ paddle.int64
- Sub-graphs can access the Ascend 310 hardware for inference through Paddle-Lite NNAdapter support [#35226](https://github.com/PaddlePaddle/Paddle/pull/35226); for an example, see the [demo](https://github.com/PaddlePaddle/Paddle-Inference-Demo/tree/master/c%2B%2B/ascend310_lite_subgraph/image_classification_demo).
- Add Ascend 910 inference support [#34101](https://github.com/PaddlePaddle/Paddle/pull/34101)
+- Add TensorRT support for the pool3d operator. ([#36545](https://github.com/PaddlePaddle/Paddle/pull/36545))
+
### (2) Feature Optimization
#### Framework and API Updates
@@ -774,6 +841,7 @@ paddle.int64
- Quantization support
  - Refactor the dynamic graph quantization inference pass to support both non-simulated-quantization OPs and simulated-quantization OPs. ([#35907](https://github.com/PaddlePaddle/Paddle/pull/35907))
  - Add an int8 simulated-quantization matmul OP (for the case where a weight is multiplied by a tensor). ([#34359](https://github.com/PaddlePaddle/Paddle/pull/34359))
+  - Fix a bug where the loss of the MobileNetV3 model became NaN during quantization-aware training because a quantization parameter was 0. ([#36763](https://github.com/PaddlePaddle/Paddle/pull/36763))
- API enhancements
@@ -810,16 +878,18 @@ paddle.int64
- Add int8 support for the TensorRT `qkv_context` plugin ([#34917](https://github.com/PaddlePaddle/Paddle/pull/34917), [#35504](https://github.com/PaddlePaddle/Paddle/pull/35504))
- Add support for TensorRT conv3d. ([#35507](https://github.com/PaddlePaddle/Paddle/pull/35507))
- Add support for broadcasting the inputs of the `multihead_matmul` fusion operator. ([#35780](https://github.com/PaddlePaddle/Paddle/pull/35780))
+- Inference supports TensorRT 8 sparse inference; in the [test environment](https://github.com/PaddlePaddle/Paddle-Inference-Demo/tree/master/c%2B%2B/sparsity), performance of the ERNIE model with variable-length inputs improves by 10%-30% at different batch sizes, and performance of the ResNeXt101_32x4d model improves by 10% at different batch sizes. ([#36659](https://github.com/PaddlePaddle/Paddle/pull/36659))
- Enhanced native support for Nvidia Jetson
  - Add Op support: for Jetson Nano/TX2, two devices with lower compute power, we made targeted optimizations and added support for 17 OPs such as `pool2d`, `pool_max`, `conv3d_transpose`. ([#35378](https://github.com/PaddlePaddle/Paddle/pull/35378))
  - Add models for Jetson Nano: DPN68, EfficientNetB0, ttfnet, fcn_hrnetw18, hardnet. ([#35378](https://github.com/PaddlePaddle/Paddle/pull/35378))
  - Add models for Jetson TX2: deeplabv3p_resnet50, deeplabv3_resnet50, fcn_hrnetw18, hardnet, pspnet, ttfnet, unet. ([#35378](https://github.com/PaddlePaddle/Paddle/pull/35378))
-
- Kunlun XPU interface feature extensions
  - Add the `set_xpu_device_id` interface, to support setting the Kunlun chip device ID for inference ([#35572](https://github.com/PaddlePaddle/Paddle/pull/35572))
+- Add an input type check to the Inference Python `copy_from_cpu` interface, so that errors are reported early for inputs of the wrong type. ([#36552](https://github.com/PaddlePaddle/Paddle/pull/36552))
+
### (3) Bug Fixes
#### Framework and API Fixes
@@ -842,6 +912,16 @@ paddle.int64
- Fix incorrect outputs caused by an inconsistent input order in the ERNIE variable-length case. ([#33575](https://github.com/PaddlePaddle/Paddle/pull/33575))
- Fix abnormal allocator behavior in the multi-stream case. ([#32932](https://github.com/PaddlePaddle/Paddle/pull/33575))
+- Fix a possible crash of the ERNIE model under TRT 8. ([#36769](https://github.com/PaddlePaddle/Paddle/pull/36769))
+- Fix possible crash and accuracy issues when using Pool and Slice. ([#36666](https://github.com/PaddlePaddle/Paddle/pull/36666))
+- Fix an accuracy issue in the yolo_box op caused by an incorrect formula. ([#36365](https://github.com/PaddlePaddle/Paddle/pull/36365))
+- Fix a bug where quantized matmul_v2 could not run inference correctly under TRT. ([#36821](https://github.com/PaddlePaddle/Paddle/pull/36821))
+- Fix a bug where quantization ops were incorrectly added when quantizing matmul_v2. ([#36820](https://github.com/PaddlePaddle/Paddle/pull/36820))
+- Fix an error when TRT is enabled for the batch_norm and elementwise_add operators in 3D application scenarios. ([#36446](https://github.com/PaddlePaddle/Paddle/pull/36446))
+- Fix a bug where inference models saved from the high-level linear API could not be fused and optimized by Passes. ([#36500](https://github.com/PaddlePaddle/Paddle/pull/36500))
+- Modify the MatmulV2ToMul Pass and re-restrict the conditions of the (matmul_v2 to mul) mapping Pass; add the MatmulV2ToMatmul Pass and restrict the conditions of the (matmul_v2 to matmul) mapping Pass (broadcast is not supported); modify the (matmul, mul) op_teller mapping conditions. ([#36652](https://github.com/PaddlePaddle/Paddle/pull/36652))
+
+
#### Backend Capability Fixes
- TensorRT subgraph engine fixes
@@ -907,4 +987,5 @@ paddle.int64
This release contains contributions from:
-0x45f, 123malin, Adam Osewski, Aganlengzi, Aurelius84, Baibaifan, Bo Liu, CheQiXiao, Chen Long, Chen Weihang, CtfGo, Double\_V, Ethanzjp, Fan Zhang, Feiyu Chan, Feng Xing, From00, GT-Zhang, Guanghua Yu, Guoxia Wang, Haipeng Wang, Hao Lin, Haohongxiang, Hui Zhang, Huihuang Zheng, HydrogenSulfate, IMMORTAL, JYChen, JZ-LIANG, Jacek Czaja, Jack Zhou, Jackwaterveg, Jeng Bai-Cheng, Jiangxinz, Jiaqi Liu, Jiawei Wang, JingZhuangzhuang, June Weng, Kaipeng Deng, Kqnonrime, LJQ❤️, Leo Chen, Li Min, LielinJiang, Lijunhui, Linjie Chen, Liu-xiandong, LiuWei, Ming-Xu Huang, MissPenguin, PaddlePM, Pei Yang, Peihan, Qi Li, QingshuChen, Ren Wei (任卫), Roc, Shang Zhizhou, ShenLiang, Shibo Tao, Siming Dai, Sing\_chan, TCChenLong, TTerror, TeslaZhao, Thomas Young, Thunderbrook, Tongxin Bai, WJJ1995, WangXi, Wangzheee, Wei Shengyu, WeiXin, Weilong Wu, Wenyu, Wilber, XGZhang, XYZ, XYZ916829, XiangGao, Xiaoxu Chen, YUNSHEN XIE, Yanxing Shi, Yiqun Liu, YuanRisheng, Yuang Liu, Yulong Ao, Zeng Jinle, Zhang Ting, Zhang Zheng, Zhanlue Yang, Zhen Wang, Zhong Hui, Zhou Wei, andreazanetti, andyjpaddle, arlesniak, baoachun, cc, ceci3, chajchaj, chenenquan, chenjian, chentianyu03, crystal, cuicheng01, danleifeng, denglin-github, duanboqiang, dyning, feng626, feng_shuai, furnace, gongweibao, heliqi, hlygit66666, hong, hong19860320, houj04, huangjun12, huangxu96, huzhiqiang, iducn, jakpiase, jiangcheng, joanna.wozna.intel, jzhang533, kuizhiqing, levi131, lidanqing, lilong12, limingshu, littletomatodonkey, liu zhengxi, liutiexing, liuyuhui, liym27, lyuwenyu, lzzyzlbb, niuliling123, pangyoki, parap1uie-s, ronnywang, root, seemingwang, shangliang Xu, shiyutang, smallv0221, sunli, sunzhongkai588, taixiurong, tangwei12, tianshuo78520a, veyron95, wangguanqun, wangguanzhong, wanghuancoder, wangna11BD, wangxinxin08, wangzhen38, wangzhuang01, wawltor, wenbin, whs, will-jl944, wuhuachaocoding, wuhuanzhou, xiaoting, xiaoxiaohehe001, xiayanming, xiegegege, xiemoyuan, xiongkun, yaoxuefeng, yeliang2258, yingyibiao, zhangbo9674, zhangchunle, zhangkaihuo, zhaoyingli, zhiboniu, zhoujun, zhouzj, zhulei, zhupengyang, zlsh80826, zmx, zyfncg, 李季, 津, 王明冬, 石晓伟
\ No newline at end of file
+0x45f, 123malin, Adam Osewski, Aganlengzi, Aurelius84, Baibaifan, Bo Liu, CheQiXiao, Chen Long, Chen Weihang, CtfGo, Double\_V, Ethanzjp, Fan Zhang, Feiyu Chan, Feng Xing, From00, GT-Zhang, Guanghua Yu, Guoxia Wang, Haipeng Wang, Hao Lin, Haohongxiang, Hui Zhang, Huihuang Zheng, HydrogenSulfate, IMMORTAL, JYChen, JZ-LIANG, Jacek Czaja, Jack Zhou, Jackwaterveg, Jeng Bai-Cheng, Jiangxinz, Jiaqi Liu, Jiawei Wang, JingZhuangzhuang, June Weng, Kaipeng Deng, Kqnonrime, LJQ❤️, Leo Chen, Li Min, LielinJiang, Lijunhui, Linjie Chen, Liu-xiandong, LiuWei, Ming-Xu Huang, MissPenguin, PaddlePM, Pei Yang, Peihan, Qi Li, QingshuChen, Ren Wei (任卫), Roc, Shang Zhizhou, ShenLiang, Shibo Tao, Siming Dai, Sing\_chan, TCChenLong, TTerror, TeslaZhao, Thomas Young, Thunderbrook, Tongxin Bai, WJJ1995, WangXi, Wangzheee, Wei Shengyu, WeiXin, Weilong Wu, Wenyu, Wilber, XGZhang, XYZ, XYZ916829, XiangGao, Xiaoxu Chen, YUNSHEN XIE, Yanxing Shi, Yiqun Liu, YuanRisheng, Yuang Liu, Yulong Ao, Zeng Jinle, Zhang Ting, Zhang Zheng, Zhanlue Yang, Zhen Wang, Zhong Hui, Zhou Wei, andreazanetti, andyjpaddle, arlesniak, baoachun, cc, ceci3, chajchaj, chenenquan, chenjian, chentianyu03, crystal, cuicheng01, danleifeng, denglin-github, duanboqiang, dyning, feng626, feng_shuai, furnace, gongweibao, heliqi, hlygit66666, hong, hong19860320, houj04, huangjun12, huangxu96, huzhiqiang, iducn, jakpiase, jiangcheng, joanna.wozna.intel, jzhang533, kuizhiqing, levi131, lidanqing, lilong12, limingshu, littletomatodonkey, liu zhengxi, liutiexing, liuyuhui, liym27, lyuwenyu, lzzyzlbb, niuliling123, pangyoki, parap1uie-s, ronnywang, root, seemingwang, shangliang Xu, shiyutang, smallv0221, sunli, sunzhongkai588, taixiurong, tangwei12, tianshuo78520a, veyron95, wangguanqun, wangguanzhong, wanghuancoder, wangna11BD, wangxinxin08, wangzhen38, wangzhuang01, wawltor, wenbin, whs, will-jl944, wuhuachaocoding, wuhuanzhou, xiaoting, xiaoxiaohehe001, xiayanming, xiegegege, xiemoyuan, xiongkun, yaoxuefeng, yeliang2258, yingyibiao, zhangbo9674, zhangchunle, zhangkaihuo, zhaoyingli, zhiboniu, zhoujun, zhouzj, zhulei, zhupengyang, zlsh80826, zmx, zyfncg, 李季, 津, 王明冬, 石晓伟
+
diff --git a/docs/release_note_en.md b/docs/release_note_en.md
index 9848c8de754..349796fdebb 100644
--- a/docs/release_note_en.md
+++ b/docs/release_note_en.md
@@ -1,13 +1,13 @@
-# 2.2.0 rc0 Release Note
+# Release Note
## **1. Highlights**
-We are excited to release the PaddlePaddle Framework V2.2.0-rc0. This version contains the following highlights.
+We are excited to release the PaddlePaddle Framework V2.2.0. This version contains the following highlights.
### API
-- Added 100+ APIs, including 24 Fourier transform APIs, 14 linear algebra APIs, etc., to better facilitate developing of scientific computing and signal processing models.
+- Added 100+ APIs, including 24 Fourier transform APIs, 17 linear algebra APIs, etc., to better facilitate developing of scientific computing and signal processing models.
- Added the support for multiple indexing syntax, including ellipsis (...), dimension expansion (None), boolean arrays (Bool Mask), and integer arrays (list and tensor), making it easier to operate on tensors.
- Added the `paddle.einsum` API, to express multi-dimensional tensor computation in a more concise way.
- Enhanced the dynamic graph mixed precision. Added a way to use half-precision (float16) training for the whole task. The computational efficiency under the main tasks increased by 20%.
@@ -289,6 +289,9 @@ paddle.int64
- Add the ``paddle.linalg.multi_dot``, to support the computing of concatenated multiplication of multiple matrices. ([#35224](https://github.com/PaddlePaddle/Paddle/pull/35224))
- Add the ``paddle.linalg.solve``, to support the computing of the solutions of linear equations. ([#35715](https://github.com/PaddlePaddle/Paddle/pull/35715))
- Add the ``paddle.linalg.matrix_power``, to support the power operations on matrices. ([#34667](https://github.com/PaddlePaddle/Paddle/pull/34667))
+ - Add `paddle.linalg.eigvalsh` for computing the eigenvalues of Hermitian or real symmetric matrices. ([#36680](https://github.com/PaddlePaddle/Paddle/pull/36680))
+ - Add `paddle.linalg.eig` for computing eigenvalues and eigenvectors of general square matrices. ([#35674](https://github.com/PaddlePaddle/Paddle/pull/35674))
+ - Add `paddle.linalg.qr` for computing the QR decomposition of matrices (backward computation is not supported yet). ([#36627](https://github.com/PaddlePaddle/Paddle/pull/36627))
- Add new Fourier transform related API ([#35665](https://github.com/PaddlePaddle/Paddle/pull/35665))
- Add fast Fourier transform family functions
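As a rough illustration of the new `paddle.linalg` additions above (`eigvalsh`, `eig`, `qr`), the snippet below exercises them on a small symmetric matrix; the return conventions shown (eigenvalue/eigenvector pairs, reduced QR factors) are the usual ones and should be checked against the API reference.

```python
import paddle

x = paddle.to_tensor([[2.0, 1.0],
                      [1.0, 2.0]], dtype='float64')   # a real symmetric matrix

w = paddle.linalg.eigvalsh(x)        # eigenvalues only (Hermitian / real symmetric input)
vals, vecs = paddle.linalg.eig(x)    # eigenvalues and eigenvectors of a general square matrix
q, r = paddle.linalg.qr(x)           # QR decomposition

print(w)       # approximately [1., 3.]
print(q @ r)   # reconstructs x up to numerical error
```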
@@ -303,18 +306,20 @@ paddle.int64
- Add new high-level APIs
- Add the ``paddle.vision.ops.roi_pool`` and ``paddle.vision.ops.RoIPool``, support RoI region pooling operations in detection tasks. ([#36154](https://github.com/PaddlePaddle/Paddle/pull/36154))
- - Add the ``paddle.vision.ops.roi_align`` and ``paddle.vision.ops.RoIAlign``, to support RoI region Align operations in detection tasks. ([#36207](https://github.com/PaddlePaddle/Paddle/pull/36207))
- - Add the ``paddle.vision.ops.psroi_pool`` and ``paddle.vision.ops.PSRoIPool``, to support location-sensitive RoI region pooling operations in detection tasks. ([#36111](https://github.com/PaddlePaddle/Paddle/pull/36111))
- - Add the ``paddle.vision.models.vgg19`` pre-training weights. ([#35788](https://github.com/PaddlePaddle/Paddle/pull/35788))
- - Add thedatasets API download progress bar in ``paddle.vision.datasets.*``. ([#33302](https://github.com/PaddlePaddle/Paddle/pull/33302))
- - Add the ``paddle.Model.predict`` parameter ``verbose``, to support whether to show logs or not. ([#33405](https://github.com/PaddlePaddle/Paddle/pull/33405))
- - Add the ``paddle.hub`` download option ``wget`` method. ([#33379](https://github.com/PaddlePaddle/Paddle/pull/33379))
- - Add the ``paddle.Model`` gradient accumulation in dynamic graph mode. ([#32702](https://github.com/PaddlePaddle/Paddle/pull/32702))
- - Add the ``paddle.Model.fit`` and ``paddle.Model.evaluate`` ``num_iters`` parameters in dynamic graph mode to control the number of training iterations. ([#33986](https://github.com/PaddlePaddle/Paddle/pull/33986))
- - Add the ``paddle.vision.ops.yolo_box`` parameters ``iou_aware`` and ``iou_aware_factor``, to support YoloBox using predicted IOUs as confidence factors. ([#33400](https://github.com/PaddlePaddle/Paddle/pull/33400))
- - Add the ``paddle.summary`` parameter input to support the given ``input``. ([#34165](https://github.com/PaddlePaddle/Paddle/pull/34165))
+ - Add the ``paddle.vision.ops.roi_align`` and ``paddle.vision.ops.RoIAlign``, to support RoI region Align operations in detection tasks. ([#36207](https://github.com/PaddlePaddle/Paddle/pull/36207))
+ - Add the ``paddle.vision.ops.psroi_pool`` and ``paddle.vision.ops.PSRoIPool``, to support location-sensitive RoI region pooling operations in detection tasks. ([#36111](https://github.com/PaddlePaddle/Paddle/pull/36111))
+ - Add the ``paddle.vision.models.vgg19`` pre-training weights. ([#35788](https://github.com/PaddlePaddle/Paddle/pull/35788))
+ - Add the datasets API download progress bar in ``paddle.vision.datasets.*``. ([#33302](https://github.com/PaddlePaddle/Paddle/pull/33302))
+ - Add the ``paddle.Model.predict`` parameter ``verbose``, to support whether to show logs or not. ([#33405](https://github.com/PaddlePaddle/Paddle/pull/33405))
+ - Add the ``paddle.hub`` download option ``wget`` method. ([#33379](https://github.com/PaddlePaddle/Paddle/pull/33379))
+ - Add the ``paddle.Model`` gradient accumulation in dynamic graph mode. ([#32702](https://github.com/PaddlePaddle/Paddle/pull/32702))
+ - Add the ``paddle.Model.fit`` and ``paddle.Model.evaluate`` ``num_iters`` parameters in dynamic graph mode to control the number of training iterations. ([#33986](https://github.com/PaddlePaddle/Paddle/pull/33986))
+ - Add the ``paddle.vision.ops.yolo_box`` parameters ``iou_aware`` and ``iou_aware_factor``, to support YoloBox using predicted IOUs as confidence factors. ([#33400](https://github.com/PaddlePaddle/Paddle/pull/33400))
+ - Add the ``input`` parameter to ``paddle.summary``, to support passing the input directly. ([#34165](https://github.com/PaddlePaddle/Paddle/pull/34165))
+ - Add `paddle.text.viterbi_decode`, to support Viterbi decoding for CPU and GPU under dynamic graphs. ([#35778](https://github.com/PaddlePaddle/Paddle/pull/35778))
- Add networking class APIs
+ - Add `paddle.nn.functional.sparse_attention` for computing sparse Transformer Attention modules. ([#35757](https://github.com/PaddlePaddle/Paddle/pull/35757))
- Add the ``paddle.nn.MaxUnPool2D`` and ``paddle.nn.functional.max_unpool2d``, to support the computing of the inverse of the pooling result based on the input and maximum position. ([#35056](https://github.com/PaddlePaddle/Paddle/pull/35056))
- Add the ``paddle.nn.functional.gumbel_softmax``, to support ``gumbel softmax`` sampling. ([#35506](https://github.com/PaddlePaddle/Paddle/pull/35506), [#36065](https://github.com/PaddlePaddle/Paddle/pull/36065), [#36094](https://github.com/PaddlePaddle/Paddle/pull/36094))
- Add the ``paddle.nn.functional.class_center_sample``, to support PartialFC class center sampling. ([#34106](https://github.com/PaddlePaddle/Paddle/pull/34106))
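For example, the new max-unpooling API pairs naturally with `max_pool2d(..., return_mask=True)`; the sketch below assumes that keyword pairing and is meant only as an illustration.

```python
import paddle
import paddle.nn.functional as F

x = paddle.rand([1, 3, 8, 8])

# Pool and keep the argmax mask so the operation can be inverted.
pooled, mask = F.max_pool2d(x, kernel_size=2, stride=2, return_mask=True)

# max_unpool2d scatters the pooled values back to the recorded maximum positions.
restored = F.max_unpool2d(pooled, mask, kernel_size=2, stride=2)
print(restored.shape)   # [1, 3, 8, 8]
```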
@@ -331,9 +336,14 @@ paddle.int64
- Add the ``paddle.device.cuda.empty_cache``, to support for clearing free GPU memory. ([#35427](https://github.com/PaddlePaddle/Paddle/pull/35427))
- Add the ``paddle.device.cuda.get_device_properties``, to support for returning the given device properties. ([#35875](https://github.com/PaddlePaddle/Paddle/pull/35875))
- Add the ``paddle.device.cuda.stream_guard`` for flexible switching of CUDA Streams under dynamic graphs. ([#35623](https://github.com/PaddlePaddle/Paddle/pull/35623))
+ - Add `paddle.device.cuda.get_device_name`, to support returning the name of a given device. ([#36172](https://github.com/PaddlePaddle/Paddle/pull/36172))
+ - Add `paddle.device.cuda.get_device_capability`, to support returning version number of the computational capability of a given device. ([#36172](https://github.com/PaddlePaddle/Paddle/pull/36172))
+ - Add `paddle.framework.core.async_read` and `paddle.framework.core.async_write`, to support asynchronous read and write of `Tensor` data between `CUDAPinnedPlace` and `CUDAPlace` under a non-default CUDA `Stream`. ([#36501](https://github.com/PaddlePaddle/Paddle/pull/36501))
- Add Tensor operation APIs
+ - Add `paddle.tensordot`, to support tensor contraction over high-dimensional tensors. ([#36454](https://github.com/PaddlePaddle/Paddle/pull/36454))
+ - Add `paddle.bincount`, to support counting elements in a one-dimensional tensor. ([#36709](https://github.com/PaddlePaddle/Paddle/pull/36709))
- Add the `paddle.broadcast_tensors`, to support broadcast operations on a set of `Tensors`. ([#33294](https://github.com/PaddlePaddle/Paddle/pull/33294), [#34874](https://github.com/PaddlePaddle/Paddle/pull/34874))
- Add the `paddle.einsum`. ([#33821](https://github.com/PaddlePaddle/Paddle/pull/34874))
- Enhance the ``paddle.tensor.gradient`` interface to support second-order derivative operators for sigmoid_op. ([#32971](https://github.com/PaddlePaddle/Paddle/pull/32971))
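A small sketch of the new tensor-operation APIs listed above (`paddle.bincount`, `paddle.tensordot`, `paddle.einsum`); the shapes and the subscript string are illustrative only.

```python
import paddle

# bincount: occurrence counts of the non-negative integers in a 1-D tensor.
counts = paddle.bincount(paddle.to_tensor([1, 2, 1, 4, 5]))
print(counts)    # [0, 2, 1, 0, 1, 1]

# tensordot: contract the last two axes of x with the first two axes of y.
x = paddle.rand([3, 4, 5])
y = paddle.rand([4, 5, 6])
print(paddle.tensordot(x, y, axes=2).shape)      # [3, 6]

# einsum: the same contraction written as a subscript expression.
print(paddle.einsum('ijk,jkl->il', x, y).shape)  # [3, 6]
```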
@@ -372,6 +382,8 @@ paddle.int64
- Add the ``paddle.static.ExponentialMovingAverage``, to support the computing of the sliding average of parameters with exponential decay. ([#35673](https://github.com/PaddlePaddle/Paddle/pull/35673))
- Add the ``paddle::Tensor::slice`` C++ API, to support the slice operation, and allow users to perform slice operations for the external Tensor. ([#34227](https://github.com/PaddlePaddle/Paddle/pull/34227))
 - Add the ``paddle.incubate.segment_*`` series APIs, including ``paddle.incubate.segment_sum``, ``paddle.incubate.segment_mean``, ``paddle.incubate.segment_max``, and ``paddle.incubate.segment_min``. Support the summing, averaging, maximizing, and minimizing of ``Tensor`` by segment. ([#35759](https://github.com/PaddlePaddle/Paddle/pull/35759))
+ - Add `paddle.version.cuda` and `paddle.version.cudnn` to get the version numbers of `CUDA` and `cuDNN` used by the Paddle installation. ([#36556](https://github.com/PaddlePaddle/Paddle/pull/36556))
+
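A brief sketch of the segment reduction APIs and the new version helpers mentioned above; the data and segment ids are illustrative, and the exact formatting of the version report depends on the installed build.

```python
import paddle

data = paddle.to_tensor([[1., 2.], [3., 4.], [5., 6.]])
segment_ids = paddle.to_tensor([0, 0, 1], dtype='int32')

# Sum the rows that share the same segment id.
print(paddle.incubate.segment_sum(data, segment_ids))   # [[4., 6.], [5., 6.]]

# CUDA / cuDNN versions the installed Paddle package was built against
# (the report differs for CPU-only builds).
print(paddle.version.cuda(), paddle.version.cudnn())
```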
#### IR(Intermediate Representation)
@@ -388,13 +400,15 @@ paddle.int64
- Provide dependent helper functions needed to analyze the control flow in `Program`. ([#33439](https://github.com/PaddlePaddle/Paddle/pull/33439))
- `Program` and `Graph` retain the values of the `stop_gradient` and `persistable` attributes needed for training after converting each other. ([#33771](https://github.com/PaddlePaddle/Paddle/pull/33771))
- `Pass` now supports processing the main `Graph` and all its sub-graphs, while the original `Pass` only processed the main `Graph` and ignored the sub-graphs. ([#34158](https://github.com/PaddlePaddle/Paddle/pull/34158))
- - Handle some topological ordering problems for `Program` and `Graph` inter-conversion in the prediction cases. ([#34121](https://github.com/PaddlePaddle/Paddle/pull/34121), [#34521](https://github.com/PaddlePaddle/Paddle/pull/34521)). **《== **
+ - Handle some topological ordering problems for `Program` and `Graph` inter-conversion in the prediction cases. ([#34121](https://github.com/PaddlePaddle/Paddle/pull/34121), [#34521](https://github.com/PaddlePaddle/Paddle/pull/34521)).
- Pass development
- Add the Pass development for subgraph replacement scenarios such as fusion on the Python side. ([#35708](https://github.com/PaddlePaddle/Paddle/pull/35708), [#35602](https://github.com/PaddlePaddle/Paddle/pull/35602))
- Kernel Primitive API
- Abstract and encapsulate the underlying codes in the operator Kernel implementation, to provide high-performance Block-level IO and Compute operations. The Kernel development using the Kernel Primitive API allows you to focus more on the implementation of the computational logic, significantly reducing the amount of codes while ensuring performance, and decoupling operator computation from hardware. ([#34672](https://github.com/PaddlePaddle/Paddle/pull/34672), [#35075](https://github.com/PaddlePaddle/Paddle/pull/35075), [#34456](https://github.com/PaddlePaddle/Paddle/pull/34456), [#35282](https://github.com/PaddlePaddle/Paddle/pull/35282), [#35743](https://github.com/PaddlePaddle/Paddle/pull/35743), [#34208](https://github.com/PaddlePaddle/Paddle/pull/34208))
+ - Add a total of 13 unary and binary computation functors to the Kernel Primitive API. ([#36418](https://github.com/PaddlePaddle/Paddle/pull/36418))
+ - Modify the ReadData implementation in the Kernel Primitive API, fixing an out-of-bounds memory access bug when NX != 1. ([#36373](https://github.com/PaddlePaddle/Paddle/pull/36373))
#### **Mixed Precision Training**
@@ -513,8 +527,16 @@ paddle.int64
- `paddle.equal`: Add the support for `int`, `float`, and `bool` types for the second input. ([#35695](https://github.com/PaddlePaddle/Paddle/pull/35695))
- ``paddle.io.DataLoader``: Add the support for persistent_worker mode. ([#34017](https://github.com/PaddlePaddle/Paddle/pull/34017))
- Optimize the ``l2_normalize``, ``p_norm``, ``elementwise_max``, ``prelu``, ``clip_by_norm``, and ``lars optimizer`` operators to support float16 computation. ([#35576](https://github.com/PaddlePaddle/Paddle/pull/35576), [#35888](https://github.com/PaddlePaddle/Paddle/pull/35888), [#35888](https://github.com/PaddlePaddle/Paddle/pull/35888), [#35532](https://github.com/PaddlePaddle/Paddle/pull/35532), [#35446](https://github.com/PaddlePaddle/Paddle/pull/35446), [#33280](https://github.com/PaddlePaddle/Paddle/pull/33280))
-- Optimize the reading speed of flowers dataset from several minutes per batch to 1~3 seconds per batch. ([#31408](https://github.com/PaddlePaddle/Paddle/pull/31408))
-- Support the fuse allreduce sum function in `paddle.distributed.fleet.DistributedStrategy` when the `without_graph_optimize` switch is on.In the FP32, the performance increases by 3%. In the AMP, the performance increases by 8%. ([#34446](https://github.com/PaddlePaddle/Paddle/pull/34446))
+- Optimize the reading speed of flowers dataset from several minutes per batch to 1~3 seconds per batch. ([#31408](https://github.com/PaddlePaddle/Paddle/pull/31408))
+- Support the fuse allreduce sum function in `paddle.distributed.fleet.DistributedStrategy` when the `without_graph_optimize` switch is on. In FP32, the performance increases by 3%; in AMP, the performance increases by 8%. ([#34446](https://github.com/PaddlePaddle/Paddle/pull/34446))
+- In `paddle.matmul`, switch the underlying Op from the matmul op to the matmul_v2 op. ([#36374](https://github.com/PaddlePaddle/Paddle/pull/36374))
+- In the `paddle.fft` module, add two computational backends: mkl_cdft and hipfft. ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+- The `shifts` parameter of `paddle.roll` supports `Tensor` as input. ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+- `paddle.shape` supports complex-type inputs. ([#36835](https://github.com/PaddlePaddle/Paddle/pull/36835))
+- matmul_v2 supports quantization. ([#36469](https://github.com/PaddlePaddle/Paddle/pull/36469))
+- Add `clip_op` support for `float16`. ([#36672](https://github.com/PaddlePaddle/Paddle/pull/36672))
+- In `paddle.fft` module, add cache plan functionality to the cufft backend, optimizing performance. ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+
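Two of the operator enhancements above can be sketched as follows; it is assumed here that a one-element `Tensor` shift behaves like the corresponding integer, and that a plain scalar is accepted as the second input of `paddle.equal`.

```python
import paddle

x = paddle.to_tensor([1, 2, 3, 4, 5])

# `shifts` may now be given as a Tensor instead of a Python int/list.
print(paddle.roll(x, shifts=paddle.to_tensor([2])))   # [4, 5, 1, 2, 3]

# The second input of paddle.equal may be a plain int/float/bool scalar.
print(paddle.equal(x, 3))                             # [False, False, True, False, False]
```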
#### IR(Intermediate Representation)
@@ -525,7 +547,9 @@ paddle.int64
- Optimize the logic of dynamic to static training codes, upgrade the internal ``Program`` cache mechanism, and add an advance copy policy for input ``Tensor`` to improve training performance. ([#34181](https://github.com/PaddlePaddle/Paddle/pull/34181), [#33796](https://github.com/PaddlePaddle/Paddle/pull/33796))
- Optimize the internal actuator memory recycling strategy for dynamic to static graphs, reducing the GPU memory usage during training. ([#34177](https://github.com/PaddlePaddle/Paddle/pull/34177))
- Integrate the source codes of ``Gast`` triple dependency library, decoupling version dependencies. ([#34556](https://github.com/PaddlePaddle/Paddle/pull/34556))
-
+ - When a dynamic-to-static error is reported, display partial frame-level error information to make it easier to locate the problem. ([#36765](https://github.com/PaddlePaddle/Paddle/pull/36765))
+ - Remove duplicate temporary file removal function `remove_static_file()` in the dynamic to static error reporting module. ([#36375](https://github.com/PaddlePaddle/Paddle/pull/36375))
+ - Optimize processing of `input_specs` parameter in RegisterPass, to support graph optimization as a matching subgraph condition. ([#36453](https://github.com/PaddlePaddle/Paddle/pull/36453))
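The dynamic-to-static workflow these optimizations apply to looks roughly like the sketch below: a toy layer whose `forward` is converted with `to_static` and then exported with `jit.save` (names and shapes are placeholders).

```python
import paddle

class Net(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.fc = paddle.nn.Linear(8, 2)

    @paddle.jit.to_static   # converted into a static Program (and cached) on the first call
    def forward(self, x):
        return paddle.nn.functional.relu(self.fc(x))

net = Net()
out = net(paddle.rand([4, 8]))   # triggers the dynamic-to-static conversion

# Export the converted static graph; InputSpec pins the expected input signature.
paddle.jit.save(net, './net_infer',
                input_spec=[paddle.static.InputSpec([None, 8], 'float32')])
```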
#### **Distributed training**
@@ -539,6 +563,12 @@ paddle.int64
- `paddle.io.Dataset`: Support the dynamic library parsing data. ([#33969](https://github.com/PaddlePaddle/Paddle/pull/33969))
- In the `paddle.distributed.fleet.dataset.DatasetBase`, add the consistency check function for generated data of the `use_var_list` and `pipe_command`. ([#34463](https://github.com/PaddlePaddle/Paddle/pull/34463))
- Add the consistency check between the `emd` dimension of `paddle.fluid.layers.embedding` and `emb` dimension of `sparse table` in `fleet`. ([#34249](https://github.com/PaddlePaddle/Paddle/pull/34249))
+ - Dynamic graph hybrid parallelism supports Pure FP16 training. ([#36707](https://github.com/PaddlePaddle/Paddle/pull/36707))
+ - Static graph hybrid parallelism supports dropout with a fixed random seed generator, to ensure consistency of global variables and randomness of local variables under model parallelism. ([#36682](https://github.com/PaddlePaddle/Paddle/pull/36682))
+ - Implement CPU parallelism and support specifying a custom backend when calling spawn or launch; available backends are "gloo", "nccl", "bkcl", and "auto", for CPU parallel, GPU parallel, XPU parallel, and automatic selection according to the installed Paddle build, respectively (see the sketch after this group of items). ([#35745](https://github.com/PaddlePaddle/Paddle/pull/35745))
+ - Optimize dynamic graph hybrid parallel HybridParallelClipGrad policy, to support 4D hybrid parallel + Pure FP16 training. ([#36707](https://github.com/PaddlePaddle/Paddle/pull/36707))
+ - Add SlotRecordDataset class to support GPU parameter server training. ([#36710](https://github.com/PaddlePaddle/Paddle/pull/36710))
+ - In the GPU parameter server building phase, support use of SlotRecordDataset. ([#36723](https://github.com/PaddlePaddle/Paddle/pull/36723))
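A minimal sketch of the CPU-parallel backend selection mentioned in the spawn/launch item above; it assumes `backend` is accepted as a keyword option of `paddle.distributed.spawn` as described.

```python
import paddle.distributed as dist

def train():
    # init_parallel_env picks up the backend chosen by spawn/launch.
    dist.init_parallel_env()
    print("rank", dist.get_rank(), "of", dist.get_world_size())

if __name__ == '__main__':
    # 'gloo' launches CPU worker processes; 'nccl', 'bkcl' and 'auto' select
    # GPU, XPU, or an automatic choice based on the installed build.
    dist.spawn(train, nprocs=2, backend='gloo')
```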
- Static graph hybrid parallel
@@ -561,7 +591,15 @@ paddle.int64
- Fix the ``paddle.jit.save`` interface and the model pruning logic: it no longer adds a redundant ``scale_op`` for output variables, and it correctly exports models whose outputs include ``bool`` and ``float16`` types. ([#35730](https://github.com/PaddlePaddle/Paddle/pull/35730), [#36132](https://github.com/PaddlePaddle/Paddle/pull/36132))
- Custom OP
- Remove unnecessary ``cudaStreamSynchronize`` operations from ``paddle::Tensor's`` ``copy`` method, to improve performance. ([#35802](https://github.com/PaddlePaddle/Paddle/pull/35802))
+- Add C++ support for GeneratePass development and registration, aligned with the Python-side development mode. ([#36302](https://github.com/PaddlePaddle/Paddle/pull/36302))
+- Automatic SParsity (ASP)
+- Add `paddle.static.sparsity`, to support generating sparse parameters in `n:m` sparse patterns. Currently it only supports static graph ASP training. On A100, FP32 and FP16 use the `1:2` and `2:4` sparse patterns respectively; the trained sparse models can be used to accelerate inference tasks via TensorRT 8 on the sparse Tensor Cores of the Ampere architecture. The current version provides a total of 5 APIs: ([#32995](https://github.com/PaddlePaddle/Paddle/pull/32995), [#33132](https://github.com/PaddlePaddle/Paddle/pull/33132), [#33558](https://github.com/PaddlePaddle/Paddle/pull/33558), [#36525](https://github.com/PaddlePaddle/Paddle/pull/36525))
+ - `paddle.static.sparsity.calculate_density`: calculates the density of the input Tensor.
+ - `paddle.static.sparsity.decorate`: wraps the given optimizer as `OptimizerWithSparsityGuarantee`, automatically inserting necessary operations for the ASP workflow when calling `optimizer.minimize()`.
+ - `paddle.static.sparsity.prune_model`: prunes the parameters of the supported layers in `main_program` based on the mask generator function specified by `mask_algo`.
+ - `paddle.static.sparsity.set_excluded_layers`: sets the names of the parameters of layers that will not be trimmed.
+ - `paddle.static.sparsity.reset_excluded_layers`: resets the `excluded_layers` setting corresponding to `main_program`.
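Put together, the ASP workflow under static graph mode looks roughly like the sketch below; the network is a toy placeholder and the exact `prune_model` arguments should be checked against the API reference.

```python
import paddle
from paddle.static import sparsity

paddle.enable_static()

main_prog, startup_prog = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data(name='x', shape=[None, 128], dtype='float32')
    label = paddle.static.data(name='label', shape=[None, 10], dtype='float32')
    out = paddle.static.nn.fc(x, size=10)
    loss = paddle.mean(paddle.nn.functional.square_error_cost(out, label))

    optimizer = paddle.optimizer.SGD(learning_rate=0.01)
    # decorate() returns an OptimizerWithSparsityGuarantee that inserts the ASP ops in minimize().
    optimizer = sparsity.decorate(optimizer)
    optimizer.minimize(loss, startup_prog)

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(startup_prog)
# Prune the supported parameters in main_prog into the n:m sparse pattern before training.
sparsity.prune_model(main_prog)
```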
### **(3) Performance optimization**
@@ -600,6 +638,20 @@ paddle.int64
- Optimize the dynamic graph performance by stripping logic executed only on static graphs from the execution path of dynamic graphs. ([#34024](https://github.com/PaddlePaddle/Paddle/pull/34024))
- For the IR Pass, optimize the capability exposed as a general-purpose capability. Support both single machine and distributed optimization.The performance improves by 3%-5% in GPT mixed parallel scenarios. ([#34955](https://github.com/PaddlePaddle/Paddle/pull/34955), [#35704](https://github.com/PaddlePaddle/Paddle/pull/35704), [#34730](https://github.com/PaddlePaddle/Paddle/pull/34730), [#34524](https://github.com/PaddlePaddle/Paddle/pull/34524))
- Optimize the ctc loss grad computation, increasing the speed by ~3x; correspondingly, GPU memory usage increases. ([#34729](https://github.com/PaddlePaddle/Paddle/pull/34729))
+- Transformer encoder performance optimization
+ - Optimization method: add `paddle.incubate.nn.FusedMultiHeadAttention` and `paddle.incubate.nn.FusedFeedForward`. In the implementation, q, k, v gemm fusion and multiple kernel fusion optimization techniques are used to improve performance of the transformer encoder.
+ - FusedAttention
+ - Add `paddle.incubate.nn.functional.fused_multi_head_attention`, to support fusion computation of multi-head attention. ([#35905](https://github.com/PaddlePaddle/Paddle/pull/35905) [#35903](https://github.com/PaddlePaddle/Paddle/pull/35903) [#36803](https://github.com/PaddlePaddle/Paddle/pull/36803) [#36793](https://github.com/PaddlePaddle/Paddle/pull/36793) [#36185](https://github.com/PaddlePaddle/Paddle/pull/36185))
+ - Add `paddle.incubate.nn.FusedMultiHeadAttention` for layer networking of the fused multi-head attention. ([#36498](https://github.com/PaddlePaddle/Paddle/pull/36498) )
+ - This module uses q, k, v gemm fusion and bias add + dropout + residual add + layer_norm kernel fusion optimization techniques, resulting in 1.08x-1.45x acceleration.
+
+ - FusedFeedForward
+ - Add `paddle.incubate.nn.functional.fused_feedforward`, to support feedforward fusion computation. ([#36729](https://github.com/PaddlePaddle/Paddle/pull/36729) [#36730](https://github.com/PaddlePaddle/Paddle/pull/36730))
+ - Add `paddle.incubate.nn.FusedFeedForward` for layer networking of fused feedforward. ([#36776](https://github.com/PaddlePaddle/Paddle/pull/36776))
+ - Performance is improved by about 1.04x~1.22x over pre-optimization.
+ - Add `paddle.incubate.nn.FusedTransformerEncoderLayer`, to support layer networking by using fused multi-head attention and fused feedforward computation. ([#36776](https://github.com/PaddlePaddle/Paddle/pull/36776))
+
+
### **(4) Troubleshooting**
@@ -693,12 +745,27 @@ paddle.int64
- Migrate the one_hot operator in ``paddle.nn.functional.dice_loss`` API to the ``one_hot_v2`` operator. ([#35734](https://github.com/PaddlePaddle/Paddle/pull/35734))
- Fix the bug of usage in the static graph mode in ``paddle.summary``. ([#35303](https://github.com/PaddlePaddle/Paddle/pull/35303))
- Fix the multi-card startup bug in ``paddle.Model.prepare`` static graph mode. ([#34311](https://github.com/PaddlePaddle/Paddle/pull/34311))
+- Fix an error in `paddle.nn.functional.cross_entropy` when `weight` is given and `axis` is specified as a valid dimension other than -1. ([#36647](https://github.com/PaddlePaddle/Paddle/pull/36647))
+- Fix a bug where `paddle.utils.dlpack.to_dlpack` could not encode multi-dimensional `Tensor`, and fix a bug where the DLPack objects it generated could not be shared across deep learning frameworks. ([#36177](https://github.com/PaddlePaddle/Paddle/pull/36177))
+- Fix an error in the `sample` method of `paddle.distribution.Categorical`, caused by an out-of-bounds array access in the multinomial op's CUDA kernel. ([#36511](https://github.com/PaddlePaddle/Paddle/pull/36511))
+- Fix a bug in the dynamic graph `_BatchNormBase` base class where default_dtype was modified, resulting in the wrong type for subsequent networking parameters. Affected APIs are `paddle.nn.BatchNorm1D`, `paddle.nn.BatchNorm2D`, `paddle.nn.BatchNorm3D`, and `paddle.nn.SyncBatchNorm`. The root cause is that when `get_default_dtype() == 'float16'`, the default parameter data type was changed via `set_default_dtype('float32')`; since dynamic graph networking parameters are created from default_dtype, subsequently created parameters got the wrong type. ([#36376](https://github.com/PaddlePaddle/Paddle/pull/36376))
+- Fix an exception in `paddle.nn.functional.grid_sample` caused by special inputs. ([#36625](https://github.com/PaddlePaddle/Paddle/pull/36625))
+- Fix calculation errors of `paddle.fft.fft`, `paddle.fft.ifft`, `paddle.fft.rfft`, `paddle.fft.irfft`, `paddle.fft.hfft`, and `paddle.fft.ihfft` when the input `axis=0`. ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+- Fix errors of `paddle.fft.fftshift` and `paddle.fft.ifftshift` under static graphs. ([#36537](https://github.com/PaddlePaddle/Paddle/pull/36537))
+- Fix a bug where `paddle.fft.ifftshift` is not calculated correctly. ([#36835](https://github.com/PaddlePaddle/Paddle/pull/36835))
+- Fix error message prompt for `paddle.nn.functional.pad` in `replicate` mode. ([#36531](https://github.com/PaddlePaddle/Paddle/pull/36531))
+
+
#### IR(Intermediate Representation)
- Dynamic graph to static graph
- Fix an abnormal growth of GPU memory under ``paddle.no_grad`` semantics after dynamic to static. ([#35725](https://github.com/PaddlePaddle/Paddle/pull/35725))
- Fix a misidentification and conversion bug in the ``paddle.no_grad`` interface. ([#34136](https://github.com/PaddlePaddle/Paddle/pull/34136))
+ - Fix an error in dynamic-to-static training when stop_gradient=True is set in the middle of the model in some scenarios. ([#36353](https://github.com/PaddlePaddle/Paddle/pull/36353))
+ - Fix an error in the return-value check when converting some control-flow `if` scenarios. ([#36830](https://github.com/PaddlePaddle/Paddle/pull/36830))
+ - Fix a bug where the return type changed unexpectedly because dynamic-to-static conversion padded return values to equal length when `if`/`else` branches returned results of different lengths. ([#36565](https://github.com/PaddlePaddle/Paddle/pull/36565))
+ - Fix a bug where GPU memory kept growing in train mode and under the no_grad context after loading a model via the jit.save/load interfaces. ([#36463](https://github.com/PaddlePaddle/Paddle/pull/36463))
#### **Distributed training**
@@ -733,6 +800,8 @@ paddle.int64
- Fix the GPU parameter server error reported by using non-0 card training. ([#33078](https://github.com/PaddlePaddle/Paddle/pull/33078))
- Fix the bug of the delta score and scale show in the GPU Parameter Server. ([#33492](https://github.com/PaddlePaddle/Paddle/pull/33078), [#33492](https://github.com/PaddlePaddle/Paddle/pull/33492))
- Fix GPU parameter server bugs: dense parameters not merged after training, incorrect g2sum calculation, and data norm mistakenly adding an optimize op. ([#35029](https://github.com/PaddlePaddle/Paddle/pull/35029))
+ - Fix an error reported if the gradient is empty when using the fuse all reduce ops switch. ([#36231](https://github.com/PaddlePaddle/Paddle/pull/36231))
+ - Fix a bug with dist_transformer files showing undefined variables. ([#36211](https://github.com/PaddlePaddle/Paddle/pull/36211))
- Dynamic graph hybrid parallel
- Fix the precision error in pipeline parallel due to communication asynchronization. ([#35556](https://github.com/PaddlePaddle/Paddle/pull/35556))
@@ -774,6 +843,7 @@ paddle.int64
- Add native support for Ascend series hardware
- Sub-graphs can access the Ascend 310 hardware for inference through Paddle-Lite NNAdapter support [#35226](https://github.com/PaddlePaddle/Paddle/pull/35226). For an example, see the [demo](https://github.com/PaddlePaddle/Paddle-Inference-Demo/tree/master/c%2B%2B/ascend310_lite_subgraph/image_classification_demo).
- Add Ascend 910 inference support [#34101](https://github.com/PaddlePaddle/Paddle/pull/34101)
+- Add TensorRT support for the pool3d OP. ([#36545](https://github.com/PaddlePaddle/Paddle/pull/36545))
### **(2) Function optimization**
@@ -782,7 +852,7 @@ paddle.int64
- Quantification support
- Refactor dynamic graph quantization inference pass, to support non-analog quantization OP and analog quantization OP. ([#35907](https://github.com/PaddlePaddle/Paddle/pull/35907))
- Add int8 for analog quantized OP matmul (the case where weights are multiplied by tensor). ([#34359](https://github.com/PaddlePaddle/Paddle/pull/34359))
-
+ - Fix a bug where the loss of the MobileNetV3 model became NaN during quantization-aware training because a quantization parameter was 0. ([#36763](https://github.com/PaddlePaddle/Paddle/pull/36763))
- API enhancements
- Refactor GO API based on new version of CAPI, [#33113](https://github.com/PaddlePaddle/Paddle/pull/33113). For the example, see the [demo](https://github.com/PaddlePaddle/Paddle-Inference-Demo/tree/master/go/resnet50).
@@ -818,6 +888,7 @@ paddle.int64
- Add support for int8 in TensorRT `qkv_context` plugin ([#34917](https://github.com/PaddlePaddle/Paddle/pull/34917), [#35504](https://github.com/PaddlePaddle/Paddle/pull/35504))
- Add support for TensorRT conv3d. ([#35507](https://github.com/PaddlePaddle/Paddle/pull/35507))
- Add support for broadcasting the input of the `multihead_matmul` fusion operator. ([#35780](https://github.com/PaddlePaddle/Paddle/pull/35780))
+ - Inference supports TensorRT 8 sparse inference; under the test environment, performance of the ERNIE model with variable-length inputs improves by 10%-30% at different batch sizes, and performance of the ResNeXt101_32x4d model improves by 10% at different batch sizes. ([#36659](https://github.com/PaddlePaddle/Paddle/pull/36659))
- Nvidia Jetson native support enhancements
  - Add Op support: for Jetson Nano/TX2, two devices with lower compute power, we made targeted optimizations and added support for 17 OPs such as `pool2d`, `pool_max`, `conv3d_transpose`, etc. ([#35378](https://github.com/PaddlePaddle/Paddle/pull/35378))
@@ -827,6 +898,7 @@ paddle.int64
- Kunlun XPU interface feature extensions
- Add the `set_xpu_device_id` interface to support setting the device number of the Kunlun chip in the inference ([#35572](https://github.com/PaddlePaddle/Paddle/pull/35572))
+- Add an input type check to the Inference Python `copy_from_cpu` interface, so that errors are reported early for inputs of the wrong type. ([#36552](https://github.com/PaddlePaddle/Paddle/pull/36552))
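The new check sits on the standard inference input path, roughly as in the sketch below; the model file names are placeholders, and the point is that `copy_from_cpu` now reports inputs of the wrong type early instead of failing later.

```python
import numpy as np
from paddle.inference import Config, create_predictor

# Placeholder file names; any exported inference model works here.
config = Config('./net_infer.pdmodel', './net_infer.pdiparams')
predictor = create_predictor(config)

input_name = predictor.get_input_names()[0]
input_handle = predictor.get_input_handle(input_name)

data = np.random.rand(4, 8).astype('float32')
input_handle.reshape(data.shape)
input_handle.copy_from_cpu(data)   # wrong input types are rejected here, before run()

predictor.run()
output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
print(output_handle.copy_to_cpu().shape)
```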
### **(3) Troubleshooting**
@@ -849,6 +921,14 @@ paddle.int64
- Fix a possible accuracy issue when running the ERNIE model with FP16 precision. ([#34771](https://github.com/PaddlePaddle/Paddle/pull/34711))
- Fix the incorrect output bug due to an inconsistent order of inputs when the ernie becomes longer. ([#33575](https://github.com/PaddlePaddle/Paddle/pull/33575))
- Fix a bug where the allocator function is abnormal in multi-stream state. ([#32932](https://github.com/PaddlePaddle/Paddle/pull/33575))
+- Fix a possible crash bug of ERNIE model under TRT8. ([#36769](https://github.com/PaddlePaddle/Paddle/pull/36769))
+- Fix possible crash and accuracy issues when using Pool and Slice. ([#36666](https://github.com/PaddlePaddle/Paddle/pull/36666))
+- Fix an accuracy bug of yolo_box op caused by a wrong formula. ([#36365](https://github.com/PaddlePaddle/Paddle/pull/36365))
+- Fix a bug where quantized matmul_v2 does not infer properly under TRT. ([#36821](https://github.com/PaddlePaddle/Paddle/pull/36821))
+- Fix a bug where quantized op is incorrectly added when quantizing matmul_v2. ([#36820](https://github.com/PaddlePaddle/Paddle/pull/36820))
+- Fix a bug with the operators batch_norm and elementwise_add reporting an error when TRT is enabled in 3D application scenarios. ([#36446](https://github.com/PaddlePaddle/Paddle/pull/36446))
+- Fix a bug where the prediction model saved by the high-level linear API cannot be optimized by Pass fusion. ([#36500](https://github.com/PaddlePaddle/Paddle/pull/36500))
+- Modify the MatmulV2ToMul Pass and re-restrict the conditions of the (matmul_v2 to mul) mapping Pass; add the MatmulV2ToMatmul Pass and restrict the conditions of the (matmul_v2 to matmul) mapping Pass (broadcast is not supported); modify the (matmul, mul) op_teller mapping conditions. ([#36652](https://github.com/PaddlePaddle/Paddle/pull/36652))
#### **Back-end capability fixing**