Commit bec19cc

[CodeStyle][Typos][D-[6-10]] Fix typo("datas","deocder","dafault","decribe","decribes") (#7612)
* fix-c19-c23
* fix-c6-c7-c24-c26
* fix-d6-d10
* fix-some-qe
1 parent 9e9b535 commit bec19cc

5 files changed (+9, -12 lines)

_typos.toml (2 additions, 5 deletions)

@@ -3,6 +3,7 @@
 extend-exclude = [
     # Skip `Accuray ` check in these files
     "docs/practices/cv/3D_image_classification_from_CT_scans.ipynb",
+
 ]

 [default]
@@ -23,6 +24,7 @@ Clas = "Clas"
 arange = "arange"
 unsupport = "unsupport"
 Nervana = "Nervana"
+datas = "datas"

 # These words need to be fixed
 Creenshot = "Creenshot"
@@ -36,11 +38,6 @@ Similarily = "Similarily"
 Simle = "Simle"
 Sovler = "Sovler"
 Successed = "Successed"
-dafault = "dafault"
-datas = "datas"
-decribe = "decribe"
-decribes = "decribes"
-deocder = "deocder"
 desgin = "desgin"
 desginated = "desginated"
 desigin = "desigin"

docs/design/concepts/tensor.md (2 additions, 2 deletions)

@@ -116,12 +116,12 @@ Before writing code, please make sure you already look through Majel Source Code


 ### Memory Management
-`Allocation` manages a block of memory in device(CPU/GPU). We use `Place` to decribe memory location. The details of memory allocation and deallocation are implemented in `Allocator` and `DeAllocator`. Related low-level API such as `hl_malloc_device()` and `hl_malloc_host()` are provided by Paddle.
+`Allocation` manages a block of memory in device(CPU/GPU). We use `Place` to describe memory location. The details of memory allocation and deallocation are implemented in `Allocator` and `DeAllocator`. Related low-level API such as `hl_malloc_device()` and `hl_malloc_host()` are provided by Paddle.

 ### Dim and Array
 #### Dim

-`Dim` decribes the dimension information of an array.
+`Dim` describes the dimension information of an array.

 `DDimVar` is an alias of a specializd class of boost.variant class template.
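The corrected passage describes three related concepts: `Allocation` owns a block of device memory, `Place` says where that memory lives, and `Dim` describes an array's dimensions. The sketch below is only an illustrative Python analogue of that design, with made-up fields and no real device allocation; Paddle's actual `Allocation`, `Place`, and `Dim` are C++ classes in its core.

```python
# Illustrative analogue only: the real Allocation/Place/Dim live in Paddle's C++ core.
from dataclasses import dataclass
from functools import reduce


@dataclass(frozen=True)
class Place:
    """Describes where memory lives, e.g. CPU or a specific GPU."""
    device: str = "cpu"   # "cpu" or "gpu"
    device_id: int = 0


@dataclass(frozen=True)
class Dim:
    """Describes the dimension information of an array."""
    sizes: tuple

    def numel(self) -> int:
        # Total number of elements described by this Dim.
        return reduce(lambda a, b: a * b, self.sizes, 1)


class Allocation:
    """Manages a block of memory on a device; here a bytearray stands in for it."""

    def __init__(self, nbytes: int, place: Place):
        self.place = place
        # The real code would go through hl_malloc_device()/hl_malloc_host().
        self.buffer = bytearray(nbytes)


# Example: a float32 array of shape (2, 3) needs 2 * 3 * 4 bytes on the CPU.
alloc = Allocation(Dim((2, 3)).numel() * 4, Place("cpu"))
```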

docs/dev_guides/style_guide_and_references/error_message_writing_specification_cn.md (1 addition, 1 deletion)

@@ -266,7 +266,7 @@ PADDLE_ENFORCE_EQ(

 ```c++
 PADDLE_ENFORCE(
-    tmp == *data_type || *data_type == dafault_data_type,
+    tmp == *data_type || *data_type == default_data_type,
     phi::errors::InvalidArgument(
         "The DataType of %s Op's duplicable Variable %s must be "
         "consistent. The current variable type is (%s), but the "

docs/guides/flags/memory_en.rst (2 additions, 2 deletions)

@@ -205,7 +205,7 @@ FLAGS_initial_gpu_memory_in_mb=4096 will allocate 4 GB as initial GPU chunk.
 Note
 -------
 If you set this flag, the memory size set by FLAGS_fraction_of_gpu_memory_to_use will be overrided by this flag, PaddlePaddle will allocate the initial gpu memory with size specified by this flag.
-If you don't set this flag, the dafault value 0 will disable this GPU memory strategy. PaddlePaddle will use FLAGS_fraction_of_gpu_memory_to_use to allocate the initial GPU chunk.
+If you don't set this flag, the default value 0 will disable this GPU memory strategy. PaddlePaddle will use FLAGS_fraction_of_gpu_memory_to_use to allocate the initial GPU chunk.



@@ -246,7 +246,7 @@ FLAGS_reallocate_gpu_memory_in_mb=1024 will re-allocate 1 GB if run out of GPU m
 Note
 -------
 If this flag is set, the memory size set by FLAGS_fraction_of_gpu_memory_to_use will be overrided by this flag, PaddlePaddle will re-allocate the gpu memory with size specified by this flag.
-If you don't set this flag, the dafault value 0 will disable this GPU memory strategy. PaddlePaddle will use FLAGS_fraction_of_gpu_memory_to_use to re-allocate GPU memory.
+If you don't set this flag, the default value 0 will disable this GPU memory strategy. PaddlePaddle will use FLAGS_fraction_of_gpu_memory_to_use to re-allocate GPU memory.


 FLAGS_use_pinned_memory
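As the corrected notes say, leaving these flags at their default value 0 disables the chunked pre-allocation strategy. A minimal sketch of one common way to set them follows, assuming the usual pattern of exporting them as environment variables before `paddle` is imported so the allocator sees them at initialization; whether your particular Paddle build honors them at runtime should be verified separately.

```python
# Minimal sketch: export the flags before importing paddle.
# Flag names come from the doc above; the values are just examples.
import os

os.environ["FLAGS_initial_gpu_memory_in_mb"] = "4096"     # initial 4 GB chunk
os.environ["FLAGS_reallocate_gpu_memory_in_mb"] = "1024"  # grow by 1 GB chunks

import paddle  # noqa: E402  (imported after setting the flags on purpose)

print(paddle.device.get_device())
```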

docs/practices/nlp/transformer_in_English-to-Spanish.ipynb (2 additions, 2 deletions)

@@ -1424,8 +1424,8 @@
 " encoder_outputs = self.encoder(encoder_emb)\n",
 "\n",
 " # 解码器\n",
-" deocder_emb = self.ps2(decoder_inputs)\n",
-" decoder_outputs = self.decoder(deocder_emb, encoder_outputs)\n",
+" decoder_emb = self.ps2(decoder_inputs)\n",
+" decoder_outputs = self.decoder(decoder_emb, encoder_outputs)\n",
 "\n",
 " # dropout\n",
 " out = self.drop(decoder_outputs)\n",
