[ProcessGroup] Add docs_cn of new process group apis #4969
@@ -0,0 +1,27 @@
.. _cn_api_paddle_distributed_irecv:

irecv
-------------------------------

.. py:function:: paddle.distributed.irecv(tensor, src=None, group=None)

Asynchronously receives a tensor sent from another process.

Parameters
:::::::::
- tensor (Tensor) - the tensor to be received. Its data type should be float16, float32, float64, int32, or int64.
- src (int) - the global rank of the source process to receive from.
- group (Group, optional) - a Group instance returned by new_group, or None for the default global group. Default: None.

Returns
:::::::::
Returns a Task.

Note
:::::::::
Currently, only dynamic graph mode is supported.

Code example
:::::::::
COPY-FROM: paddle.distributed.irecv
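The COPY-FROM directive pulls the example from the English API docstring at build time. A minimal sketch of a matching isend/irecv pair, assuming two ranks started via ``python -m paddle.distributed.launch`` (the launch command and tensor values here are illustrative, not part of this PR):

.. code-block:: python

    # Illustrative sketch, not the docstring example copied by COPY-FROM.
    # Assumes: python -m paddle.distributed.launch --gpus 0,1 demo.py
    import paddle
    import paddle.distributed as dist

    dist.init_parallel_env()

    if dist.get_rank() == 0:
        data = paddle.to_tensor([7, 8, 9])
        task = dist.isend(data, dst=1)    # asynchronous send to rank 1
    else:
        data = paddle.to_tensor([1, 2, 3])
        task = dist.irecv(data, src=0)    # asynchronous receive from rank 0

    task.wait()                           # block until the transfer completes
    print(data)                           # both ranks now hold [7, 8, 9]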
@@ -0,0 +1,21 @@
.. _cn_api_distributed_is_initialized:

is_initialized
-------------------------------

.. py:function:: paddle.distributed.is_initialized()
Collaborator: Does is_initialized() take any parameters?

Contributor (Author): is_initialized() takes no parameters.
Checks whether the distributed environment has been initialized.

Parameters
:::::::::
None.

Returns
:::::::::
Returns True if the distributed environment has been initialized and the default communication group has been established; otherwise returns False.

Code example
:::::::::
COPY-FROM: paddle.distributed.is_initialized
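A minimal sketch of what the copied example might look like, assuming the script is run under ``paddle.distributed.launch`` so that ``init_parallel_env`` can set up the default group:

.. code-block:: python

    # Illustrative sketch; the published example is copied via COPY-FROM.
    import paddle
    import paddle.distributed as dist

    print(dist.is_initialized())   # False: the distributed environment is not set up yet

    dist.init_parallel_env()       # establishes the default communication group

    print(dist.is_initialized())   # True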
@@ -0,0 +1,28 @@
.. _cn_api_paddle_distributed_isend:

isend
-------------------------------

.. py:function:: paddle.distributed.isend(tensor, dst, group=None)

Asynchronously sends ``tensor`` to the process with the specified rank.

Parameters
:::::::::
- tensor (Tensor) - the tensor to send. Its data type should be float16, float32, float64, int32, or int64.
- dst (int) - the global rank of the destination process.
- group (Group, optional) - a Group instance returned by new_group, or None for the default global group. Default: None.

Returns
:::::::::
Returns a Task.
Collaborator: Same as above.
Note
:::::::::
Currently, only dynamic graph mode is supported.
Collaborator: WARNING

Collaborator: So dynamic/static graph unification is not possible yet? Will it be supported later?

Member: Static graph mode is not supported at the moment; isend/irecv can support dynamic/static graph unification later on.
Code example
:::::::::
COPY-FROM: paddle.distributed.isend
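As with ``irecv``, the published example is copied from the English docstring. A minimal two-rank sketch, assuming every ``isend`` is matched by an ``irecv`` posted on the peer rank:

.. code-block:: python

    # Illustrative sketch only.
    import paddle
    import paddle.distributed as dist

    dist.init_parallel_env()

    if dist.get_rank() == 0:
        data = paddle.to_tensor([1, 2, 3], dtype='int32')
        task = dist.isend(data, dst=1)    # non-blocking send; returns a Task
    else:
        data = paddle.zeros([3], dtype='int32')
        task = dist.irecv(data, src=0)    # matching receive posted on rank 1

    task.wait()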
@@ -0,0 +1,29 @@
.. _cn_api_paddle_distributed_reduce_scatter:

reduce_scatter
-------------------------------

.. py:function:: paddle.distributed.reduce_scatter(tensor, tensor_list, op=ReduceOp.SUM, group=None, use_calc_stream=True)

Reduces a list of tensors across the group, then scatters the results to all processes in the group.

Parameters
:::::::::
- tensor (Tensor) - the output tensor.
- tensor_list (list(Tensor)) - the list of tensors to reduce and scatter.
- op (ReduceOp.SUM|ReduceOp.MAX|ReduceOp.MIN|ReduceOp.PROD) - the reduction operation. Default: ReduceOp.SUM.
- group (Group, optional) - the communication group; if None, the default global group is used.
- use_calc_stream (bool, optional) - whether to perform the communication on the calculation stream or on the communication stream. Default: True, meaning the calculation stream.
Collaborator: We need to explain somewhere how use_calc_stream differs from PyTorch's async argument.

Member: Yes, every API that involves the calculation and communication streams needs to cover this point.
Returns
:::::::::
Returns a Task.

Note
:::::::::
Currently, only dynamic graph mode is supported.

Code example
:::::::::
COPY-FROM: paddle.distributed.reduce_scatter
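A minimal sketch of what the copied example might look like for two ranks (the tensor values and the int64 dtype are illustrative; ``ReduceOp.SUM`` is the default reduction):

.. code-block:: python

    # Illustrative sketch only.
    import paddle
    import paddle.distributed as dist

    dist.init_parallel_env()

    if dist.get_rank() == 0:
        t1 = paddle.to_tensor([0, 1])
        t2 = paddle.to_tensor([2, 3])
    else:
        t1 = paddle.to_tensor([4, 5])
        t2 = paddle.to_tensor([6, 7])

    output = paddle.zeros([2], dtype='int64')
    dist.reduce_scatter(output, [t1, t2])   # sum element-wise across ranks, then scatter
    print(output)
    # rank 0 receives [4, 6]  (= [0, 1] + [4, 5])
    # rank 1 receives [8, 10] (= [2, 3] + [6, 7])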
Review comment: Add a link for Task.
Reply: There is currently no description of Task. If one is added, it should be done in the next PR, together with updates to all of the documents involved.