
Commit 2ec4403

fixing example code
1 parent e97a365 commit 2ec4403

File tree

1 file changed: +3 -0 lines changed

torchtext/nn/modules/multiheadattention.py

Lines changed: 3 additions & 0 deletions
@@ -20,6 +20,7 @@ def __init__(self, nhead, in_proj_container, attention_layer, out_proj, batch_fi

         Examples::
             >>> import torch
+            >>> from torchtext.nn import MultiheadAttentionContainer, InProjContainer, ScaledDotProduct
             >>> embed_dim, num_heads, bsz = 10, 5, 64
             >>> in_proj_container = InProjContainer(torch.nn.Linear(embed_dim, embed_dim),
                                                     torch.nn.Linear(embed_dim, embed_dim),
@@ -122,6 +123,7 @@ def __init__(self, dropout=0.0, batch_first=False):
             as `(batch, seq, feature)`. Default: ``False``

         Examples::
+            >>> import torch, torchtext
             >>> SDP = torchtext.nn.ScaledDotProduct(dropout=0.1)
             >>> q = torch.randn(21, 256, 3)
             >>> k = v = torch.randn(21, 256, 3)
@@ -245,6 +247,7 @@ def forward(self,
             value (Tensor): The values to be projected.

         Examples::
+            >>> import torch
             >>> from torchtext.nn import InProjContainer
             >>> embed_dim, bsz = 10, 64
             >>> in_proj_container = InProjContainer(torch.nn.Linear(embed_dim, embed_dim),
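With the added import, the container example in the first hunk becomes self-contained. For reference, a sketch of how that example runs end to end, in the same doctest style: the MultiheadAttentionContainer construction, the input tensors, and the printed output shape below are filled in from the surrounding docstring in this file rather than shown in this diff, so treat them as an illustration of the API, not part of the change.

>>> import torch
>>> from torchtext.nn import MultiheadAttentionContainer, InProjContainer, ScaledDotProduct
>>> embed_dim, num_heads, bsz = 10, 5, 64
>>> # separate query/key/value projections (same pattern as the third hunk's InProjContainer example)
>>> in_proj_container = InProjContainer(torch.nn.Linear(embed_dim, embed_dim),
...                                     torch.nn.Linear(embed_dim, embed_dim),
...                                     torch.nn.Linear(embed_dim, embed_dim))
>>> MHA = MultiheadAttentionContainer(num_heads,
...                                   in_proj_container,
...                                   ScaledDotProduct(),
...                                   torch.nn.Linear(embed_dim, embed_dim))
>>> # inputs are (seq_len, batch, embed_dim) since batch_first defaults to False
>>> query = torch.rand((21, bsz, embed_dim))
>>> key = value = torch.rand((21, bsz, embed_dim))
>>> attn_output, attn_weights = MHA(query, key, value)
>>> print(attn_output.shape)
torch.Size([21, 64, 10])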
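Similarly, the second hunk's ScaledDotProduct example only needs the new `import torch, torchtext` line to run on its own. A sketch of the completed snippet; the final call and the printed shapes are assumed from the module's documented behavior rather than taken from this diff.

>>> import torch, torchtext
>>> SDP = torchtext.nn.ScaledDotProduct(dropout=0.1)
>>> # with batch_first=False, inputs are (seq_len, batch * num_heads, head_dim)
>>> q = torch.randn(21, 256, 3)
>>> k = v = torch.randn(21, 256, 3)
>>> attn_output, attn_weights = SDP(q, k, v)
>>> print(attn_output.shape, attn_weights.shape)
torch.Size([21, 256, 3]) torch.Size([256, 21, 21])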
