Commit 99fceb8

Update PyTorch_Hello_World.py
1 parent 6504e82 commit 99fceb8

File tree

1 file changed: 1 addition, 1 deletion

AI-and-Analytics/Getting-Started-Samples/IntelPyTorch_GettingStarted/PyTorch_Hello_World.py

Lines changed: 1 addition & 1 deletion
@@ -110,7 +110,7 @@ def main():
 
     model.eval()
     '''
-    1. User is suggested to use JIT mode to get best performance with Intel® Deep Neural Network Library (Intel® DNNL) with minimum change of Pytorch code. User may need to pass an explicit flag or invoke a specific Intel DNNL optimization pass. The PyTorch DNNL JIT backend is under development (RFC link https://github.com/pytorch/pytorch/issues/23657), so the example below is given in imperative mode.
+    1. User is suggested to use JIT mode to get best performance with Intel Deep Neural Network Library (Intel DNNL) with minimum change of Pytorch code. User may need to pass an explicit flag or invoke a specific Intel DNNL optimization pass. The PyTorch DNNL JIT backend is under development (RFC link https://github.com/pytorch/pytorch/issues/23657), so the example below is given in imperative mode.
     2. To have model accelerated by Intel DNNL under imperative mode, user needs to explicitly insert format conversion for Intel DNNL operations using tensor.to_mkldnn() and to_dense(). For best result, user needs to insert the format conversion on the boundary of a sequence of Intel DNNL operations. This could boost performance significantly.
     3. For inference task, user needs to prepack the model's weight using mkldnn_utils.to_mkldnn(model) to save the weight format conversion overhead. It could bring good performance gain sometime for single batch inference.
     '''
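The docstring in the diff describes an imperative-mode workflow: insert layout conversions with `tensor.to_mkldnn()` / `to_dense()` at the boundary of the DNNL operation sequence (item 2), and prepack inference weights with `mkldnn_utils.to_mkldnn(model)` (item 3). A minimal sketch of that flow is below; the small `Conv2d` model and tensor shapes are illustrative assumptions, not part of the sample file.

```python
import torch
from torch.utils import mkldnn as mkldnn_utils

# Illustrative toy model (assumption; the sample uses its own model).
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
model.eval()

x = torch.randn(1, 3, 16, 16)

with torch.no_grad():
    # Item 2: convert to the MKL-DNN (Intel DNNL) layout at the boundary
    # of the DNNL operation sequence, then back to a dense tensor.
    y = model(x.to_mkldnn()).to_dense()

    # Item 3: prepack the weights once to avoid repeated weight-format
    # conversion on every inference call.
    packed = mkldnn_utils.to_mkldnn(model)
    y_packed = packed(x.to_mkldnn()).to_dense()

print(y.shape)  # torch.Size([1, 8, 16, 16])
```

Both calls compute the same result; the prepacked variant mainly saves the per-call weight conversion overhead noted in item 3.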
