
Commit cbbb2cb: Update docs (#267)
Parent commit: 0d2067e

7 files changed (+4729, -4677 lines)


docs/tutorials/pytorch/language-modeling/bert-base-uncased.ipynb

Lines changed: 14 additions & 3 deletions
@@ -13,9 +13,15 @@
     "id": "69e00856",
     "metadata": {},
     "source": [
-     "This tutorial demonstrates how to quantize a BERT model with both static and dynamic post training quantization based on [Intel® Neural Compressor](https://github.com/intel/neural-compressor) and benchmark the quantized models. "
+     "This tutorial demonstrates how to quantize a BERT model with both static and dynamic post training quantization based on [Intel® Neural Compressor](https://github.com/intel/neural-compressor) and benchmark the quantized models. For better int8 performance benefit, multi-instance benchmarking with 4 cores/instance is recommended."
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "e0d4a9c5",
+   "metadata": {},
+   "source": []
+  },
   {
    "cell_type": "markdown",
    "id": "0cbd8bd4",
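
The sentence changed in the hunk above describes static and dynamic post-training quantization. Both approaches ultimately rest on the same affine int8 mapping, real ≈ scale * (q - zero_point); the sketch below illustrates that arithmetic in plain Python. It is illustrative math only, not Intel® Neural Compressor's implementation, and the helper names are hypothetical:

```python
# Illustrative affine (asymmetric) int8 quantization, as used conceptually
# by post-training quantization: real_value ~= scale * (q - zero_point).

def quant_params(rmin, rmax, qmin=-128, qmax=127):
    """Derive scale and zero-point so [rmin, rmax] maps onto [qmin, qmax]."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must include 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, qmin=-128, qmax=127):
    """Map float values to clamped int8 codes."""
    return [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]

def dequantize(qs, scale, zero_point):
    """Recover approximate float values from int8 codes."""
    return [scale * (q - zero_point) for q in qs]
```

The difference between static and dynamic post-training quantization is when `quant_params` is computed for activations: static quantization derives the ranges offline from a calibration dataset, while dynamic quantization computes them on the fly at inference time.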
@@ -525,7 +531,7 @@
    ],
    "metadata": {
     "kernelspec": {
-     "display_name": "Python 3 (ipykernel)",
+     "display_name": "Python 3.8.6 64-bit",
     "language": "python",
     "name": "python3"
    },
@@ -539,7 +545,12 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.13"
+   "version": "3.8.6"
+  },
+  "vscode": {
+   "interpreter": {
+    "hash": "f54fd8d6160ddfbc370985ee3ad2925997e28943a671b1747496a6859c59cd26"
+   }
   }
  },
  "nbformat": 4,
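
The doc change in the first hunk recommends multi-instance int8 benchmarking with 4 cores per instance. A minimal sketch of how such a core partition could be planned is below; the `numactl` pinning command strings are illustrative only, and the helper names (`plan_instances`, `launch_commands`, `run_benchmark.py`) are hypothetical rather than taken from the tutorial, which may instead drive this through Neural Compressor's own benchmark configuration:

```python
# Sketch: split a machine's physical cores into benchmark instances of
# 4 cores each, one pinned core range per instance.

def plan_instances(total_cores, cores_per_instance=4):
    """Return inclusive (start, end) core ranges, one per instance."""
    n_instances = total_cores // cores_per_instance
    return [
        (i * cores_per_instance, (i + 1) * cores_per_instance - 1)
        for i in range(n_instances)
    ]

def launch_commands(total_cores, script="run_benchmark.py"):
    """Build one core-pinned launch command per instance (illustrative)."""
    return [
        f"numactl --physcpubind={lo}-{hi} python {script}"
        for lo, hi in plan_instances(total_cores)
    ]
```

On a 16-core socket this yields four instances pinned to cores 0-3, 4-7, 8-11, and 12-15, so the instances benchmark in parallel without competing for the same cores.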

0 commit comments
