
Commit e13c56d

Updating hyperlinks
1 parent 33d1294 commit e13c56d

33 files changed, +88 -99 lines changed


AI-and-Analytics/End-to-end-Workloads/Census/README.md

Lines changed: 2 additions & 2 deletions

@@ -1,4 +1,4 @@
-# End-to-end machine learning workload: `Census` Sample
+# End-to-end machine learning workload: `Census` Sample

This sample code illustrates how to use Intel® Distribution of Modin for ETL operations and the ridge regression algorithm from the Intel® oneAPI Data Analytics Library (oneDAL)-accelerated scikit-learn library to build and run an end-to-end machine learning workload. Both the Intel Distribution of Modin and the oneDAL-accelerated scikit-learn library are available together in the [Intel AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html). This sample code demonstrates how to seamlessly run the end-to-end census workload using the toolkit, without any external dependencies.

@@ -37,7 +37,7 @@ You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi)

### Activate conda environment With Root Access

-Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and Intel® Distribution of Modin environment installation (https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in Linux shell to your oneapi installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a super user. If you customized the installation folder, the `setvars.sh` file is in your custom folder.
+Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and the [Intel® Distribution of Modin environment installation](https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a super user. If you customized the installation folder, the `setvars.sh` file is in your custom folder.

Activate the conda environment with the following command:
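
For context, here is a minimal sketch of this kind of workload, assuming the `modin.pandas` drop-in API and the `patch_sklearn()` entry point from the Intel Extension for Scikit-learn; the file path and column names are illustrative, not taken from the sample:

```python
# Hypothetical end-to-end sketch: Modin for ETL, oneDAL-accelerated ridge regression.
import modin.pandas as pd
from sklearnex import patch_sklearn

patch_sklearn()  # route supported scikit-learn estimators through oneDAL

from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

df = pd.read_csv("census.csv").dropna()      # ETL with the pandas-compatible API
X = df.drop(columns=["income"]).to_numpy()   # illustrative feature/target split
y = df["income"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```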

AI-and-Analytics/Features-and-Functionality/IntelPyTorch_Extensions_AutoMixedPrecision/README.md

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@

Intel Extension for PyTorch is a Python package that extends the official PyTorch. It is designed to improve the out-of-the-box user experience of PyTorch on CPU while achieving good performance. The extension also serves as the PR (Pull Request) buffer for the Intel PyTorch framework dev team. The PR buffer will contain functions and optimizations (for example, taking advantage of Intel's new hardware features).

-For comprehensive instructions regarding Intel Extension for PyTorch, go to https://github.com/intel/intel-extension-for-pytorch.
+For comprehensive instructions, go to the GitHub repo for [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).

| Optimized for | Description
|:--- |:---
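
As background for what this sample demonstrates, here is a minimal sketch of auto mixed precision with the extension, assuming the `ipex.optimize()` API and PyTorch's CPU autocast; the model and shapes are illustrative:

```python
# Hypothetical sketch: IPEX optimization plus bfloat16 auto mixed precision on CPU.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(64, 10).eval()
model = ipex.optimize(model, dtype=torch.bfloat16)  # apply IPEX weight/graph optimizations

x = torch.randn(1, 64)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model(x)  # eligible ops run in bfloat16 under autocast
print(out.shape)
```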

AI-and-Analytics/Features-and-Functionality/IntelPyTorch_TorchCCL_Multinode_Training/README.md

Lines changed: 4 additions & 2 deletions

@@ -1,10 +1,12 @@
-# `Intel Extension for PyTorch Getting Started` Sample
+# `Intel Extension for PyTorch Getting Started` Sample

torch-ccl holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library (oneCCL).

Intel® oneCCL (collective communications library) is a library for efficient distributed deep learning training that implements collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the oneCCL documentation.

-For comprehensive instructions regarding distributed training with oneCCL in PyTorch, go to https://github.com/intel/torch-ccl and https://github.com/intel/optimized-models/tree/master/pytorch/distributed.
+For comprehensive instructions regarding distributed training with oneCCL in PyTorch, go to the following GitHub repos:
+* [PyTorch and CCL](https://github.com/intel/torch-ccl)
+* [PyTorch distributed training](https://github.com/intel/optimized-models/tree/master/pytorch/distributed)

| Optimized for | Description
|:--- |:---
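
For orientation, a minimal sketch of what those instructions cover, assuming the `torch_ccl` package name and the MPI-provided `PMI_RANK`/`PMI_SIZE` environment variables used in Intel's examples; these names are assumptions, not taken from this repo:

```python
# Hypothetical multinode setup with the oneCCL bindings for PyTorch.
import os
import torch
import torch.distributed as dist
import torch_ccl  # noqa: F401  # importing registers the "ccl" backend (assumed package name)

dist.init_process_group(
    backend="ccl",
    rank=int(os.environ.get("PMI_RANK", "0")),        # set by the MPI launcher
    world_size=int(os.environ.get("PMI_SIZE", "1")),
)

model = torch.nn.parallel.DistributedDataParallel(torch.nn.Linear(16, 4))
loss = model(torch.randn(8, 16)).sum()
loss.backward()  # gradients are allreduced across ranks via oneCCL
```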

AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_Horovod_Multinode_Training/README.md

Lines changed: 2 additions & 2 deletions

@@ -1,4 +1,4 @@
-# `Distributed TensorFlow with Horovod` Sample
+# `Distributed TensorFlow with Horovod` Sample
Today's modern computer systems are becoming heavily distributed, so it is important to capitalize on scaling techniques to maximize the efficiency and performance of neural network training, a resource-intensive process.

| Optimized for | Description

@@ -33,7 +33,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
## Build and Run the Sample

### Running Samples In DevCloud (Optional)
-If running a sample in the Intel DevCloud, please follow the below steps to build the python environment. Remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the Intel® oneAPI Base Toolkit Get Started Guide (https://devcloud.intel.com/oneapi/get-started/base-toolkit/)
+If running a sample in the Intel DevCloud, follow the steps below to build the Python environment. Remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the [Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/).

### Pre-requirement
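
For orientation, a minimal sketch of Horovod-distributed Keras training (standard Horovod API; the model and data are illustrative):

```python
# Hypothetical sketch: data-parallel Keras training with Horovod.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one process per rank, e.g. launched with horovodrun or mpirun

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(loss="mse", optimizer=opt)

x = tf.random.normal((32, 4))
y = tf.random.normal((32, 10))
model.fit(x, y, epochs=1,
          callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
          verbose=1 if hvd.rank() == 0 else 0)  # only rank 0 prints progress
```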

AI-and-Analytics/Getting-Started-Samples/IntelModin_GettingStarted/README.md

Lines changed: 2 additions & 2 deletions

@@ -1,4 +1,4 @@
-# `Intel Modin Getting Started` Sample
+# `Intel Modin Getting Started` Sample
This Getting Started sample code shows how to use distributed pandas via the Modin package. It demonstrates how to use software products that can be found in the [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

| Optimized for | Description

@@ -41,7 +41,7 @@ You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi)

### Activate conda environment With Root Access

-Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and Intel Distribution of Modin environment installation (https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in Linux shell to your oneapi installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.
+Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script and the [Intel Distribution of Modin environment installation](https://software.intel.com/content/www/us/en/develop/articles/installing-ai-kit-with-conda.html). Then navigate in a Linux shell to your oneAPI installation path, typically `/opt/intel/oneapi/` when installed as root or sudo, and `~/intel/oneapi/` when not installed as a superuser. If you customized the installation folder, the `setvars.sh` file is in your custom folder.

Activate the conda environment with the following command:
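
As a quick illustration of the drop-in API (standard Modin usage; the data here is synthetic):

```python
# Hypothetical sketch: Modin is a drop-in replacement for the pandas API.
import modin.pandas as pd  # same API as pandas, partitioned execution underneath

df = pd.DataFrame({"a": range(100_000), "b": range(100_000)})
print(df.describe())                          # familiar pandas operations
print(df.groupby(df["a"] % 10).sum().head())  # distributed by Modin behind the scenes
```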

AI-and-Analytics/Getting-Started-Samples/IntelPyTorch_GettingStarted/README.md

Lines changed: 2 additions & 2 deletions

@@ -1,4 +1,4 @@
-# `PyTorch HelloWorld` Sample
+# `PyTorch HelloWorld` Sample
PyTorch* is a very popular framework for deep learning, and Intel and Facebook* have collaborated for years to boost PyTorch* CPU performance. The official PyTorch has been optimized using oneAPI Deep Neural Network Library (oneDNN) primitives by default. This sample demonstrates how to train a PyTorch model and shows how Intel-optimized PyTorch* enables Intel® DNNL calls by default.

| Optimized for | Description

@@ -32,7 +32,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https

## How to Build and Run
### Running Samples In DevCloud (Optional)
-If running a sample in the Intel DevCloud, please follow the below steps to build the python environment. Also, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the Intel® oneAPI Base Toolkit Get Started Guide (https://devcloud.intel.com/oneapi/get-started/base-toolkit/)
+If running a sample in the Intel DevCloud, follow the steps below to build the Python environment. Also, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the [Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/).

1. Pre-requirement
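
To see that oneDNN primitives are in play, one can enable oneDNN's verbose log before running any ops. A minimal sketch; `DNNL_VERBOSE` is a standard oneDNN environment variable, and the tiny model is illustrative:

```python
# Hypothetical check that PyTorch is dispatching to oneDNN (DNNL) by default.
import os
os.environ["DNNL_VERBOSE"] = "1"  # must be set before any oneDNN primitive runs

import torch
print("mkldnn/oneDNN available:", torch.backends.mkldnn.is_available())

conv = torch.nn.Conv2d(3, 8, kernel_size=3)
x = torch.randn(1, 3, 32, 32)
_ = conv(x)  # with verbose on, oneDNN prints one log line per primitive executed
```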

AI-and-Analytics/Getting-Started-Samples/IntelTensorFlow_GettingStarted/README.md

Lines changed: 2 additions & 2 deletions

@@ -1,4 +1,4 @@
-# `TensorFlow HelloWorld` Sample
+# `TensorFlow HelloWorld` Sample
TensorFlow* is a widely used machine learning framework in the deep learning arena, demanding efficient utilization of computational resources. To take full advantage of Intel® architecture and to extract maximum performance, the TensorFlow framework has been optimized using Intel® Deep Neural Network Library (Intel® DNNL) primitives. This sample demonstrates how to train an example neural network and shows how Intel-optimized TensorFlow enables Intel® DNNL calls by default.

| Optimized for | Description

@@ -42,7 +42,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
## Build and Run the Sample

### Running Samples In DevCloud (Optional)
-If running a sample in the Intel DevCloud, please follow the below steps to build the python environment. Also, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the Intel® oneAPI Base Toolkit Get Started Guide (https://devcloud.intel.com/oneapi/get-started/base-toolkit/)
+If running a sample in the Intel DevCloud, follow the steps below to build the Python environment. Also, remember that you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode. For more information, see the [Intel® oneAPI Base Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/base-toolkit/).

### Pre-requirement
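
Analogously to the PyTorch sample, oneDNN's verbose log can confirm that Intel-optimized TensorFlow routes ops through oneDNN; a minimal sketch with an illustrative toy model:

```python
# Hypothetical sketch: enable the oneDNN verbose log, then run a tiny training step.
import os
os.environ["DNNL_VERBOSE"] = "1"  # oneDNN prints a line per primitive executed

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((64, 8))
y = tf.random.normal((64, 1))
model.fit(x, y, epochs=1, verbose=0)  # oneDNN primitive logs appear on stdout
```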

AI-and-Analytics/Getting-Started-Samples/iLiT-Sample-for-Tensorflow/README.md

Lines changed: 1 addition & 1 deletion

@@ -54,7 +54,7 @@ We will learn how to train a CNN model based on Keras with Tensorflow, use iLiT

### Running in Devcloud

-If running a sample in the Intel DevCloud, please follow the below steps to build the python environment. Also, remember that you must specify the compute node (CPU) as well as whether to run in batch or interactive mode. For more information, see the [Intel(R) oneAPI AI Analytics Toolkit Get Started Guide] https://devcloud.intel.com/oneapi/get-started/analytics-toolkit/)
+If running a sample in the Intel DevCloud, follow the steps below to build the Python environment. Also, remember that you must specify the compute node (CPU) as well as whether to run in batch or interactive mode. For more information, see the [Intel(R) oneAPI AI Analytics Toolkit Get Started Guide](https://devcloud.intel.com/oneapi/get-started/analytics-toolkit/).

### Running in Local Server
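
For orientation, a heavily hedged sketch of the quantization step this sample walks through, assuming iLiT's YAML-driven `Quantization` API; the config path and saved-model name are illustrative, and the exact accepted model formats may differ by version:

```python
# Hypothetical sketch: post-training quantization of a trained Keras model with iLiT.
import ilit
import tensorflow as tf

model = tf.keras.models.load_model("trained_cnn")  # illustrative saved model
quantizer = ilit.Quantization("./conf.yaml")       # YAML holds tuning/accuracy settings
quantized_model = quantizer(model)                 # returns a lower-precision model
```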

DirectProgramming/C++/CompilerInfrastructure/Intrinsics/README.md

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@ The intrinsic samples are designed to show how to utilize the intrinsics support

Intrinsics are assembly-coded functions that allow you to use C++ function calls and variables in place of assembly instructions. Intrinsics are expanded inline, eliminating function call overhead. While providing the same benefits as inline assembly, intrinsics improve code readability, assist instruction scheduling, and help when debugging. They provide access to instructions that cannot be generated using the standard constructs of the C and C++ languages and allow code to leverage performance-enhancing features unique to specific processors.

-Further information on intrinsics can be found here: https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/intrinsics.html#intrinsics_GUID-D70F9A9A-BAE1-4242-963E-C3A12DE296A1
+Further information on intrinsics can be found in the [Intel® C++ Compiler Developer Guide and Reference](https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/intrinsics.html#intrinsics_GUID-D70F9A9A-BAE1-4242-963E-C3A12DE296A1).

## Key Implementation Details

DirectProgramming/C++/GraphTraversal/MergesortOMP/README.md

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@

The merge sort algorithm is a comparison-based sorting algorithm. In this sample, we use a top-down implementation, which recursively splits the list into two halves (called sublists) until each sublist is of size 1. We then merge sublists two at a time to produce a sorted list. This sample can run in serial or in parallel with OpenMP* tasking (`#pragma omp task` and `#pragma omp taskwait`).

-For more details about merge sort algorithm and top-down implementation, please refer to http://en.wikipedia.org/wiki/Merge_sort.
+For more details about the merge sort algorithm and its top-down implementation, see the Wikipedia article on [merge sort](http://en.wikipedia.org/wiki/Merge_sort).

| Optimized for | Description
|:--- |:---
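
To make the recursion concrete, here is the described top-down algorithm as a short Python sketch (the sample itself is C++ with OpenMP tasking; this only illustrates the algorithm):

```python
# Illustrative top-down merge sort: split until sublists have size 1, then merge pairs.
def merge_sort(items):
    if len(items) <= 1:              # a sublist of size 1 is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # recursively sort each half
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0          # merge the two sorted sublists
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # -> [1, 2, 5, 5, 6, 9]
```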
