Commit 18310ad

Trade Mark and Branding updates 09/2021 (#672)
* IPLDT of 09.20.2021
* updates
* Update README.md
* Update README.md
* Update TensorFlow_Multinode_Training_with_Horovod.py
* Update PyTorch_Hello_World.py
* Update CMakeLists.txt
* Update CMakeLists.txt
* Update TensorFlow_Multinode_Training_with_Horovod.py
* Update iso3dfd.cpp
* Update black_scholes.cpp
* Update black_scholes.hpp
* Update sample.json
* Update sample.json
1 parent 743f62a commit 18310ad

54 files changed: +109 / -112 lines


AI-and-Analytics/End-to-end-Workloads/Census/README.md

Lines changed: 8 additions & 8 deletions

@@ -1,26 +1,26 @@
 # End-to-end Machine Learning Workload: `Census` Sample

-This sample code illustrates how to use Intel® Distribution of Modin for ETL operations and ridge regression algorithm from the Intel® extension of scikit-learn library to build and run an end to end machine learning workload. Both Intel® Distribution of Modin and Intel® Extension for Scikit-learn libraries are available together in [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html). This sample code demonstrates how to seamlessly run the end-to-end census workload using the toolkit, without any external dependencies.
+This sample code illustrates how to use Intel® Distribution of Modin* for ETL operations and ridge regression algorithm from the Intel® extension of scikit-learn library to build and run an end to end machine learning workload. Both Intel Distribution of Modin* and Intel® Extension for Scikit-learn libraries are available together in [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html). This sample code demonstrates how to seamlessly run the end-to-end census workload using the toolkit, without any external dependencies.

 | Optimized for | Description
 | :--- | :---
 | OS | 64-bit Linux: Ubuntu 18.04 or higher
 | Hardware | Intel Atom® Processors; Intel® Core™ Processor Family; Intel® Xeon® Processor Family; Intel® Xeon® Scalable Performance Processor Family
-| Software | Intel® AI Analytics Toolkit (Python version 3.7, Intel® Distribution of Modin , Ray, Intel® Extension for Scikit-Learn, NumPy)
-| What you will learn | How to use Intel® Distribution of Modin and Intel® Extension for Scikit-learn to build end to end ML workloads and gain performance.
+| Software | Intel® AI Analytics Toolkit (Python version 3.7, Intel Distribution of Modin* , Ray, Intel® Extension for Scikit-Learn, NumPy)
+| What you will learn | How to use Intel Distribution of Modin* and Intel® Extension for Scikit-learn to build end to end ML workloads and gain performance.
 | Time to complete | 15-18 minutes

 ## Purpose
-Intel® Distribution of Modin uses Ray to provide an effortless way to speed up your Pandas notebooks, scripts and libraries. Unlike other distributed DataFrame libraries, Intel® Distribution of Modin provides seamless integration and compatibility with existing Pandas code. Intel(R) Extension for Scikit-learn dynamically patches scikit-learn estimators to use Intel(R) oneAPI Data Analytics Library as the underlying solver, while getting the same solution faster.
+Intel Distribution of Modin* uses Ray to provide an effortless way to speed up your Pandas notebooks, scripts and libraries. Unlike other distributed DataFrame libraries, Intel Distribution of Modin* provides seamless integration and compatibility with existing Pandas code. Intel(R) Extension for Scikit-learn dynamically patches scikit-learn estimators to use Intel(R) oneAPI Data Analytics Library as the underlying solver, while getting the same solution faster.

 #### Model and dataset
-In this sample, you will use Intel® Distribution of Modin to ingest and process U.S. census data from 1970 to 2010 in order to build a ridge regression based model to find the relation between education and the total income earned in the US.
+In this sample, you will use Intel Distribution of Modin* to ingest and process U.S. census data from 1970 to 2010 in order to build a ridge regression based model to find the relation between education and the total income earned in the US.
 Data transformation stage normalizes the income to the yearly inflation, balances the data such that each year has a similar number of data points, and extracts the features from the transformed dataset. The feature vectors are fed into the ridge regression model to predict the education of each sample.

 Dataset is from IPUMS USA, University of Minnesota, [www.ipums.org](https://ipums.org/) (Steven Ruggles, Sarah Flood, Ronald Goeken, Josiah Grover, Erin Meyer, Jose Pacas and Matthew Sobek. IPUMS USA: Version 10.0 [dataset]. Minneapolis, MN: IPUMS, 2020. https://doi.org/10.18128/D010.V10.0)

 ## Key Implementation Details
-This end-to-end workload sample code is implemented for CPU using the Python language. With the installation of Intel AI Analytics Toolkit, the conda environment is prepared with Python version 3.7, Intel® Distribution of Modin , Ray, Intel® Extension for Scikit-Learn, NumPy following which the sample code can be directly run using the underlying steps in this README.
+This end-to-end workload sample code is implemented for CPU using the Python language. With the installation of Intel AI Analytics Toolkit, the conda environment is prepared with Python version 3.7, Intel Distribution of Modin* , Ray, Intel® Extension for Scikit-Learn, NumPy following which the sample code can be directly run using the underlying steps in this README.

 ## License

@@ -29,8 +29,8 @@ Code samples are licensed under the MIT license. See

 Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).

-## Building Intel® Distribution of Modin and Intel® Extension for Scikit-learn for CPU to build and run end-to-end workload
-Intel® Distribution of Modin and Intel® Extension for Scikit-learn is ready for use once you finish the Intel AI Analytics Toolkit installation with the Conda Package Manager.
+## Building Intel Distribution of Modin* and Intel® Extension for Scikit-learn for CPU to build and run end-to-end workload
+Intel Distribution of Modin* and Intel® Extension for Scikit-learn is ready for use once you finish the Intel AI Analytics Toolkit installation with the Conda Package Manager.

 You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi), and the Intel® oneAPI Toolkit [Installation Guide](https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/installation/install-using-package-managers/conda/install-intel-ai-analytics-toolkit-via-conda.html) for conda environment setup and installation steps.
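The Purpose paragraph in the diff above says Intel® Extension for Scikit-learn dynamically patches scikit-learn estimators to use oneDAL as the underlying solver. A minimal sketch of that pattern (not the sample's actual code; the Modin* ETL half is omitted, and the two feature columns are synthetic stand-ins for the IPUMS data):

```python
# Patch scikit-learn FIRST: estimators imported afterwards resolve to the
# oneDAL-backed implementations while keeping the stock scikit-learn API.
from sklearnex import patch_sklearn
patch_sklearn()

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the census features (YEAR, INCTOT) and
# the regression target (EDUC).
X = np.column_stack([rng.integers(1970, 2011, 100), rng.random(100) * 1e5])
y = rng.random(100) * 10

model = Ridge().fit(X, y)       # solved by the patched, accelerated Ridge
pred = model.predict(X)
```

Because the patching is transparent, existing Pandas/scikit-learn code runs unchanged; only the two patching lines are added.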

AI-and-Analytics/End-to-end-Workloads/README.md

Lines changed: 2 additions & 2 deletions

@@ -23,8 +23,8 @@ Third party program Licenses can be found here:

 | Components | Folder | Description
 | ------------------ | ---------------------- | -----------
-| Modin, oneDAL, IDP | [Census](Census) | Use Intel® Distribution of Modin to ingest and process U.S. census data from 1970 to 2010 in order to build a ridge regression based model to find the relation between education and the total income earned in the US.
-| OpenVino | [LidarObjectDetection-PointPillars](LidarObjectDetection-PointPillars) | Performs 3D object detection and classification using point cloud data from a LIDAR sensor as input.
+| Modin*, oneDAL, IDP | [Census](Census) | Use Intel® Distribution of Modin* to ingest and process U.S. census data from 1970 to 2010 in order to build a ridge regression based model to find the relation between education and the total income earned in the US.
+| OpenVino | [LidarObjectDetection-PointPillars](LidarObjectDetection-PointPillars) | Performs 3D object detection and classification using point cloud data from a LIDAR sensor as input.

 # Using Samples in the Intel oneAPI DevCloud
 To get started using samples in the DevCloud, refer to [Using AI samples in Intel oneAPI DevCloud](https://github.com/intel-ai-tce/oneAPI-samples/tree/devcloud/AI-and-Analytics#using-samples-in-intel-oneapi-devcloud).

AI-and-Analytics/Features-and-Functionality/IntelPyTorch_Extensions_AutoMixedPrecision/README.md

Lines changed: 8 additions & 8 deletions

@@ -1,27 +1,27 @@
-# `Intel Extension for PyTorch Getting Started` Sample
+# `Intel® Extension for PyTorch* Getting Started` Sample

-Intel Extension for PyTorch is a Python package to extend the official PyTorch. It is designed to make the Out-of-Box user experience of PyTorch CPU better while achieving good performance. The extension also will be the PR(Pull-Request) buffer for the Intel PyTorch framework dev team. The PR buffer will contain functions and optimization (for example, take advantage of Intel's new hardware features).
+Intel Extension for PyTorch* is a Python package to extend the official PyTorch. It is designed to make the Out-of-Box user experience of PyTorch CPU better while achieving good performance. The extension also will be the PR(Pull-Request) buffer for the Intel PyTorch framework dev team. The PR buffer will contain functions and optimization (for example, take advantage of Intel's new hardware features).

-For comprehensive instructions goto the github repo for [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).
+For comprehensive instructions goto the github repo for [Intel Extension for PyTorch*](https://github.com/intel/intel-extension-for-pytorch).

 | Optimized for | Description
 |:--- |:---
 | OS | Linux* Ubuntu* 18.04
 | Hardware | Skylake with GEN9 or newer
-| Software | Intel Extension for PyTorch;
-| What you will learn | How to get started with Intel Extension for PyTorch
+| Software | Intel Extension for PyTorch*;
+| What you will learn | How to get started with Intel Extension for PyTorch*
 | Time to complete | 60 minutes


 ## Purpose

-You will learn how to download, compile, and get started with Intel Extension for PyTorch from this sample code.
+You will learn how to download, compile, and get started with Intel Extension for PyTorch* from this sample code.

 The code will be running on the CPU.

 ## Key Implementation Details

-The code includes Intel Extension for PyTorch and Auto-mixed-precision.
+The code includes Intel Extension for PyTorch* and Auto-mixed-precision.

 ## License

@@ -30,7 +30,7 @@ Code samples are licensed under the MIT license. See

 Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)

-## Building the `Intel Extension for PyTorch Getting Started` Sample
+## Building the `Intel Extension for PyTorch* Getting Started` Sample

 ### On a Linux* System

AI-and-Analytics/Features-and-Functionality/IntelPyTorch_TorchCCL_Multinode_Training/README.md

Lines changed: 3 additions & 3 deletions

@@ -1,8 +1,8 @@
-# `Intel Extension for PyTorch Getting Started` Sample
+# `Intel® Extension for PyTorch* Getting Started` Sample

 torch-ccl holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library (oneCCL).

-Intel® oneCCL (collective communications library) is a library for efficient distributed deep learning training that implements such collectives like allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the oneCCL documentation.
+Intel® oneAPI Collective Communications Library (Intel® oneCCL) is a library for efficient distributed deep learning training that implements such collectives like allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the oneCCL documentation.


 For comprehensive instructions regarding distributed training with oneCCL in PyTorch, go to the following github repos:

@@ -13,7 +13,7 @@ For comprehensive instructions regarding distributed training with oneCCL in PyT
 |:--- |:---
 | OS | Linux* Ubuntu* 18.04
 | Hardware | Skylake with GEN9 or newer
-| Software | Intel Extension for PyTorch;
+| Software | Intel Extension for PyTorch*;
 | What you will learn | How to perform distributed training with oneCCL in PyTorch
 | Time to complete | 60 minutes

AI-and-Analytics/Features-and-Functionality/IntelPython_XGBoost_Performance/README.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ XGBoost is a widely used gradient boosting library in the classical ML area. Des
 In this sample, you will an XGBoost model and prediction using Intel optimizations upstreamed by Intel to the latest XGBoost package and the un-optimized XGBoost 0.81 for comparison.

 ## Key Implementation Details
-This XGBoost sample code is implemented for the CPU using the Python language. The example assumes you XGBoost installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit). It also assumes you have set up an additional XGBoost 0.81 conda environment, with details on how to do so explained within the sample and this README.
+This XGBoost sample code is implemented for the CPU using the Python language. The example assumes you XGBoost installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python* as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit). It also assumes you have set up an additional XGBoost 0.81 conda environment, with details on how to do so explained within the sample and this README.

 ## License
 Code samples are licensed under the MIT license. See

AI-and-Analytics/Features-and-Functionality/IntelPython_XGBoost_daal4pyPrediction/README.md

Lines changed: 1 addition & 1 deletion

@@ -18,7 +18,7 @@ This sample code illustrates how to analyze the performance benefit of minimal c
 In this sample, you will run an XGBoost model with daal4py prediction and XGBoost API prediction to see the performance benefit of daal4py gradient boosting prediction. You will also learn how to port a pre-trained XGBoost model to daal4py prediction.

 ## Key Implementation Details
-This sample code is implemented for CPU using the Python language. The example assumes you have XGboost and daal4py installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit).
+This sample code is implemented for CPU using the Python language. The example assumes you have XGboost and daal4py installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python* as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit).

 ## License
 Code samples are licensed under the MIT license. See

AI-and-Analytics/Features-and-Functionality/IntelPython_daal4py_DistributedKMeans/README.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ daal4py is a simplified API to Intel® oneDAL that allows for fast usage of the
 In this sample, you will run a distributed K-Means model with oneDAL daal4py library memory objects. You will also learn how to train a model and save the information to a file.

 ## Key Implementation Details
-This distributed K-means sample code is implemented for CPU using the Python language. The example assumes you have daal4py and scikit-learn installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit).
+This distributed K-means sample code is implemented for CPU using the Python language. The example assumes you have daal4py and scikit-learn installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python* as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit).

 ## License
 Code samples are licensed under the MIT license. See
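The distributed K-means sample described above builds on daal4py's two-step K-means API. A single-process sketch of that flow (an assumption on my part, not the sample's code: the distributed run would additionally call `daal4py.daalinit()`, pass `distributed=True` to the algorithms, and launch under `mpirun`):

```python
import numpy as np
import daal4py as d4p

X = np.random.default_rng(0).random((1000, 10))

# Step 1: choose starting centroids (K-means++ style initialization).
init = d4p.kmeans_init(nClusters=4, method="plusPlusDense").compute(X)

# Step 2: run the clustering itself from those starting centroids.
result = d4p.kmeans(nClusters=4, maxIterations=50).compute(X, init.centroids)
centroids = result.centroids  # one row per cluster: shape (4, 10)
```

The result object is one of the oneDAL memory objects the README mentions; its `centroids` array is what you would serialize to a file to reuse the trained model.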

AI-and-Analytics/Features-and-Functionality/IntelPython_daal4py_DistributedLinearRegression/README.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ daal4py is a simplified API to Intel® oneDAL that allows for fast usage of the
 In this sample, you will run a distributed Linear Regression model with oneDAL daal4py library memory objects. You will also learn how to train a model and save the information to a file.

 ## Key Implementation Details
-This distributed linear regression sample code is implemented for the CPU using the Python language. The example assumes you have daal4py and scikit-learn installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit).
+This distributed linear regression sample code is implemented for the CPU using the Python language. The example assumes you have daal4py and scikit-learn installed inside a conda environment, similar to what is delivered with the installation of the Intel® Distribution for Python* as part of the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/en-us/oneapi/ai-kit).

 ## License
 Code samples are licensed under the MIT license. See

AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_Horovod_Multinode_Training/README.md

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi)

 ### Sourcing the oneAPI AI Analytics Toolkit environment variables

-By default, the Intel® AI Analytics toolkit is installed in the `/opt/intel/oneapi` folder. The toolkit may be loaded by sourcing the `setvars.sh` script on a Linux shell. Notice the flag `--ccl-configuration=cpu_icc`. By default, the `ccl-configuration` is set to `cpu_gpu_dpcpp`. However, since we are distributing our TensorFlow workload on multiple CPU nodes, we are configuring the Horovod installation to use CPUs.
+By default, the Intel® AI Analytics Toolkit is installed in the `/opt/intel/oneapi` folder. The toolkit may be loaded by sourcing the `setvars.sh` script on a Linux shell. Notice the flag `--ccl-configuration=cpu_icc`. By default, the `ccl-configuration` is set to `cpu_gpu_dpcpp`. However, since we are distributing our TensorFlow workload on multiple CPU nodes, we are configuring the Horovod installation to use CPUs.

 ```
 source /opt/intel/oneapi/setvars.sh --ccl-configuration=cpu_icc

AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_Horovod_Multinode_Training/TensorFlow_Multinode_Training_with_Horovod.py

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@
 tf.compat.v1.disable_eager_execution()
 '''
 Environment settings:
-Set MKLDNN_VERBOSE=1 to show DNNL run time verbose
+Set MKLDNN_VERBOSE=1 to show Intel Deep Neural Network Library (Intel DNNL) run time verbose
 Set KMP_AFFINITY=verbose to show OpenMP thread information
 '''
 #import os; os.environ["MKLDNN_VERBOSE"] = "1"
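The docstring change above names the two diagnostic environment variables the sample supports. They only take effect if exported before TensorFlow (and with it oneDNN and the OpenMP runtime) initializes; a minimal sketch of the commented-out pattern at the end of the hunk:

```python
import os

# Must be set before TensorFlow is imported; once oneDNN and the OpenMP
# runtime initialize, changing these variables has no effect.
os.environ["MKLDNN_VERBOSE"] = "1"      # print oneDNN primitive execution details
os.environ["KMP_AFFINITY"] = "verbose"  # report OpenMP thread placement

# import tensorflow as tf  # import the framework only after the settings above
```

Alternatively, export both variables in the shell before launching the script, which avoids any import-order concerns.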
