8 changes: 8 additions & 0 deletions Libraries/oneDAL/License.txt
@@ -0,0 +1,8 @@
Copyright Intel Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
8 changes: 8 additions & 0 deletions Libraries/oneDAL/daal4py_Distributed_Kmeans/License.txt
@@ -0,0 +1,8 @@
Copyright Intel Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
112 changes: 112 additions & 0 deletions Libraries/oneDAL/daal4py_Distributed_Kmeans/README.md
@@ -0,0 +1,112 @@
# daal4py Distributed K-Means
This sample code shows how to train and predict with a distributed K-Means model using daal4py, the Python API package for the oneAPI Data Analytics Library. It assumes you have a working MPI library installed, and it demonstrates how to use software products that can be found in the [Intel oneAPI Data Analytics Library](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onedal.html) or the [Intel AI Analytics Toolkit powered by oneAPI](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

| Optimized for | Description
| :--- | :---
| OS | 64-bit Linux: Ubuntu 18.04 or higher, 64-bit Windows 10, macOS 10.14 or higher
| Hardware | Intel Atom® Processors; Intel® Core™ Processor Family; Intel® Xeon® Processor Family; Intel® Xeon® Scalable Performance Processor Family
| Software | oneDAL Software Library, Python version 2.7 or >= 3.6, conda-build version >= 3, C++ compiler with C++11 support, Pickle, Pandas, NumPy
| What you will learn | The distributed oneDAL K-Means programming model for Intel CPUs
| Time to complete | 5 minutes

## Purpose

daal4py is a simplified API to Intel® DAAL that allows for fast usage of the framework and is suited to data scientists and machine learning users. It was built to provide an abstraction to Intel® DAAL for either direct usage or integration into one's own framework.

In this sample, you will run a distributed K-Means model using the oneDAL daal4py library. You will also learn how to train a model and save its centroid information to a file.

## Key Implementation Details
This distributed K-means sample code is implemented for CPU using the Python language. The example assumes you have daal4py and scikit-learn installed inside a conda environment, similar to what is delivered with the installation of the Intel(R) Distribution for Python as part of the [oneAPI AI Analytics Toolkit powered by oneAPI](https://software.intel.com/en-us/oneapi/ai-kit).
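
As a quick sanity check (an optional snippet, not part of the sample itself), you can confirm from Python that the conda environment provides the packages this sample relies on:

```
# optional environment check: confirm the packages used by this sample are importable
import daal4py
import sklearn
import pandas
import numpy

print("daal4py, scikit-learn, pandas, and NumPy are all available")
```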

## Additional Requirements
You will need a working MPI library. We recommend using Intel(R) MPI, which is included in the [oneAPI HPC Toolkit](https://software.intel.com/en-us/oneapi/hpc-kit).

## License
This code sample is licensed under the MIT license.

## Building daal4py for CPU

The oneAPI Data Analytics Library is ready for use once you finish installing the Intel AI Analytics Toolkit and run the post-installation script.

You can refer to the oneAPI [main page](https://software.intel.com/en-us/oneapi) for toolkit installation and the Toolkit [Getting Started Guide for Linux](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit) for post-installation steps and scripts.

### Activate conda environment With Root Access

Please follow the Getting Started Guide steps (above) to set up your oneAPI environment with the `setvars.sh` script. Then, in a Linux shell, navigate to your oneAPI installation path, typically `~/intel/inteloneapi`. The Intel Python environment is active by default. However, if you activated another environment, you can return to the base environment with the following command:

#### On a Linux* System
```
source activate base
```

### Activate conda environment Without Root Access (Optional)

By default, the Intel AI Analytics Toolkit is installed in the `inteloneapi` folder, which requires root privileges to manage. If you would like to bypass using root access to manage your conda environment, you can clone your desired conda environment using the following command:

#### On a Linux* System
```
conda create --name user_base --clone base
```

Then activate your conda environment with the following command:

```
source activate user_base
```

### Install Jupyter Notebook
```
conda install jupyter nb_conda_kernels
```


#### View in Jupyter Notebook

_Note: This distributed execution cannot be launched from the Jupyter Notebook version, but you can still open the notebook to follow the included write-up and description._

Launch Jupyter Notebook in the directory housing the code example:

```
jupyter notebook
```

### Running the Sample as a Python File

When using daal4py for distributed memory systems, the program should be executed from a bash shell. To execute this example, run the following command, where the number **4** is chosen as an example and means that the program will run on **4 processes**:

Run the Program

`mpirun -n 4 python ./daal4py_Distributed_Kmeans.py`

The output of the script will be saved in the included models and results directories.

_Note: This code sample focuses on how to use daal4py to do distributed ML computations on chunks of data. The `mpirun` command above will only run on a single local node. In order to launch on a cluster, you will need to create a host file on the master node, among other steps. The **TensorFlow_Multinode_Training_with_Horovod** code sample explains this process well._
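
For reference, here is a condensed sketch of the SPMD pattern the sample follows (distilled from the notebook in this directory; the closing `d4p.daalfini()` teardown call is standard daal4py SPMD practice):

```
# condensed SPMD sketch of this sample; launch with: mpirun -n 4 python <script>.py
import daal4py as d4p
import pandas as pd

d4p.daalinit()  # initialize the distribution engine (one per MPI process)

# each MPI process reads its own chunk of the data, keyed by its process ID
infile = "./data/distributed_data/daal4py_Distributed_Kmeans_" + str(d4p.my_procid() + 1) + ".csv"
X = pd.read_csv(infile)

# compute initial centroids, then cluster the data starting from them
init_result = d4p.kmeans_init(nClusters=3, method="plusPlusDense").compute(X)
kmeans_result = d4p.kmeans(nClusters=3, maxIterations=5, assignFlag=True).compute(X, init_result.centroids)
print(kmeans_result.centroids)

d4p.daalfini()  # shut down the distribution engine
```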

##### Expected Printed Output (with similar numbers, printed 4 times):
```


Here are our centroids:


[[ 5.46000000e+02 -3.26170648e+00 -6.15922494e+00]
[ 1.80000000e+01 -1.00432059e+01 -8.38198798e+00]
[ 4.10000000e+02 3.78330964e-01 8.29073839e+00]]

Here are our centroids loaded from file:

[[ 5.46000000e+02 -3.26170648e+00 -6.15922494e+00]
[ 1.80000000e+01 -1.00432059e+01 -8.38198798e+00]
[ 4.10000000e+02 3.78330964e-01 8.29073839e+00]]
Here are our cluster assignments for the first 5 datapoints:

[[1]
[1]
[1]
[1]
[1]]
[CODE_SAMPLE_COMPLETED_SUCCESFULLY]

```


@@ -0,0 +1,254 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# =============================================================\n",
"# Copyright © 2020 Intel Corporation\n",
"# \n",
"# SPDX-License-Identifier: MIT\n",
"# ============================================================="
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Daal4py K-Means Clustering Example for Distributed Memory Systems [SPMD mode]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## IMPORTANT NOTICE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When using daal4py for distributed memory systems, the command needed to execute the program should be **executed \n",
"in a bash shell**. In order to run this example, please download it as a .py file then run the following command (**the number 4 means that it will run on 4 processes**):"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"mpirun -n 4 python ./daal4py_Distributed_Kmeans.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Importing and Organizing Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example we will be using K-Means clustering to **initialize centroids** and then **use them to cluster the synthetic dataset.**\n",
"\n",
"Let's start by **importing** all necessary data and packages."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"##### daal4py K-Means Clustering example for Distributed Memory Systems [SPMD Mode] #####\n",
"import daal4py as d4p\n",
"import pickle\n",
"import pandas as pd\n",
"import numpy as np"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's **load** in the dataset and **organize** it as necessary to work with our model. For distributed, every file has a unique ID.\n",
"\n",
"We will also **initialize the distribution engine**."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"d4p.daalinit() #initializes the distribution engine\n",
"\n",
"# organizing variables used in the model for prediction\n",
"# each process gets its own data\n",
"infile = \"./data/distributed_data/daal4py_Distributed_Kmeans_\" + str(d4p.my_procid()+1) + \".csv\"\n",
"\n",
"# read data\n",
"X = pd.read_csv(infile)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Computing and Saving Initial Centroids"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Time to **initialize our centroids!**"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# computing inital centroids\n",
"init_result = d4p.kmeans_init(nClusters = 3, method = \"plusPlusDense\").compute(X)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To **get initial centroid information and save it** to a file:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hers is our centroids:\n",
"\n",
"\n",
" [[ 5.46000000e+02 -4.95417384e-01 8.83354904e+00]\n",
" [ 1.80000000e+01 -4.12886224e+00 -7.35426095e+00]\n",
" [ 4.11000000e+02 -3.27940151e+00 -6.22280477e+00]] \n",
"\n"
]
}
],
"source": [
"# retrieving and printing inital centroids\n",
"centroids = init_result.centroids\n",
"print(\"Here's our centroids:\\n\\n\\n\", centroids, \"\\n\")\n",
"\n",
"centroids_filename = './models/kmeans_clustering_initcentroids_'+ str(d4p.my_procid()+1) + '.csv'\n",
"\n",
"# saving centroids to a file\n",
"pickle.dump(centroids, open(centroids_filename, \"wb\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's **load up the centroids** and look at them."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here is our centroids loaded from file:\n",
"\n",
" [[ 5.46000000e+02 -4.95417384e-01 8.83354904e+00]\n",
" [ 1.80000000e+01 -4.12886224e+00 -7.35426095e+00]\n",
" [ 4.11000000e+02 -3.27940151e+00 -6.22280477e+00]]\n"
]
}
],
"source": [
"# loading the initial centroids from a file\n",
"loaded_centroids = pickle.load(open(centroids_filename, \"rb\"))\n",
"print(\"Here is our centroids loaded from file:\\n\\n\",loaded_centroids)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Assign The Data to Clusters and Save The Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's **assign the data** to clusters."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# compute the clusters/centroids\n",
"kmeans_result = d4p.kmeans(nClusters = 3, maxIterations = 5, assignFlag = True).compute(X, init_result.centroids)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To **get Kmeans result objects** (assignments, centroids, goalFunction [deprecated], nIterations, and objectiveFunction):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# retrieving and printing cluster assignments\n",
"assignments = kmeans_result.assignments\n",
"print(\"Here is our cluster assignments for first 5 datapoints: \\n\\n\", assignments[:5])"
]
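},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, let's **save the assignments and shut down the distribution engine**. (This closing cell is a small addition for completeness: the results file name is illustrative, and `d4p.daalfini()` is daal4py's standard teardown call for SPMD mode.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# saving this process's cluster assignments (illustrative file name)\n",
"results_filename = './results/kmeans_clustering_assignments_' + str(d4p.my_procid()+1) + '.pkl'\n",
"with open(results_filename, \"wb\") as f:\n",
"    pickle.dump(assignments, f)\n",
"\n",
"d4p.daalfini()  # shut down the distribution engine\n",
"print('[CODE_SAMPLE_COMPLETED_SUCCESFULLY]')"
]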
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}