docs/ml-guide.md (18 additions, 13 deletions)
@@ -3,17 +3,22 @@ layout: global
title: Spark ML Programming Guide
---

-`spark.ml` is a new package introduced in Spark 1.2, which aims to provide a uniform set of
+Spark 1.2 introduced a new package called `spark.ml`, which aims to provide a uniform set of
high-level APIs that help users create and tune practical machine learning pipelines.
-It is currently an alpha component, and we would like to hear back from the community about
-how it fits real-world use cases and how it could be improved.
+
+*Graduated from Alpha!* The Pipelines API is no longer an alpha component, although many elements of it are still `Experimental` or `DeveloperApi`.

Note that we will keep supporting and adding features to `spark.mllib` along with the
development of `spark.ml`.
Users should be comfortable using `spark.mllib` features and expect more features coming.
Developers should contribute new algorithms to `spark.mllib` and can optionally contribute
to `spark.ml`.

+Guides for sub-packages of `spark.ml` include:
+
+* [Feature Extraction, Transformation, and Selection](ml-features.html): Details on transformers supported in the Pipelines API, including a few not in the lower-level `spark.mllib` API
+* [Ensembles](ml-ensembles.html): Details on ensemble learning methods in the Pipelines API
+
**Table of Contents**

* This will become a table of contents (this text will be scraped).
@@ -148,16 +153,6 @@ Parameters belong to specific instances of `Estimator`s and `Transformer`s.
For example, if we have two `LogisticRegression` instances `lr1` and `lr2`, then we can build a `ParamMap` with both `maxIter` parameters specified: `ParamMap(lr1.maxIter -> 10, lr2.maxIter -> 20)`.
This is useful if there are two algorithms with the `maxIter` parameter in a `Pipeline`.
-# Algorithm Guides
-
-There are now several algorithms in the Pipelines API which are not in the lower-level MLlib API, so we link to documentation for them here. These algorithms are mostly feature transformers, which fit naturally into the `Transformer` abstraction in Pipelines, and ensembles, which fit naturally into the `Estimator` abstraction in Pipelines.
-
-**Pipelines API Algorithm Guides**
-
-* [Feature Extraction, Transformation, and Selection](ml-features.html)
-* [Ensembles](ml-ensembles.html)

# Code Examples

This section gives code examples illustrating the functionality discussed above.
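For instance, a minimal sketch of the `ParamMap` usage described above (assuming the Spark 1.4-era `spark.ml` Scala API; the `maxIter` values are the ones from the example):

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.param.ParamMap

// Two independent LogisticRegression instances, each with its own maxIter Param.
val lr1 = new LogisticRegression()
val lr2 = new LogisticRegression()

// A single ParamMap can carry instance-specific settings for both.
val paramMap = ParamMap(lr1.maxIter -> 10, lr2.maxIter -> 20)
```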
@@ -783,6 +778,16 @@ Spark ML also depends upon Spark SQL, but the relevant parts of Spark SQL do not
# Migration Guide

+## From 1.3 to 1.4
+
+Several major API changes occurred, including:
+
+* `Param` and other APIs for specifying parameters
+* `uid` unique IDs for Pipeline components
+* Reorganization of certain classes
+
+Since the `spark.ml` API was an Alpha Component in Spark 1.3, we do not list all changes here.
+However, now that `spark.ml` is no longer an Alpha Component, we will provide details on any API changes for future releases.
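As a hedged sketch of what these changes look like in practice (assuming the Spark 1.4 `spark.ml` Scala API; the parameter values are hypothetical):

```scala
import org.apache.spark.ml.classification.LogisticRegression

val lr = new LogisticRegression()

// Every Pipeline component now carries a unique ID, e.g. "logreg_<suffix>".
println(lr.uid)

// Parameters are specified through the Param APIs, e.g. typed setters.
lr.setMaxIter(10).setRegParam(0.01)
```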
## From 1.2 to 1.3

The main API changes are from Spark SQL. We list the most important changes here:
docs/mllib-guide.md

MLlib is Spark's scalable machine learning library consisting of common learning algorithms and utilities,
including classification, regression, clustering, collaborative
-filtering, dimensionality reduction, as well as underlying optimization primitives, as outlined below:
+filtering, dimensionality reduction, as well as underlying optimization primitives.
+Guides for individual algorithms are listed below.
+
+The API is divided into 2 parts:
+
+* [The original `spark.mllib` API](mllib-guide.html#mllib-types-algorithms-and-utilities) is the primary API.
+* [The "Pipelines" `spark.ml` API](mllib-guide.html#sparkml-high-level-apis-for-ml-pipelines) is a higher-level API for constructing ML workflows.
+
+We list major functionality from both below, with links to detailed guides.
+
+# MLlib types, algorithms and utilities
+
+This lists functionality included in `spark.mllib`, the main MLlib API.

* [Data types](mllib-data-types.html)
* [Basic statistics](mllib-statistics.html)
@@ -49,16 +60,19 @@ and the migration guide below will explain all changes between releases.
Spark 1.2 introduced a new package called `spark.ml`, which aims to provide a uniform set of
high-level APIs that help users create and tune practical machine learning pipelines.
-It is currently an alpha component, and we would like to hear back from the community about
-how it fits real-world use cases and how it could be improved.
+
+*Graduated from Alpha!* The Pipelines API is no longer an alpha component, although many elements of it are still `Experimental` or `DeveloperApi`.

Note that we will keep supporting and adding features to `spark.mllib` along with the
development of `spark.ml`.
Users should be comfortable using `spark.mllib` features and expect more features coming.
Developers should contribute new algorithms to `spark.mllib` and can optionally contribute
to `spark.ml`.

-See the **[spark.ml programming guide](ml-guide.html)** for more information on this package.
+More detailed guides for `spark.ml` include:
+
+* **[spark.ml programming guide](ml-guide.html)**: overview of the Pipelines API and major concepts
+* [Feature transformers](ml-features.html): Details on transformers supported in the Pipelines API, including a few not in the lower-level `spark.mllib` API
+* [Ensembles](ml-ensembles.html): Details on ensemble learning methods in the Pipelines API

# Dependencies
@@ -90,21 +104,14 @@ version 1.4 or newer.
For the `spark.ml` package, please see the [spark.ml Migration Guide](ml-guide.html#migration-guide).

-## From 1.2 to 1.3
-
-In the `spark.mllib` package, there were several breaking changes. The first change (in `ALS`) is the only one in a component not marked as Alpha or Experimental.
-
-* *(Breaking change)* In [`ALS`](api/scala/index.html#org.apache.spark.mllib.recommendation.ALS), the extraneous method `solveLeastSquares` has been removed. The `DeveloperApi` method `analyzeBlocks` was also removed.
-* *(Breaking change)* [`StandardScalerModel`](api/scala/index.html#org.apache.spark.mllib.feature.StandardScalerModel) remains an Alpha component. In it, the `variance` method has been replaced with the `std` method. To compute the column variance values returned by the original `variance` method, simply square the standard deviation values returned by `std`.
-* *(Breaking change)* [`StreamingLinearRegressionWithSGD`](api/scala/index.html#org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD) remains an Experimental component. In it, there were two changes:
-  * The constructor taking arguments was removed in favor of a builder pattern using the default constructor plus parameter setter methods.
-  * Variable `model` is no longer public.
-* *(Breaking change)* [`DecisionTree`](api/scala/index.html#org.apache.spark.mllib.tree.DecisionTree) remains an Experimental component. In it and its associated classes, there were several changes:
-  * In `DecisionTree`, the deprecated class method `train` has been removed. (The object/static `train` methods remain.)
-  * In `Strategy`, the `checkpointDir` parameter has been removed. Checkpointing is still supported, but the checkpoint directory must be set before calling tree and tree ensemble training.
-* `PythonMLlibAPI` (the interface between Scala/Java and Python for MLlib) was a public API but is now private, declared `private[python]`. This was never meant for external use.
-* In linear regression (including Lasso and ridge regression), the squared loss is now divided by 2. So in order to produce the same result as in 1.2, the regularization parameter needs to be divided by 2 and the step size needs to be multiplied by 2.
+## From 1.3 to 1.4
+
+In the `spark.mllib` package, there were several breaking changes, but all in `DeveloperApi` or `Experimental` APIs:
+
+* Gradient-Boosted Trees
+  * *(Breaking change)* The signature of the [`Loss.gradient`](api/scala/index.html#org.apache.spark.mllib.tree.loss.Loss.gradient) method was changed. This is only an issue for users who wrote their own losses for GBTs.
+  * *(Breaking change)* The `apply` and `copy` methods for the case class [`BoostingStrategy`](api/scala/index.html#org.apache.spark.mllib.tree.configuration.BoostingStrategy) have been changed because of a modification to the case class fields. This could be an issue for users who use `BoostingStrategy` to set GBT parameters.
+* *(Breaking change)* The return value of [`LDA.run`](api/scala/index.html#org.apache.spark.mllib.clustering.LDA.run) has changed. It now returns an abstract class `LDAModel` instead of the concrete class `DistributedLDAModel`. The object of type `LDAModel` can still be cast to the appropriate concrete type, which depends on the optimization algorithm.
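For the `BoostingStrategy` change, a hedged sketch (assuming the Spark 1.4 `spark.mllib` tree API; the parameter values are hypothetical) that sidesteps the changed `apply`/`copy` signatures by starting from the defaults:

```scala
import org.apache.spark.mllib.tree.configuration.BoostingStrategy

// Build GBT parameters from the defaults instead of the case-class
// constructor, whose apply/copy signatures changed in 1.4.
val boostingStrategy = BoostingStrategy.defaultParams("Classification")
boostingStrategy.numIterations = 10
boostingStrategy.treeStrategy.maxDepth = 4
```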
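For the `LDA.run` change, a minimal sketch, assuming a pre-built `corpus: RDD[(Long, Vector)]` of document term counts (hypothetical here) and the default EM optimizer:

```scala
import org.apache.spark.mllib.clustering.{DistributedLDAModel, LDA, LDAModel}

// run now returns the abstract LDAModel rather than DistributedLDAModel.
val ldaModel: LDAModel = new LDA().setK(10).run(corpus)

// With the default EM optimizer the concrete type is still
// DistributedLDAModel, so the cast recovers the full API.
val distributedModel = ldaModel.asInstanceOf[DistributedLDAModel]
```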
docs/mllib-migration-guides.md (16 additions, 0 deletions)
@@ -7,6 +7,22 @@ description: MLlib migration guides from before Spark SPARK_VERSION_SHORT
The migration guide for the current Spark version is kept on the [MLlib Programming Guide main page](mllib-guide.html#migration-guide).

+## From 1.2 to 1.3
+
+In the `spark.mllib` package, there were several breaking changes. The first change (in `ALS`) is the only one in a component not marked as Alpha or Experimental.
+
+* *(Breaking change)* In [`ALS`](api/scala/index.html#org.apache.spark.mllib.recommendation.ALS), the extraneous method `solveLeastSquares` has been removed. The `DeveloperApi` method `analyzeBlocks` was also removed.
+* *(Breaking change)* [`StandardScalerModel`](api/scala/index.html#org.apache.spark.mllib.feature.StandardScalerModel) remains an Alpha component. In it, the `variance` method has been replaced with the `std` method. To compute the column variance values returned by the original `variance` method, simply square the standard deviation values returned by `std` (see the first sketch after this list).
+* *(Breaking change)* [`StreamingLinearRegressionWithSGD`](api/scala/index.html#org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD) remains an Experimental component. In it, there were two changes (see the second sketch after this list):
+  * The constructor taking arguments was removed in favor of a builder pattern using the default constructor plus parameter setter methods.
+  * Variable `model` is no longer public.
+* *(Breaking change)* [`DecisionTree`](api/scala/index.html#org.apache.spark.mllib.tree.DecisionTree) remains an Experimental component. In it and its associated classes, there were several changes:
+  * In `DecisionTree`, the deprecated class method `train` has been removed. (The object/static `train` methods remain.)
+  * In `Strategy`, the `checkpointDir` parameter has been removed. Checkpointing is still supported, but the checkpoint directory must be set before calling tree and tree ensemble training.
+* `PythonMLlibAPI` (the interface between Scala/Java and Python for MLlib) was a public API but is now private, declared `private[python]`. This was never meant for external use.
+* In linear regression (including Lasso and ridge regression), the squared loss is now divided by 2. So in order to produce the same result as in 1.2, the regularization parameter needs to be divided by 2 and the step size needs to be multiplied by 2 (see the third sketch after this list).
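The first sketch covers the `variance` to `std` change (assuming an active `SparkContext` `sc`; the input data is hypothetical):

```scala
import org.apache.spark.mllib.feature.StandardScaler
import org.apache.spark.mllib.linalg.Vectors

// Hypothetical data, just to obtain a fitted StandardScalerModel.
val data = sc.parallelize(Seq(Vectors.dense(1.0, 2.0), Vectors.dense(3.0, 6.0)))
val scalerModel = new StandardScaler(withMean = true, withStd = true).fit(data)

// 1.2 offered scalerModel.variance; in 1.3, square the std values instead.
val variances: Array[Double] = scalerModel.std.toArray.map(s => s * s)
```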
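The second sketch shows the builder-style configuration of `StreamingLinearRegressionWithSGD` (parameter values hypothetical):

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD

// The argument-taking constructor is gone; use the default constructor
// plus parameter setter methods instead.
val model = new StreamingLinearRegressionWithSGD()
  .setStepSize(0.1)
  .setNumIterations(50)
  .setInitialWeights(Vectors.zeros(3))
```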
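The third sketch covers the halved squared loss in linear regression. Assuming hypothetical 1.2 settings of `stepSize = 0.5` and `regParam = 0.2`, and a prepared `trainingData: RDD[LabeledPoint]`, an equivalent 1.3 call doubles the step size and halves the regularization parameter:

```scala
import org.apache.spark.mllib.regression.RidgeRegressionWithSGD

// Equivalent of the hypothetical 1.2 run (stepSize = 0.5, regParam = 0.2):
// under the 1.3 loss definition, double stepSize and halve regParam.
val model = RidgeRegressionWithSGD.train(
  trainingData, numIterations = 100,
  stepSize = 1.0, regParam = 0.1, miniBatchFraction = 1.0)
```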