
Commit 5163413

Reference to See Also section for example of usage in all estimators (#3577)
1 parent e3c2043 commit 5163413

66 files changed: +133 -31 lines changed


docs/api-reference/algo-details-fastforest.md

Lines changed: 3 additions & 1 deletion
@@ -28,4 +28,6 @@ For more see:
 * [Quantile regression
 forest](http://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf)
 * [From Stumps to Trees to
-Forests](https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-stumps-to-trees-to-forests/)
+Forests](https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-stumps-to-trees-to-forests/)
+
+Check the See Also section for links to examples of the usage.

docs/api-reference/algo-details-fasttree.md

Lines changed: 3 additions & 1 deletion
@@ -35,4 +35,6 @@ For more information see:
 * [Wikipedia: Gradient boosting (Gradient tree
 boosting).](https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting)
 * [Greedy function approximation: A gradient boosting
-machine.](https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451)
+machine.](https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451)
+
+Check the See Also section for links to examples of the usage.

docs/api-reference/algo-details-gam.md

Lines changed: 3 additions & 1 deletion
@@ -15,4 +15,6 @@ the average prediction over the training set, and the shape functions are
 normalized to represent the deviation from the average prediction. This results
 in models that are easily interpreted simply by inspecting the intercept and the
 shape functions. See the sample below for an example of how to train a GAM model
-and inspect and interpret the results.
+and inspect and interpret the results.
+
+Check the See Also section for links to examples of the usage.
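
For context, a minimal sketch of the kind of GAM sample those links point to: train a GAM regressor and read back the intercept, assuming the standard MLContext API plus the Microsoft.ML.FastTree package. The data class, column names, and the Bias/shape-function members mentioned in the comments are illustrative and may differ slightly by ML.NET version.

```csharp
using System;
using Microsoft.ML;

// Illustrative input type for this sketch; not part of the commit.
public class HouseData
{
    public float Size { get; set; }
    public float Age { get; set; }
    public float Price { get; set; }
}

public static class GamSketch
{
    public static void Run()
    {
        var mlContext = new MLContext(seed: 0);
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new HouseData { Size = 1.1f, Age = 10f, Price = 1.2f },
            new HouseData { Size = 1.9f, Age = 2f,  Price = 2.3f },
            new HouseData { Size = 2.8f, Age = 5f,  Price = 3.0f },
            new HouseData { Size = 3.4f, Age = 1f,  Price = 3.9f },
        });

        var pipeline = mlContext.Transforms
            .Concatenate("Features", nameof(HouseData.Size), nameof(HouseData.Age))
            .Append(mlContext.Regression.Trainers.Gam(
                labelColumnName: nameof(HouseData.Price), featureColumnName: "Features"));

        var model = pipeline.Fit(data);

        // The intercept is the average prediction over the training set; the
        // per-feature shape functions (bin bounds and bin effects) can be read
        // from the same model-parameters object. See the GAM samples linked
        // from See Also for the full inspection code.
        var gam = model.LastTransformer.Model;
        Console.WriteLine($"Intercept: {gam.Bias}");
    }
}
```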

docs/api-reference/algo-details-lightgbm.md

Lines changed: 2 additions & 0 deletions
@@ -3,3 +3,5 @@ LightGBM is an open source implementation of gradient boosting decision tree.
 For implementation details, please see [LightGBM's official
 documentation](https://lightgbm.readthedocs.io/en/latest/index.html) or this
 [paper](https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree.pdf).
+
+Check the See Also section for links to examples of the usage.
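
For reference, a minimal sketch of invoking the LightGBM binary classifier from ML.NET, assuming the Microsoft.ML.LightGbm package; the input type, column names, and tiny in-memory dataset are illustrative only.

```csharp
using Microsoft.ML;

// Illustrative input type for this sketch; not part of the commit.
public class LabeledRow
{
    public bool Label { get; set; }
    public float Feature1 { get; set; }
    public float Feature2 { get; set; }
}

public static class LightGbmSketch
{
    public static void Run()
    {
        var mlContext = new MLContext(seed: 0);
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new LabeledRow { Label = true,  Feature1 = 0.9f, Feature2 = 0.1f },
            new LabeledRow { Label = true,  Feature1 = 0.8f, Feature2 = 0.2f },
            new LabeledRow { Label = false, Feature1 = 0.1f, Feature2 = 0.8f },
            new LabeledRow { Label = false, Feature1 = 0.2f, Feature2 = 0.9f },
        });

        // Gradient boosting decision trees via the LightGBM implementation.
        var pipeline = mlContext.Transforms
            .Concatenate("Features", "Feature1", "Feature2")
            .Append(mlContext.BinaryClassification.Trainers.LightGbm(
                labelColumnName: "Label",
                featureColumnName: "Features",
                minimumExampleCountPerLeaf: 1));

        var model = pipeline.Fit(data);
        var predictions = model.Transform(data);
    }
}
```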

docs/api-reference/algo-details-sdca.md

Lines changed: 3 additions & 1 deletion
@@ -57,4 +57,6 @@ For more information, see:
 * [Scaling Up Stochastic Dual Coordinate
 Ascent.](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/main-3.pdf)
 * [Stochastic Dual Coordinate Ascent Methods for Regularized Loss
-Minimization.](http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf)
+Minimization.](http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf)
+
+Check the See Also section for links to examples of the usage.

docs/api-reference/algo-details-sgd.md

Lines changed: 3 additions & 1 deletion
@@ -6,4 +6,6 @@ Hogwild Stochastic Gradient Descent for binary classification that supports
 multi-threading without any locking. If the associated optimization problem is
 sparse, Hogwild Stochastic Gradient Descent achieves a nearly optimal rate of
 convergence. For more details about Hogwild Stochastic Gradient Descent can be
-found [here](http://arxiv.org/pdf/1106.5730v2.pdf).
+found [here](http://arxiv.org/pdf/1106.5730v2.pdf).
+
+Check the See Also section for links to examples of the usage.
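
To ground the description above, a minimal sketch of the parallel (Hogwild) SGD binary classifier, assuming the trainer is exposed as SgdCalibrated on the binary-classification catalog as in ML.NET 1.x; the input type and column names are illustrative.

```csharp
using Microsoft.ML;

// Illustrative input type for this sketch; not part of the commit.
public class BinaryRow
{
    public bool Label { get; set; }
    public float Feature1 { get; set; }
    public float Feature2 { get; set; }
}

public static class HogwildSgdSketch
{
    public static void Run()
    {
        var mlContext = new MLContext(seed: 0);
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new BinaryRow { Label = true,  Feature1 = 1.0f, Feature2 = 0.2f },
            new BinaryRow { Label = true,  Feature1 = 0.9f, Feature2 = 0.1f },
            new BinaryRow { Label = false, Feature1 = 0.1f, Feature2 = 0.9f },
            new BinaryRow { Label = false, Feature1 = 0.2f, Feature2 = 0.8f },
        });

        // Hogwild SGD: lock-free, multi-threaded stochastic gradient descent
        // with a calibrated (probabilistic) output.
        var pipeline = mlContext.Transforms
            .Concatenate("Features", "Feature1", "Feature2")
            .Append(mlContext.BinaryClassification.Trainers.SgdCalibrated(
                labelColumnName: "Label", featureColumnName: "Features"));

        var model = pipeline.Fit(data);
    }
}
```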

src/Microsoft.ML.Data/Transforms/ColumnConcatenatingEstimator.cs

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ namespace Microsoft.ML.Transforms
 /// If the input columns' data type is a vector the output column data type remains the same. However, the size of
 /// the vector will be the sum of the sizes of the input vectors.
 ///
-/// Check the See Also section for links to examples of the usage.
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="TransformExtensionsCatalog.Concatenate(TransformsCatalog, string, string[])"/>

src/Microsoft.ML.Data/Transforms/ColumnCopying.cs

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ namespace Microsoft.ML.Transforms
 ///
 /// The resulting [ColumnCopyingTransformer](xref:Microsoft.ML.Transforms.ColumnCopyingTransformer) creates a new column, named as specified in the output column name parameters, and
 /// copies the data from the input column to this new column.
-/// Check the See Also section for links to examples of the usage.
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>
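
Alongside the remark fixed above, a minimal sketch of the column-copying estimator it documents, assuming the standard MLContext API; the input type and column names are illustrative.

```csharp
using Microsoft.ML;

// Illustrative input type for this sketch; not part of the commit.
public class ScoreRow
{
    public float RawScore { get; set; }
}

public static class CopyColumnsSketch
{
    public static void Run()
    {
        var mlContext = new MLContext();
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new ScoreRow { RawScore = 0.25f },
            new ScoreRow { RawScore = 0.75f },
        });

        // Creates a new "Score" column and copies the data from "RawScore" into it;
        // the original column is left untouched.
        var copy = mlContext.Transforms.CopyColumns(
            outputColumnName: "Score", inputColumnName: "RawScore");

        var transformed = copy.Fit(data).Transform(data);
        // transformed now carries both "RawScore" and "Score".
    }
}
```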

src/Microsoft.ML.Data/Transforms/ColumnSelecting.cs

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ namespace Microsoft.ML.Transforms
 /// In the case of serialization, every column in the schema will be written out. If there are columns
 /// that should not be saved, this estimator can be used to remove them.
 ///
-/// Check the See Also section for links to examples of the usage.
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="TransformExtensionsCatalog.DropColumns(TransformsCatalog, string[])"/>

src/Microsoft.ML.Data/Transforms/FeatureContributionCalculationTransformer.cs

Lines changed: 2 additions & 0 deletions
@@ -284,6 +284,8 @@ private Delegate GetValueGetter<TSrc>(DataViewRow input, int colSrc)
 /// while keeping the other features constant. The contribution of feature F1 for the given example is the difference between the original score
 /// and the score obtained by taking the opposite decision at the node corresponding to feature F1. This algorithm extends naturally to models with
 /// many decision trees.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="ExplainabilityCatalog.CalculateFeatureContribution(TransformsCatalog, ISingleFeaturePredictionTransformer{ICalculateFeatureContribution}, int, int, bool)"/>
