
Commit 9c7c096

najeeb-kazmi authored and ganik committed

More doc fixes (#228)

* More doc fixes
* A few nits

1 parent e348250

File tree: 9 files changed (+60, -66 lines)

src/python/docs/docstrings/EnsembleClassifier.txt
Lines changed: 8 additions & 8 deletions

@@ -30,14 +30,14 @@
     * ``RandomFeatureSelector``: selects a random subset of the features
       for each model.
 
-:param num_models: indicates the number models to train, i.e. the number of
+:param num_models: Indicates the number models to train, i.e. the number of
     subsets of the training set to sample. The default value is 50. If
     batches are used then this indicates the number of models per batch.
 
 :param sub_model_selector_type: Determines the efficient set of models the
-    ``output_combiner`` uses, and removes the least significant models. This is
-    used to improve the accuracy and reduce the model size. This is also called
-    pruning.
+    ``output_combiner`` uses, and removes the least significant models.
+    This is used to improve the accuracy and reduce the model size. This is
+    also called pruning.
 
     * ``ClassifierAllSelector``: does not perform any pruning and selects
       all models in the ensemble to combine to create the output. This is
@@ -51,9 +51,9 @@
     or ``"LogLossReduction"``.
 
 
-:param output_combiner: indicates how to combine the predictions of the different
-    models into a single prediction. There are five available output
-    combiners for clasification:
+:param output_combiner: Indicates how to combine the predictions of the
+    different models into a single prediction. There are five available
+    outputcombiners for clasification:
 
     * ``ClassifierAverage``: computes the average of the scores produced by
       the trained models.
@@ -92,7 +92,7 @@
     and ``0 <= b <= 1`` and ``b - a = 1``. This normalizer preserves
     sparsity by mapping zero to zero.
 
-:param batch_size: train the models iteratively on subsets of the training
+:param batch_size: Train the models iteratively on subsets of the training
     set of this size. When using this option, it is assumed that the
     training set is randomized enough so that every batch is a random
     sample of instances. The default value is -1, indicating using the
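The docstring above describes a scikit-learn-style estimator, so a short usage sketch may help readers of this diff. The snippet below is illustrative only and is not part of the commit: it assumes ``EnsembleClassifier`` accepts the ``num_models`` and ``batch_size`` keyword arguments exactly as documented, and the data is synthetic.

    # Illustrative sketch (not part of this commit). Exercises num_models
    # and batch_size as documented above; the data is synthetic.
    import numpy as np
    import pandas as pd
    from nimbusml.ensemble import EnsembleClassifier

    X = pd.DataFrame(np.random.rand(100, 4).astype(np.float32),
                     columns=['f0', 'f1', 'f2', 'f3'])
    y = pd.Series(np.random.randint(0, 2, 100), name='label')

    # Two batches of 50 rows; num_models sub-models are trained per batch.
    model = EnsembleClassifier(num_models=10, batch_size=50)
    model.fit(X, y)
    predictions = model.predict(X)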

src/python/docs/docstrings/EnsembleRegressor.txt
Lines changed: 8 additions & 8 deletions

@@ -30,14 +30,14 @@
     * ``RandomFeatureSelector``: selects a random subset of the features
       for each model.
 
-:param num_models: indicates the number models to train, i.e. the number of
+:param num_models: Indicates the number models to train, i.e. the number of
     subsets of the training set to sample. The default value is 50. If
     batches are used then this indicates the number of models per batch.
 
 :param sub_model_selector_type: Determines the efficient set of models the
-    ``output_combiner`` uses, and removes the least significant models. This is
-    used to improve the accuracy and reduce the model size. This is also called
-    pruning.
+    ``output_combiner`` uses, and removes the least significant models.
+    This is used to improve the accuracy and reduce the model size. This is
+    also called pruning.
 
     * ``RegressorAllSelector``: does not perform any pruning and selects
       all models in the ensemble to combine to create the output. This is
@@ -51,9 +51,9 @@
     ``"RSquared"``.
 
 
-:param output_combiner: indicates how to combine the predictions of the different
-    models into a single prediction. There are five available output
-    combiners for clasification:
+:param output_combiner: Indicates how to combine the predictions of the
+    different models into a single prediction. There are five available
+    output combiners for clasification:
 
     * ``RegressorAverage``: computes the average of the scores produced by
       the trained models.
@@ -86,7 +86,7 @@
     and ``0 <= b <= 1`` and ``b - a = 1``. This normalizer preserves
     sparsity by mapping zero to zero.
 
-:param batch_size: train the models iteratively on subsets of the training
+:param batch_size: Train the models iteratively on subsets of the training
     set of this size. When using this option, it is assumed that the
     training set is randomized enough so that every batch is a random
    sample of instances. The default value is -1, indicating using the
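The regressor's docstring documents the same surface, so the analogous sketch differs only in the estimator and the continuous label. Again illustrative, not part of the commit; it assumes the documented default of 50 sub-models can be lowered via ``num_models``.

    # Illustrative sketch (not part of this commit): the regressor analogue.
    import numpy as np
    import pandas as pd
    from nimbusml.ensemble import EnsembleRegressor

    X = pd.DataFrame(np.random.rand(100, 3).astype(np.float32),
                     columns=['f0', 'f1', 'f2'])
    y = pd.Series(np.random.rand(100).astype(np.float32), name='target')

    model = EnsembleRegressor(num_models=10)  # train 10 sampled sub-models
    model.fit(X, y)
    scores = model.predict(X)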

src/python/docs/docstrings/LinearSvmBinaryClassifier.txt
Lines changed: 4 additions & 6 deletions

@@ -5,12 +5,10 @@
 .. remarks::
     Linear SVM implements an algorithm that finds a hyperplane in the
     feature space for binary classification, by solving an SVM problem.
-    For instance, with feature values $f_0, f_1,..., f_{D-1}$, the
-    prediction is given by determining what side of the hyperplane the
-    point falls into. That is the same as the sign of the feautures'
-    weighted sum, i.e. $\sum_{i = 0}^{D-1} \left(w_i * f_i \right) + b$,
-    where $w_0, w_1,..., w_{D-1}$ are the weights computed by the
-    algorithm, and *b* is the bias computed by the algorithm.
+    For instance, for a given feature vector, the prediction is given by
+    determining what side of the hyperplane the point falls into. That is
+    the same as the sign of the feautures' weighted sum (the weights being
+    computed by the algorithm) plus the bias computed by the algorithm.
 
     This algorithm implemented is the PEGASOS method, which alternates
     between stochastic gradient descent steps and projection steps,
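Since the new wording drops the explicit formula, for reference the decision rule it describes is the sign of $\sum_{i=0}^{D-1} w_i f_i + b$, exactly as in the removed text. A tiny numeric sketch of that rule, with made-up weights and bias rather than values learned by the trainer:

    # Illustrative only: the decision rule described above. The weights w
    # and bias b are invented for the example, not learned values.
    import numpy as np

    w = np.array([0.4, -1.2, 0.7])   # weights computed by the algorithm
    b = 0.1                          # bias computed by the algorithm
    f = np.array([1.0, 0.5, -2.0])   # one feature vector

    score = np.dot(w, f) + b         # the features' weighted sum plus bias
    label = 1 if score >= 0 else 0   # which side of the hyperplane f falls on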

src/python/nimbusml/ensemble/ensembleclassifier.py
Lines changed: 8 additions & 8 deletions

@@ -57,14 +57,14 @@ class EnsembleClassifier(core, BasePredictor, ClassifierMixin):
     * ``RandomFeatureSelector``: selects a random subset of the features
       for each model.
 
-:param num_models: indicates the number models to train, i.e. the number of
+:param num_models: Indicates the number models to train, i.e. the number of
     subsets of the training set to sample. The default value is 50. If
     batches are used then this indicates the number of models per batch.
 
 :param sub_model_selector_type: Determines the efficient set of models the
-    ``output_combiner`` uses, and removes the least significant models. This is
-    used to improve the accuracy and reduce the model size. This is also called
-    pruning.
+    ``output_combiner`` uses, and removes the least significant models.
+    This is used to improve the accuracy and reduce the model size. This is
+    also called pruning.
 
     * ``ClassifierAllSelector``: does not perform any pruning and selects
       all models in the ensemble to combine to create the output. This is
@@ -77,9 +77,9 @@ class EnsembleClassifier(core, BasePredictor, ClassifierMixin):
     ``"AccuracyMicro"``, ``"AccuracyMacro"``, ``"LogLoss"``,
     or ``"LogLossReduction"``.
 
-:param output_combiner: indicates how to combine the predictions of the different
-    models into a single prediction. There are five available output
-    combiners for clasification:
+:param output_combiner: Indicates how to combine the predictions of the
+    different models into a single prediction. There are five available
+    outputcombiners for clasification:
 
     * ``ClassifierAverage``: computes the average of the scores produced by
       the trained models.
@@ -123,7 +123,7 @@ class EnsembleClassifier(core, BasePredictor, ClassifierMixin):
 :param train_parallel: All the base learners will run asynchronously if the
     value is true.
 
-:param batch_size: train the models iteratively on subsets of the training
+:param batch_size: Train the models iteratively on subsets of the training
     set of this size. When using this option, it is assumed that the
     training set is randomized enough so that every batch is a random
     sample of instances. The default value is -1, indicating using the

src/python/nimbusml/ensemble/ensembleregressor.py
Lines changed: 8 additions & 8 deletions

@@ -57,14 +57,14 @@ class EnsembleRegressor(core, BasePredictor, RegressorMixin):
     * ``RandomFeatureSelector``: selects a random subset of the features
       for each model.
 
-:param num_models: indicates the number models to train, i.e. the number of
+:param num_models: Indicates the number models to train, i.e. the number of
     subsets of the training set to sample. The default value is 50. If
     batches are used then this indicates the number of models per batch.
 
 :param sub_model_selector_type: Determines the efficient set of models the
-    ``output_combiner`` uses, and removes the least significant models. This is
-    used to improve the accuracy and reduce the model size. This is also called
-    pruning.
+    ``output_combiner`` uses, and removes the least significant models.
+    This is used to improve the accuracy and reduce the model size. This is
+    also called pruning.
 
     * ``RegressorAllSelector``: does not perform any pruning and selects
       all models in the ensemble to combine to create the output. This is
@@ -77,9 +77,9 @@ class EnsembleRegressor(core, BasePredictor, RegressorMixin):
     can be ``"L1"``, ``"L2"``, ``"Rms"``, or ``"Loss"``, or
     ``"RSquared"``.
 
-:param output_combiner: indicates how to combine the predictions of the different
-    models into a single prediction. There are five available output
-    combiners for clasification:
+:param output_combiner: Indicates how to combine the predictions of the
+    different models into a single prediction. There are five available
+    output combiners for clasification:
 
     * ``RegressorAverage``: computes the average of the scores produced by
       the trained models.
@@ -117,7 +117,7 @@ class EnsembleRegressor(core, BasePredictor, RegressorMixin):
 :param train_parallel: All the base learners will run asynchronously if the
     value is true.
 
-:param batch_size: train the models iteratively on subsets of the training
+:param batch_size: Train the models iteratively on subsets of the training
     set of this size. When using this option, it is assumed that the
     training set is randomized enough so that every batch is a random
     sample of instances. The default value is -1, indicating using the

src/python/nimbusml/internal/core/ensemble/ensembleclassifier.py
Lines changed: 8 additions & 8 deletions

@@ -57,14 +57,14 @@ class EnsembleClassifier(
     * ``RandomFeatureSelector``: selects a random subset of the features
       for each model.
 
-:param num_models: indicates the number models to train, i.e. the number of
+:param num_models: Indicates the number models to train, i.e. the number of
     subsets of the training set to sample. The default value is 50. If
     batches are used then this indicates the number of models per batch.
 
 :param sub_model_selector_type: Determines the efficient set of models the
-    ``output_combiner`` uses, and removes the least significant models. This is
-    used to improve the accuracy and reduce the model size. This is also called
-    pruning.
+    ``output_combiner`` uses, and removes the least significant models.
+    This is used to improve the accuracy and reduce the model size. This is
+    also called pruning.
 
     * ``ClassifierAllSelector``: does not perform any pruning and selects
       all models in the ensemble to combine to create the output. This is
@@ -77,9 +77,9 @@ class EnsembleClassifier(
     ``"AccuracyMicro"``, ``"AccuracyMacro"``, ``"LogLoss"``,
     or ``"LogLossReduction"``.
 
-:param output_combiner: indicates how to combine the predictions of the different
-    models into a single prediction. There are five available output
-    combiners for clasification:
+:param output_combiner: Indicates how to combine the predictions of the
+    different models into a single prediction. There are five available
+    outputcombiners for clasification:
 
     * ``ClassifierAverage``: computes the average of the scores produced by
       the trained models.
@@ -123,7 +123,7 @@ class EnsembleClassifier(
 :param train_parallel: All the base learners will run asynchronously if the
     value is true.
 
-:param batch_size: train the models iteratively on subsets of the training
+:param batch_size: Train the models iteratively on subsets of the training
     set of this size. When using this option, it is assumed that the
     training set is randomized enough so that every batch is a random
     sample of instances. The default value is -1, indicating using the

src/python/nimbusml/internal/core/ensemble/ensembleregressor.py
Lines changed: 8 additions & 8 deletions

@@ -55,14 +55,14 @@ class EnsembleRegressor(
     * ``RandomFeatureSelector``: selects a random subset of the features
       for each model.
 
-:param num_models: indicates the number models to train, i.e. the number of
+:param num_models: Indicates the number models to train, i.e. the number of
     subsets of the training set to sample. The default value is 50. If
     batches are used then this indicates the number of models per batch.
 
 :param sub_model_selector_type: Determines the efficient set of models the
-    ``output_combiner`` uses, and removes the least significant models. This is
-    used to improve the accuracy and reduce the model size. This is also called
-    pruning.
+    ``output_combiner`` uses, and removes the least significant models.
+    This is used to improve the accuracy and reduce the model size. This is
+    also called pruning.
 
     * ``RegressorAllSelector``: does not perform any pruning and selects
       all models in the ensemble to combine to create the output. This is
@@ -75,9 +75,9 @@ class EnsembleRegressor(
     can be ``"L1"``, ``"L2"``, ``"Rms"``, or ``"Loss"``, or
     ``"RSquared"``.
 
-:param output_combiner: indicates how to combine the predictions of the different
-    models into a single prediction. There are five available output
-    combiners for clasification:
+:param output_combiner: Indicates how to combine the predictions of the
+    different models into a single prediction. There are five available
+    output combiners for clasification:
 
     * ``RegressorAverage``: computes the average of the scores produced by
       the trained models.
@@ -115,7 +115,7 @@ class EnsembleRegressor(
 :param train_parallel: All the base learners will run asynchronously if the
     value is true.
 
-:param batch_size: train the models iteratively on subsets of the training
+:param batch_size: Train the models iteratively on subsets of the training
     set of this size. When using this option, it is assumed that the
     training set is randomized enough so that every batch is a random
     sample of instances. The default value is -1, indicating using the

src/python/nimbusml/internal/core/linear_model/linearsvmbinaryclassifier.py
Lines changed: 4 additions & 6 deletions

@@ -26,12 +26,10 @@ class LinearSvmBinaryClassifier(
 .. remarks::
     Linear SVM implements an algorithm that finds a hyperplane in the
     feature space for binary classification, by solving an SVM problem.
-    For instance, with feature values $f_0, f_1,..., f_{D-1}$, the
-    prediction is given by determining what side of the hyperplane the
-    point falls into. That is the same as the sign of the feautures'
-    weighted sum, i.e. $\sum_{i = 0}^{D-1} \left(w_i * f_i \right) + b$,
-    where $w_0, w_1,..., w_{D-1}$ are the weights computed by the
-    algorithm, and *b* is the bias computed by the algorithm.
+    For instance, for a given feature vector, the prediction is given by
+    determining what side of the hyperplane the point falls into. That is
+    the same as the sign of the feautures' weighted sum (the weights being
+    computed by the algorithm) plus the bias computed by the algorithm.
 
     This algorithm implemented is the PEGASOS method, which alternates
     between stochastic gradient descent steps and projection steps,

src/python/nimbusml/linear_model/linearsvmbinaryclassifier.py
Lines changed: 4 additions & 6 deletions

@@ -29,12 +29,10 @@ class LinearSvmBinaryClassifier(
 .. remarks::
     Linear SVM implements an algorithm that finds a hyperplane in the
     feature space for binary classification, by solving an SVM problem.
-    For instance, with feature values $f_0, f_1,..., f_{D-1}$, the
-    prediction is given by determining what side of the hyperplane the
-    point falls into. That is the same as the sign of the feautures'
-    weighted sum, i.e. $\sum_{i = 0}^{D-1} \left(w_i * f_i \right) + b$,
-    where $w_0, w_1,..., w_{D-1}$ are the weights computed by the
-    algorithm, and *b* is the bias computed by the algorithm.
+    For instance, for a given feature vector, the prediction is given by
+    determining what side of the hyperplane the point falls into. That is
+    the same as the sign of the feautures' weighted sum (the weights being
+    computed by the algorithm) plus the bias computed by the algorithm.
 
     This algorithm implemented is the PEGASOS method, which alternates
     between stochastic gradient descent steps and projection steps,
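Finally, a hedged usage sketch for the class these last two files document. It assumes ``LinearSvmBinaryClassifier`` follows the same scikit-learn-style ``fit``/``predict`` surface as the ensemble estimators above and that its defaults are acceptable; the data is synthetic and illustrative only.

    # Illustrative sketch (not part of this commit): training the linear
    # SVM documented above on synthetic, roughly linearly separable data.
    import numpy as np
    import pandas as pd
    from nimbusml.linear_model import LinearSvmBinaryClassifier

    X = pd.DataFrame(np.random.rand(200, 3).astype(np.float32),
                     columns=['f0', 'f1', 'f2'])
    y = pd.Series((X['f0'] + X['f1'] > 1.0).astype(np.float32), name='label')

    svm = LinearSvmBinaryClassifier()  # defaults; PEGASOS-based trainer
    svm.fit(X, y)
    preds = svm.predict(X)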

0 commit comments