53 changes: 52 additions & 1 deletion README.rst
@@ -30,7 +30,8 @@ Table of Contents
5. `Chainer SageMaker Estimators <#chainer-sagemaker-estimators>`__
6. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
7. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
8. `BYO Model <#byo-model>`__
8. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
9. `BYO Model <#byo-model>`__


Getting SageMaker Python SDK
@@ -263,6 +264,56 @@ Please refer to the full example in the examples repo:
The example notebook is located here:
``advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb``


SageMaker Automatic Model Tuning
--------------------------------

All of the estimators can be used with SageMaker Automatic Model Tuning, which performs hyperparameter tuning jobs. A hyperparameter tuning job runs multiple training jobs with different hyperparameter combinations to find the set of values that performs best. The SageMaker Python SDK contains a ``HyperparameterTuner`` class for creating and interacting with hyperparameter tuning jobs. You can read more about SageMaker Automatic Model Tuning in the `AWS documentation <https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html>`__.

Here is a basic example of how to use ``HyperparameterTuner`` to start tuning jobs instead of using an estimator to start training jobs:

.. code:: python

    from sagemaker.tuner import HyperparameterTuner, ContinuousParameter

    # Configure HyperparameterTuner
    my_tuner = HyperparameterTuner(estimator=my_estimator,  # previously-configured Estimator object
                                   objective_metric_name='validation-accuracy',
                                   hyperparameter_ranges={'learning-rate': ContinuousParameter(0.05, 0.06)},
                                   metric_definitions=[{'Name': 'validation-accuracy', 'Regex': r'validation-accuracy=(\d\.\d+)'}],
                                   max_jobs=100,
                                   max_parallel_jobs=10)

    # Start hyperparameter tuning job
    my_tuner.fit({'train': 's3://my_bucket/my_training_data', 'test': 's3://my_bucket/my_testing_data'})

    # Deploy best model
    my_predictor = my_tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

    # Make a prediction against the SageMaker endpoint
    response = my_predictor.predict(my_prediction_data)

    # Tear down the SageMaker endpoint
    my_tuner.delete_endpoint()
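
The ``hyperparameter_ranges`` dictionary is not limited to continuous ranges. As a small sketch, the range types in ``sagemaker.tuner`` can be mixed in one dictionary; the hyperparameter names below (``learning-rate``, ``num-layers``, ``optimizer``) are placeholders for whatever hyperparameters your estimator actually accepts:

.. code:: python

    from sagemaker.tuner import ContinuousParameter, IntegerParameter, CategoricalParameter

    # Each entry maps a hyperparameter name to the range the tuner is allowed to explore
    my_hyperparameter_ranges = {
        'learning-rate': ContinuousParameter(0.05, 0.06),   # any float in [0.05, 0.06]
        'num-layers': IntegerParameter(2, 10),               # any integer in [2, 10]
        'optimizer': CategoricalParameter(['sgd', 'adam']),  # one of the listed values
    }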

Each ``HyperparameterTuner`` instance also provides an analytics object, which exposes useful information about the hyperparameter tuning job, such as a pandas dataframe summarizing the associated training jobs:

.. code:: python

    # Retrieve analytics object
    my_tuner_analytics = my_tuner.analytics()

    # Look at summary of associated training jobs
    my_dataframe = my_tuner_analytics.dataframe()
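
As a usage sketch, assuming the summary dataframe includes a ``FinalObjectiveValue`` column (column names can vary by SDK version), you can rank the associated training jobs by their objective metric:

.. code:: python

    # Sort the training jobs by objective metric, best first
    best_jobs = my_dataframe.sort_values('FinalObjectiveValue', ascending=False)

    # Inspect the top candidates and their hyperparameter values
    print(best_jobs.head())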

For more detailed examples of running hyperparameter tuning jobs, see: https://github.com/awslabs/amazon-sagemaker-examples.

For more detailed explanations of the classes mentioned, see:

- `API docs for HyperparameterTuner and parameter range classes <https://sagemaker.readthedocs.io/en/latest/tuner.html>`__.
- `API docs for analytics classes <https://sagemaker.readthedocs.io/en/latest/analytics.html>`__.


FAQ
---
