Closed
I tried to test code for deploying a PyTorchModel on SageMaker fully locally. I have also created the file ~/.sagemaker/config.yaml as described in https://sagemaker.readthedocs.io/en/stable/overview.html#local-mode. The code looks something like this:
```python
from sagemaker.local import LocalSession
from sagemaker.pytorch import PyTorchModel
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

sess = LocalSession()
sess.config = {'local': {'local_code': True}}

model = PyTorchModel(
    model_data="model1.tar.gz",
    role=SOMEROLE,
    framework_version="1.8.1",
    py_version="py3",
    entry_point="inference.py",
)
model.sagemaker_session = sess

predictor = model.deploy(
    instance_type="local",
    initial_instance_count=1,
    deserializer=JSONDeserializer(),
    serializer=JSONSerializer(),
)
```
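For completeness, the ~/.sagemaker/config.yaml mentioned above is a minimal sketch along these lines (the keys are assumed from the linked overview page and mirror what sess.config sets):

```yaml
# ~/.sagemaker/config.yaml -- local mode settings (sketch)
local:
  local_code: true  # use local code; mirrors sess.config in the snippet above
```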
It turns out that every call to the deploy method uploads my model to an S3 bucket, even though everything is running completely locally.
When testing fully locally, with the correct Docker images pulled beforehand, this should not even need an internet connection; it certainly should not waste time uploading the model and filling my S3 space with all those model files.
See also #2451