
Sagemaker SDK on lambda function invoking serverless endpoint #4123

@KaramRazooq

Description

I'm getting the following messages while running the SageMaker SDK in my Lambda function.
I have installed the correct libraries, compatible with Python 3.11 and the Lambda runtime.

The log output from Lambda:

sagemaker.config INFO - Not applying SDK defaults from location: /etc/xdg/sagemaker/config.yaml
sagemaker.config INFO - Not applying SDK defaults from location: /home/sbx_user1051/.config/sagemaker/config.yaml

The log from the endpoint:

2023-09-16T21:55:46,174 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Executing input_fn from inference.py ...
2023-09-16T21:55:46,174 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3557
2023-09-16T21:55:46,174 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Executing predict_fn from inference.py ...
2023-09-16T21:55:46,174 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Executing output_fn from inference.py ...
2023-09-16T21:55:46,174 [INFO ] W-9000-model_1.0 ACCESS_LOG - /127.0.0.1:35074 "POST /invocations HTTP/1.1" 200 3558
2023-09-16T21:55:46,174 [INFO ] W-9000-model_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:169.254.29.125,timestamp:1694901018

My code is:

from sagemaker.pytorch.model import PyTorchPredictor
from sagemaker.deserializers import JSONDeserializer

# decode holds the request payload prepared earlier in the handler (not shown)
predictor = PyTorchPredictor(endpoint_name=ENDPOINT_NAME, deserializer=JSONDeserializer())
result = predictor.predict(decode)

The predictor.predict call is what breaks the run and emits the INFO-level log messages.
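
For what it's worth, those two sagemaker.config INFO lines come from the SDK looking for default config files at import time, so they may just be noise rather than the failure itself. A minimal sketch to silence them in the handler (the logger name is taken from the Lambda log above):

import logging

# Suppress the SDK's config-lookup INFO messages seen in the Lambda log
logging.getLogger("sagemaker.config").setLevel(logging.WARNING)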

Any thoughts on how to fix this?
I looked through the documentation for something that could fix this but did not find anything.
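
In case it helps narrow things down, one possible workaround is to invoke the endpoint through boto3 directly and skip the SageMaker SDK in Lambda altogether. A sketch assuming a JSON request and response (payload is a hypothetical stand-in for the request data built elsewhere):

import json
import boto3

# Call the serverless endpoint via the SageMaker runtime API instead of the SDK
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),  # payload: hypothetical request data
)
result = json.loads(response["Body"].read())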
