
No way to pass requirements_file to TensorflowModel (or any FrameworkModel instance) #182

@zmjjmz

Description


System Information

  • Framework (e.g. TensorFlow) / Algorithm (e.g. KMeans): TensorFlow
  • Framework Version: 1.6
  • Python Version: 2.7.9
  • CPU or GPU: n/a
  • Python SDK Version: 1.2.4
  • Are you using a custom image: No

Describe the problem

I noticed that there's no way to pass a requirements_file to the FrameworkModel initializer (and thus the TensorFlowModel initializer). This is a problem whenever the entry point needs third-party libraries at inference time. It would be great if that argument could be supported.
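For concreteness, here is a sketch of the call I'd like to be able to make. This is hypothetical: requirements_file is exactly the parameter this issue is requesting and does not exist today, and the S3 path, role, and entry-point names are placeholders; the remaining arguments follow the existing TensorFlowModel constructor.

```python
from sagemaker.tensorflow import TensorFlowModel

# Hypothetical usage -- requirements_file is NOT currently accepted by
# TensorFlowModel/FrameworkModel; supporting it is the request here.
model = TensorFlowModel(
    model_data='s3://my-bucket/model.tar.gz',  # placeholder S3 path
    role='MySageMakerRole',                    # placeholder IAM role
    entry_point='entrypoint.py',               # placeholder entry point
    requirements_file='requirements.txt',      # the desired new argument
)
```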

Note: this is a minor annoyance on its own, but I had to strip my code down piece by piece to figure out that this was the cause, since the actual error I got (on deploying an entry point that presumably hit an ImportError) was the following:

Traceback (most recent call last):
  File "sagemaker_pipeline.py", line 209, in <module>
    runner.execute_targets()
  File "/home/u1/zach/stats/statutils/pipeliner.py", line 612, in execute_targets
    target_output_dict[target.node_name] = target.cache_execute()
  File "/home/u1/zach/stats/statutils/pipeliner.py", line 432, in cache_execute
    *map(lambda x: x.cache_execute(), self.parents))
  File "/home/u1/zach/stats/statutils/pipeliner.py", line 432, in <lambda>
    *map(lambda x: x.cache_execute(), self.parents))
  File "/home/u1/zach/stats/statutils/pipeliner.py", line 432, in cache_execute
    *map(lambda x: x.cache_execute(), self.parents))
  File "/home/u1/zach/stats/statutils/pipeliner.py", line 432, in <lambda>
    *map(lambda x: x.cache_execute(), self.parents))
  File "/home/u1/zach/stats/statutils/pipeliner.py", line 479, in cache_execute
    new_data = self.executor(*parent_data_loaded)
  File "/home/u1/zach/stats/omc/utils/sagemaker_components.py", line 790, in batched_predict_classifications_df
    instance_type=instance_type, initial_instance_count=initial_instance_count)
  File "/home/u1/zach/stats/omc/utils/sagemaker_components.py", line 357, in maybe_deploy
    endpoint_name=self.endpoint_name
  File "/home/u1/zach/proj/dataplayground2/local/lib/python2.7/site-packages/sagemaker/model.py", line 92, in deploy
    self.sagemaker_session.endpoint_from_production_variants(self.endpoint_name, [production_variant])
  File "/home/u1/zach/proj/dataplayground2/local/lib/python2.7/site-packages/sagemaker/session.py", line 522, in endpoint_from_production_variants
    return self.create_endpoint(endpoint_name=name, config_name=name, wait=wait)
  File "/home/u1/zach/proj/dataplayground2/local/lib/python2.7/site-packages/sagemaker/session.py", line 354, in create_endpoint
    self.wait_for_endpoint(endpoint_name)
  File "/home/u1/zach/proj/dataplayground2/local/lib/python2.7/site-packages/sagemaker/session.py", line 415, in wait_for_endpoint
    raise ValueError('Error hosting endpoint {}: {} Reason: {}'.format(endpoint, status, reason))
ValueError: Error hosting endpoint sagemaker-endtoend-conffilt-model0-test-2018-05-11-15-08-51-790: Failed Reason: The primary container for production variant AllTraffic did not pass the ping health check.

This was not a helpful error!
