Spotted in 7.6.0
We currently ship with one pre-packed model: lang_ident_model_1. When calling GET _ml/inference, that model is always prepended to the list of trained_model_configs in the response.
This also means that the from and size query parameters are not respected. The pre-packed model will always be the first model in the list, unless a model_id is specified.
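For illustration, a sketch of the observed behavior, assuming one additional model stored in .ml-inference* (my_custom_model is a hypothetical id, and the response is abridged; the report above only states that the pre-packed model is always first and that from/size are ignored):

```
GET _ml/inference?from=1&size=1

{
  "count": 2,
  "trained_model_configs": [
    { "model_id": "lang_ident_model_1", ... },
    ...
  ]
}
```

Even though from=1&size=1 should skip the first model and return a single config, lang_ident_model_1 still appears at the top of trained_model_configs.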
Expected:
In GET _ml/inference responses, pre-packed models should be treated the same way as the models stored in .ml-inference*, so that from and size apply to them like any other model.
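With the same hypothetical setup as above, the expected response to the paged request would look something like:

```
GET _ml/inference?from=1&size=1

{
  "count": 2,
  "trained_model_configs": [
    { "model_id": "my_custom_model", ... }
  ]
}
```

i.e. the pre-packed model is paged over with from/size just like a model stored in .ml-inference*.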