\n",
+ "gpt-3.5-turbo-1106 | chat | openai | The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. |\n",
+ "gpt-3.5-turbo | chat | openai | Currently points to gpt-3.5-turbo-0613. |\n",
+ "gpt-3.5-turbo-16k | chat | openai | Currently points to gpt-3.5-turbo-16k-0613. |\n",
+ "gpt-3.5-turbo-instruct | llm | openai | Similar capabilities to text-davinci-003 but compatible with the legacy Completions endpoint rather than Chat Completions. |\n",
+ "gpt-3.5-turbo-0613 | chat | openai | Legacy. Snapshot of gpt-3.5-turbo from June 13th, 2023. Will be deprecated on June 13th, 2024. |\n",
+ "gpt-3.5-turbo-16k-0613 | chat | openai | Legacy. Snapshot of gpt-3.5-turbo-16k from June 13th, 2023. Will be deprecated on June 13th, 2024. |\n",
+ "gpt-3.5-turbo-0301 | chat | openai | Legacy. Snapshot of gpt-3.5-turbo from March 1st, 2023. Will be deprecated on June 13th, 2024. |\n",
+ "text-davinci-003 | llm | openai | Legacy. Can do language tasks with better quality and consistency than the curie, babbage, or ada models. Will be deprecated on Jan 4th, 2024. |\n",
+ "text-davinci-002 | llm | openai | Legacy. Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning. Will be deprecated on Jan 4th, 2024. |\n",
+ "code-davinci-002 | llm | openai | Legacy. Optimized for code-completion tasks. Will be deprecated on Jan 4th, 2024. |\n",
+ "llama-v2-7b-chat | chat | fireworks | 7B parameter Llama 2 chat model |\n",
+ "llama-v2-13b-chat | chat | fireworks | 13B parameter Llama 2 chat model |\n",
+ "llama-v2-70b-chat | chat | fireworks | 70B parameter Llama 2 chat model |\n",
+ "\n",
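+ "The `model` values above are plain string identifiers. As a minimal sketch (assuming the v1 `openai` Python SDK and an `OPENAI_API_KEY` set in the environment; the `fireworks` models are served by Fireworks AI's own API and are not covered here), one of the `chat`-type OpenAI models could be called like this:\n",
+ "\n",
+ "```python\n",
+ "from openai import OpenAI\n",
+ "\n",
+ "client = OpenAI()  # reads OPENAI_API_KEY from the environment\n",
+ "\n",
+ "# Any `chat`-type OpenAI model from the table can be substituted here.\n",
+ "response = client.chat.completions.create(\n",
+ "    model=\"gpt-3.5-turbo-1106\",\n",
+ "    messages=[{\"role\": \"user\", \"content\": \"Say hello in one word.\"}],\n",
+ ")\n",
+ "print(response.choices[0].message.content)\n",
+ "```\n",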
+ "