This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit ae7d3b5

committed
broken link for deprecated /cli/chat
1 parent c053cdd commit ae7d3b5

File tree

2 files changed (+2, −2)


docs/docs/capabilities/models/model-yaml.mdx

Lines changed: 1 addition & 1 deletion
@@ -179,7 +179,7 @@ Model load parameters include the options that control how Cortex.cpp runs the m
 | `prompt_template` | Template for formatting the prompt, including system messages and instructions. | Yes |
 | `engine` | The engine that run model, default to `llama-cpp` for local model with gguf format. | Yes |
 
-All parameters from the `model.yml` file are used for running the model via the [CLI chat command](/docs/cli/chat) or [CLI run command](/docs/cli/run). These parameters also act as defaults when using the [model start API](/api-reference#tag/models/post/v1/models/start) through cortex.cpp.
+All parameters from the `model.yml` file are used for running the model via the [CLI run command](/docs/cli/run). These parameters also act as defaults when using the [model start API](/api-reference#tag/models/post/v1/models/start) through cortex.cpp.
 
 ## Runtime parameters

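As a rough sketch of the file this diff's documentation describes, a minimal `model.yml` might look like the following. Only the `prompt_template` and `engine` field names come from the parameter table above; the model name, template text, and every value shown are illustrative assumptions, not taken from the Cortex docs.

```yaml
# Illustrative model.yml sketch (values are assumptions, not official defaults).
model: llama3.2:3b-gguf          # hypothetical model identifier
prompt_template: |               # formats system messages and instructions
  <|system|>{system_message}<|user|>{prompt}<|assistant|>
engine: llama-cpp                # per the table: default engine for local GGUF models
```

A file like this would then supply the defaults used by `cortex run` and by the model start API, as the changed paragraph states.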
docs/docs/chat-completions.mdx

Lines changed: 1 addition & 1 deletion
@@ -146,5 +146,5 @@ Cortex also acts as an aggregator for remote inference requests from a single en
 :::note
 Learn more about Chat Completions capabilities:
 - [Chat Completions API Reference](/api-reference#tag/inference/post/chat/completions)
-- [Chat Completions CLI command](/docs/cli/chat)
+- [`cortex run` CLI command](/docs/cli/run)
 :::
