> This repository was archived by the owner on Jul 4, 2025. It is now read-only.

> ⚠️ **Cortex is currently in Development**: Expect breaking changes and bugs!
## About

Cortex is an OpenAI-compatible C++ AI engine that developers can use to build LLM apps. It comes with a Docker-like command-line interface and client libraries, and it supports running AI models using the `ONNX`, `TensorRT-LLM`, and `llama.cpp` engines. Cortex can function as a standalone server or be integrated as a library.

## Cortex Engines
Cortex supports the following engines:
- [`cortex.llamacpp`](https://github.com/janhq/cortex.llamacpp): a C++ inference library that can be dynamically loaded by any server at runtime. Cortex uses this engine to run inference on GGUF models; `llama.cpp` is optimized for performance on both CPU and GPU.
- [`cortex.onnx`](https://github.com/janhq/cortex.onnx): a C++ inference library for Windows that leverages `onnxruntime-genai` and uses DirectML to provide GPU acceleration across a wide range of hardware and drivers, including AMD, Intel, NVIDIA, and Qualcomm GPUs.
- [`cortex.tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm): a C++ inference library designed for NVIDIA GPUs. It incorporates NVIDIA’s TensorRT-LLM for GPU-accelerated inference.
## Quicklinks
- [Homepage](https://cortex.so/)
- [Docs](https://cortex.so/docs/)
## Quickstart
### Prerequisites

- **OS**:
  - macOS 13.6 or higher.
  - Windows 10 or higher.
  - Ubuntu 22.04 and later.
- **Dependencies**:
  - **CPU instruction sets**: builds for specific instruction sets are available for download from the [Cortex GitHub Releases](https://github.com/janhq/cortex/releases) page.
  - **OpenMPI**: required on Linux. Install it with:

    ```bash
    sudo apt install openmpi-bin libopenmpi-dev
    ```

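Since the release binaries are built against specific CPU instruction sets, it can help to check which SIMD extensions your machine supports before picking a download. A minimal sketch (a hypothetical check, not part of Cortex; Linux only, relies on `/proc/cpuinfo`):

```shell
# List which AVX variants this CPU advertises (Linux only).
# Empty output means none of these flags were found.
grep -o -w -E 'avx512f|avx2|avx' /proc/cpuinfo | sort -u
```

On machines without `/proc/cpuinfo` (e.g. macOS), consult your platform's CPU documentation instead.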
> Visit [Quickstart](https://cortex.so/docs/quickstart) to get started.
## Installation

### macOS
```bash
brew install cortex-engine
```
### Windows
```bash
winget install cortex-engine
```
> You can also install Cortex using the Cortex Installer available on [GitHub Releases](https://github.com/janhq/cortex/releases).

### Linux

```bash
sudo apt install cortex-engine
```

### Docker

**Coming Soon!**

## Cortex Server

```bash
cortex serve

# Output
# Started server at http://localhost:1337
# Swagger UI available at http://localhost:1337/api
```

You can now access the Cortex API server at `http://localhost:1337` and the Swagger UI at `http://localhost:1337/api`.
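With the server up and a model started (the project's quickstart uses `cortex run mistral`), the API can be exercised over HTTP. A hedged sketch of a chat request, assuming the server exposes an OpenAI-style `/v1/chat/completions` route (the exact path and payload fields may differ between Cortex versions):

```shell
# Send a chat completion request to the local Cortex server.
# Assumes `cortex serve` is running on port 1337 and a model named
# "mistral" has been loaded; the route is an OpenAI-style guess.
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral",
        "messages": [{"role": "user", "content": "Hello, Cortex!"}]
      }'
```

If the request succeeds, the response is a JSON object containing the model's reply.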