Conversation

@vansangpfiev (Contributor) commented Sep 27, 2024

Describe Your Changes

`cortex run tinyllama:gguf` (runs the model in the background and advises the user to use `cortex chat` for interactive chat)

Chat (Interactive)

`cortex run tinyllama:gguf --chat` (for interactive chat)

`cortex chat tinyllama:gguf` (shorthand for `run` with `--chat`; opens an interactive shell)

Chat (single message)

`cortex chat tinyllama:gguf -m <message>` (sends a single message)

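For quick reference, the command variants described above can be consolidated into one sketch. The model name is just the example used in this PR, the sample message is illustrative, and the comments paraphrase the behavior described above rather than exact CLI output:

```sh
# Start the model in the background; the CLI suggests `cortex chat` for interactive use.
cortex run tinyllama:gguf

# Start the model and drop straight into an interactive chat session.
cortex run tinyllama:gguf --chat

# Shorthand for `run` with `--chat`: opens an interactive shell with the model.
cortex chat tinyllama:gguf

# Send a single message (example text) and print the model's reply.
cortex chat tinyllama:gguf -m "Hello"
```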

Fixes Issues

Self Checklist

  • Added relevant comments, especially in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

@vansangpfiev marked this pull request as ready for review on September 30, 2024 at 02:18
@namchuai (Contributor) left a comment:
✅ lgtm

@vansangpfiev merged commit 44d831f into dev on Sep 30, 2024
4 checks passed
@vansangpfiev deleted the fix/cmd branch on September 30, 2024 at 05:38
@gabrielle-ong mentioned this pull request on Sep 30, 2024

Development

Successfully merging this pull request may close this issue:

feat: cortex run model(:gguf)

4 participants