Thanks for providing chatlas.
Recent models like deepseek-r1 or deepscaler produce a thinking/reasoning part in addition to the final answer.
Is there a way to extract this information from chatlas, in order to split out the thinking/reasoning part from the final answer?
E.g., in LM Studio you get a reasoning section that you can uncollapse.
How can you extract the thinking/reasoning part of the output with chatlas? I'm using ChatOllama.
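For reference, here is the workaround I'm currently considering (a rough sketch, not a confirmed chatlas feature): deepseek-r1 served through Ollama typically emits its reasoning inline as a `<think>...</think>` block at the start of the response text, so it can be separated with a regex. The `split_reasoning` helper below is hypothetical, and it assumes that `str()` of the chat response yields the full text returned by the model.

```python
import re

from chatlas import ChatOllama


def split_reasoning(text: str) -> tuple[str, str]:
    """Hypothetical helper: split a <think>...</think> block from the answer.

    Assumes the model (e.g. deepseek-r1 via Ollama) emits its reasoning
    inline between <think> tags; returns (reasoning, answer).
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning block found; treat everything as the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer


chat = ChatOllama(model="deepseek-r1")        # assumes deepseek-r1 is pulled locally
response = chat.chat("Why is the sky blue?")
# Assumption: str(response) gives the full raw text, including any <think> block.
reasoning, answer = split_reasoning(str(response))

print("Reasoning:\n", reasoning)
print("\nAnswer:\n", answer)
```

This only works if the reasoning is actually embedded in the returned text; if chatlas (or Ollama) strips or separates it internally, a dedicated accessor in chatlas would be much nicer.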
