Replies: 2 comments
-
Actually, I was about to open a discussion for the opposite, haha. When using the Ollama client, it stopped outputting the thinking process after this update. It would be good to have this as an option: seeing the thinking process while developing helps a lot with prompt engineering. But I agree, it should be optional.
-
I'm facing the same issue: some models, like the Claude models, do not output a thinking tag and instead show the thinking steps directly in the output, which makes my final output include the tool-use thinking steps. If the thinking format were consistent across all models, it would be easier for developers to render the thinking process on the frontend.
-
Hi folks,
I'm building an agent that is embedded in a chatbot. The agent uses several tools. The challenge I'm facing is that the agent outputs the LLM's thinking (within tags), which includes its tool plan. How can I exclude the thinking and output only the final content?
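In case it helps while waiting for a built-in option: since the thinking is wrapped in tags, a workaround is to strip those blocks from the response before showing it to the user. This is a minimal sketch assuming the model wraps its reasoning in `<think>...</think>` tags (the exact tag name varies by model, so adjust the pattern to whatever your model emits):

```python
import re

# Assumes the model emits reasoning inside <think>...</think>;
# change the tag name to match your model's output.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(text: str) -> str:
    """Remove thinking blocks, leaving only the final answer."""
    return THINK_BLOCK.sub("", text).strip()

raw = "<think>I should call the weather tool first.</think>It is sunny today."
print(strip_thinking(raw))  # -> It is sunny today.
```

Note this only works when the thinking is reliably tagged; as mentioned above, some models emit the reasoning untagged, in which case there is nothing structural to filter on.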