Add an example using autogen-core and FastAPI to create streaming responses #6335
Conversation
@microsoft-github-policy-service agree

@ToryPan I think some of your ideas from this example could really help improve it. Would love to get your thoughts!
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff           @@
##             main    #6335   +/-  ##
=======================================
  Coverage   77.97%   77.97%
=======================================
  Files         214      214
  Lines       15377    15377
=======================================
  Hits        11990    11990
  Misses       3387     3387
=======================================
…e0424 * upstream/main:
- Remove `name` field from OpenAI Assistant Message (microsoft#6388)
- Introduce workbench (microsoft#6340)
- TEST/change gpt4, gpt4o serise to gpt4.1nano (microsoft#6375)
- update website version (microsoft#6364)
- Add self-debugging loop to `CodeExecutionAgent` (microsoft#6306)
- Fix: deserialize model_context in AssistantAgent and SocietyOfMindAgent and CodeExecutorAgent (microsoft#6337)
- Add azure ai agent (microsoft#6191)
- Avoid re-registering a message type already registered (microsoft#6354)
- Added support for exposing GPUs to docker code executor (microsoft#6339)
- fix: ollama fails when tools use optional args (microsoft#6343)
- Add an example using autogen-core and FastAPI to create streaming responses (microsoft#6335)
- FEAT: SelectorGroupChat could using stream inner select_prompt (microsoft#6286)
- Add experimental notice to canvas (microsoft#6349)
- DOC: add extentions - autogen-oaiapi and autogen-contextplus (microsoft#6338)
- fix: ensure serialized messages are passed to LLMStreamStartEvent (microsoft#6344)
- Generalize Continuous SystemMessage merging via model_info["multiple_system_messages"] instead of `startswith("gemini-")` (microsoft#6345)
- Agentchat canvas (microsoft#6215)

Signed-off-by: Peter Jausovec <[email protected]>
Why are these changes needed?
This PR adds an example demonstrating how to build a streaming chat API with multi-turn conversation history using autogen-core and FastAPI.
Related issue number
Checks