
Gut mistakes made in python Chat #60


Closed
@cpsievert wants to merge 2 commits into monorepo from monorepo-gut-py-chat
Conversation

@cpsievert (Collaborator) commented May 28, 2025

This PR removes some features in Chat that predate chatlas and add a lot of complexity to the implementation without much benefit.

For reference, here is the list:

  • Chat(messages=) was removed. Use chat.ui(messages=) instead.
  • Chat(tokenizer=) was removed. This was only relevant for .messages(token_limits=[]) which was also removed.
  • Several parameters were removed from .messages(). This reflects an overall change in approach to maintaining the conversation history sent to the LLM -- Chat should no longer be responsible for maintaining this conversation state; instead, another stateful object (perhaps the one provided by chatlas, Pydantic AI, LangChain, etc.) should be used to maintain it (see the sketch after this list). That said, .messages() can still be useful if you want a reflection of the message state on the client.
  • The transform_user_input and transform_assistant_response decorators were removed. This change also reflects the new recommendation of using another stateful object to submit input and retain conversation history.
  • .user_input()'s transform parameter was removed (as a consequence of the previous point).
  • The already deprecated .set_user_message() method was officially removed (in favor of .update_user_input()).
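To make the new division of labor concrete, here is a minimal sketch assuming chatlas's ChatAnthropic and Shiny Express (the particular client and startup message are illustrative, not prescribed by this PR): the chatlas client owns the conversation history, and Chat is only responsible for the UI.

```python
from chatlas import ChatAnthropic
from shiny.express import ui

# The chatlas client is the stateful object that maintains the conversation
# history sent to the LLM; Chat no longer tracks it.
chat_client = ChatAnthropic()

chat = ui.Chat(id="chat")
# Startup messages now go to chat.ui(), not Chat(messages=)
chat.ui(messages=["Hello! How can I help you today?"])

@chat.on_user_submit
async def handle_user_input(user_input: str):
    # The client appends user_input and the response to its own history;
    # append_message_stream() just mirrors the response in the UI.
    response = await chat_client.stream_async(user_input)
    await chat.append_message_stream(response)
```

In this setup, .messages() remains available, but only as a reflection of what the client-side UI is showing.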

NOTE: Although these features are removed here, we should do a py-shiny release that deprecates these features first, then removes them full-stop when we move to re-export shinychat in shiny.
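For that deprecation release, something along these lines (purely illustrative, not the actual py-shiny code) would keep the old constructor argument working for one release while warning:

```python
import warnings

class Chat:
    def __init__(self, id: str, *, messages=(), **kwargs):
        if messages:
            warnings.warn(
                "`Chat(messages=)` is deprecated; pass `messages` to "
                "`chat.ui()` instead.",
                DeprecationWarning,
                stacklevel=2,
            )
        # Keep honoring the old argument for one release so existing apps
        # don't break before the removal lands via the shinychat re-export.
        self._startup_messages = list(messages)
        self.id = id
```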

Follow-up

  • Create shiny PR to deprecate these features
  • Possibly externalize the _chat_normalize.py logic into a more explicit API for transforming provider-specific completion objects into a ChatMessage/ChatMessageDict (a possible shape is sketched after this list). That way .append_message() et al. can have a stronger message type.
  • When we eventually go to re-export shinychat in shiny, remove the "3rd party" LLM dependencies from shiny's pyproject.toml
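On the _chat_normalize.py point, one possible shape for that API (the names here, like register_normalizer, are hypothetical, and the ChatMessageDict fields are simplified) is a registry that maps provider-specific completion types to functions producing the strongly typed message that .append_message() would accept:

```python
from typing import Any, Callable, TypedDict


class ChatMessageDict(TypedDict):
    # Simplified stand-in for the actual message dict
    content: str
    role: str


# Hypothetical registry of provider-specific normalizers
_normalizers: dict[type, Callable[[Any], ChatMessageDict]] = {}


def register_normalizer(cls: type, fn: Callable[[Any], ChatMessageDict]) -> None:
    """Register a function that turns a provider completion into a ChatMessageDict."""
    _normalizers[cls] = fn


def normalize(completion: Any) -> ChatMessageDict:
    for cls, fn in _normalizers.items():
        if isinstance(completion, cls):
            return fn(completion)
    raise TypeError(f"No normalizer registered for {type(completion)!r}")
```

Each provider integration would then call register_normalizer() once for its completion type, and .append_message() could narrow its parameter to ChatMessage/ChatMessageDict.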

@cpsievert force-pushed the monorepo-gut-py-chat branch from b05546c to 92ca7a8 on May 30, 2025 14:28
@cpsievert force-pushed the monorepo-gut-py-chat branch from 92ca7a8 to a9ecef8 on May 30, 2025 15:04
@cpsievert force-pushed the monorepo-gut-py-chat branch from a9ecef8 to a03e195 on May 30, 2025 15:12
@gadenbuie deleted the branch monorepo on May 30, 2025 15:23
@gadenbuie closed this on May 30, 2025
@gadenbuie (Collaborator) commented:

@cpsievert Can you update the base branch to main and re-open the PR?

@cpsievert (Collaborator, Author) commented:

I just opened up a new one in #62
