
Controlling Context in Client for Sampling Requests #1205

@yarnabrina

Description

Question

I am exploring the sampling components in MCP. Based on my understanding of the protocol, this should allow the server to utilise the LLM(s) available on the client side. Unfortunately, none of the examples provide a working demonstration of this.

I have managed to get something working here, but I had to invoke the LLM directly using only part of the information from the CreateMessageRequestParams object. I have not been able to determine the purpose of RequestContext or how it can be leveraged.

In summary, these are my questions:

  1. How can I access the LLM(s) available to the client directly, rather than invoking an LLM explicitly myself?
  2. How can I use modelPreferences to suggest that the client change its choice of LLM based on priorities or name hints?
  3. How can I use the RequestContext to pass available context to the LLM for improved generation?
  4. How can I use the includeContext attribute of CreateMessageRequestParams to filter what information is included in the context?

Any guidance or examples would be greatly appreciated.

Additional Context

  • MCP Python SDK version: 1.12.2
  • Python version: 3.13.5
  • Operating System: Ubuntu 22.04.5 LTS on Windows 11 Home via WSL 2

Metadata

    Labels

    P1 (Significant bug affecting many users, highly requested feature), documentation (Improvements or additions to documentation), good first issue (Good for newcomers), ready for work (Enough information for someone to start working on)
