feature: custom ai executor #1766
Draft
This PR adds support for providing a custom AI executor, instead of always relying on the Vercel AI SDK to execute requests.
See this example code for usage.
You can test this using:

```shell
npm i https://pkg.pr.new/TypeCellOS/BlockNote/@blocknote/xl-ai@1766
```
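Conceptually, a custom executor receives the request the library has prepared and returns the streamed LLM response, bypassing the AI SDK. The `Executor` type and `arrayExecutor` helper below are hypothetical names for illustration (not the actual `@blocknote/xl-ai` API); the array-backed executor mirrors the kind of `LLMResponse.fromArray`-style testing mentioned in the TODO.

```typescript
// Hypothetical shape of a custom executor: it receives the messages prepared
// by the library and returns an async stream of text chunks, with no
// dependency on the Vercel AI SDK.
type ExecutorRequest = {
  messages: { role: "system" | "user" | "assistant"; content: string }[];
};

type Executor = (req: ExecutorRequest) => AsyncIterable<string>;

// An executor that replays a fixed array of chunks -- handy for tests,
// similar in spirit to the `LLMResponse.fromArray` idea.
function arrayExecutor(chunks: string[]): Executor {
  return async function* (_req) {
    for (const chunk of chunks) {
      yield chunk;
    }
  };
}

// Collect a streamed response into a single string.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk;
  }
  return out;
}
```

A real executor would instead call your own backend or an LLM API directly and yield chunks as they arrive.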
TODO:
- `LLMResponse.fromArray` for testing

Alternatives
This PR essentially lets you "opt out" of using the AI SDK as the layer that calls the underlying LLMs.
We can also explore:
a) (lean in on the AI SDK) make it easy to implement a custom Model / Provider which you can then use as the "executor" (i.e., we consider the Model to be the point that defines how LLMs are called)
b) (lean "out" of the AI SDK on the client side) instead of using the AI SDK heavily on the client with a lightweight proxy on the server, we could follow its default design more closely and recommend that users expose REST endpoints which in turn use the AI SDK
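For option (b), the client side could shrink to a thin streaming wrapper around `fetch`, with the AI SDK living entirely behind a REST endpoint the application owns. A rough sketch; the `/api/ai` route and request shape are made up for illustration:

```typescript
// Hypothetical client-side executor for option (b): instead of running the
// AI SDK in the browser, the client streams plain text from a REST endpoint
// exposed by the application (which may itself use the AI SDK server-side).
async function* restExecutor(
  endpoint: string,
  body: unknown,
): AsyncIterable<string> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok || !res.body) {
    throw new Error(`AI endpoint failed: ${res.status}`);
  }
  // Read the response body stream and decode chunks as they arrive.
  const decoder = new TextDecoder();
  const reader = res.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    yield decoder.decode(value, { stream: true });
  }
}
```

The trade-off is that streaming, retries, and provider switching become the server's responsibility rather than something the client library handles.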