Make the LLM module more configurable. #1070
Merged
Conversation
Codecov Report: ❌ patch coverage check. Additional details and impacted files:

@@           Coverage Diff            @@
## llm-integration #1070 +/- ##
===================================================
- Coverage 79.46% 79.26% -0.20%
===================================================
Files 251 254 +3
Lines 12493 12543 +50
Branches 1229 1232 +3
===================================================
+ Hits 9927 9942 +15
- Misses 2298 2330 +32
- Partials 268 271 +3

☔ View full report in Codecov by Sentry.
Force-pushed from 7240eb4 to dd5d982
Force-pushed from dd5d982 to e6bab01
Force-pushed from 7fa1c8b to 03ff4bd
pboers1988 added a commit that referenced this pull request on Sep 18, 2025
* Vector search and agent mode POC
* l
* add ag-ui package
* fix linting
* last lint fix
* Streaming pipeline for indexing, using litellm to track token count vs allowed token count. Make conflicting libraries pydantic-ai and ag-ui optional, disabling the agent route if they are not installed. Make search routes async and fix small bugs in query building.
* fix mypy issues & use pgvector image
* use pgvector for codspeed tests
* update docs and cleanup
* use python 3.10+ style type hinting
* refactor from a list of filter conditions to a filter tree; include endpoints for autocompleting paths and UI-compatible operators per field type for frontend rendering
* small bugfixes
* Bump pydantic to 2.11
* CLI command to reshape the vector embeddings column; improved local setup settings and instructions
* Update mask_value for masking exposed settings and fix unit tests
* Bump version to 5.0.0a1
* Add keyset pagination, include search metadata, load detailed subscription records in the response, improve highlighting
* Speedtest; improved retrieval speed by limiting the search space, improved substring highlighting
* Normalize all retriever scores to the 0-1 range and other small fixes
* fix linting issues
* Add matchedfields for all endpoints and for structured searches
* refactor the path endpoint and filters to simplify structured filtering with just a field name and value type. Support component contains/not contains filters.
* negation on group level, not record level
* Make agent packages required and remove import safeguards
* Refactor traverse.py to use model based traversal with typing introsp… (#1069)
* Refactor traverse.py to use model-based traversal with typing introspection, with full unit test coverage
* some fixes
* move type mapping to the types file and fix linting errors
* Sanitize product name for ltree, batched index deletes, add value_type to the content hash, add test coverage
* Sanitize product name for ltree, batched index deletes, add value_type to the content hash, add test coverage
* Now that we are introspecting models, is_embeddable should do a value check instead of relying on type hints
* Mimic from_product_id without database operations to use a template subscription for traversal
* Mimic from_product_id without database operations to use a template subscription for traversal
* Make the LLM module more configurable. (#1070)
  * Make the LLM module more configurable and do not install all deps straight away
  * Fix linting problems
  * Agentic app
  * Fixes
  * Simplify start up
  * lint issue
  * Added some initial documentation for the LLM module

Co-authored-by: Tim Frohlich <[email protected]>
Co-authored-by: Mark90 <[email protected]>
Co-authored-by: tjeerddie <[email protected]>
Co-authored-by: Peter Boers <[email protected]>
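The "make pydantic-ai and ag-ui optional, disabling the agent route if not installed" idea from the commit list can be sketched as an import guard. This is a hypothetical illustration only: the package tuple, function names, and route prefixes below are assumptions, not the actual orchestrator-core code.

```python
from importlib.util import find_spec

# Optional extras; the agent feature needs all of them (assumed names).
OPTIONAL_AGENT_PACKAGES = ("pydantic_ai", "ag_ui")


def agent_dependencies_installed(packages=OPTIONAL_AGENT_PACKAGES) -> bool:
    """True only when every optional agent package is importable."""
    return all(find_spec(name) is not None for name in packages)


def build_routes() -> list[str]:
    """Collect route prefixes to mount.

    The agent route is silently skipped (rather than raising ImportError
    at import time) when the optional extras are missing.
    """
    routes = ["/search"]  # search endpoints are always available
    if agent_dependencies_installed():
        routes.append("/agent")
    return routes
```

The point of the pattern is that a plain `pip install` of the core package works without the agent extras, while installing them flips the feature on with no configuration change.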
This PR makes the LLM module more pluggable. Documentation will be added in a follow-up PR.
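As a rough idea of what a more configurable LLM module can look like, model choice and feature toggles can be read from the environment instead of being hardcoded. The variable names below (`LLM_MODEL`, `LLM_AGENT_ENABLED`, etc.) and the defaults are illustrative assumptions, not the module's real settings.

```python
import os
from dataclasses import dataclass, field


def _env(name: str, default: str) -> str:
    """Read a setting from the environment, falling back to a default."""
    return os.getenv(name, default)


@dataclass(frozen=True)
class LLMSettings:
    """Hypothetical settings object: every knob has an env-var override."""

    model: str = field(default_factory=lambda: _env("LLM_MODEL", "gpt-4o-mini"))
    embedding_model: str = field(
        default_factory=lambda: _env("LLM_EMBEDDING_MODEL", "text-embedding-3-small")
    )
    # Opt-in agent mode, matching the optional-dependency approach in the PR.
    agent_enabled: bool = field(
        default_factory=lambda: _env("LLM_AGENT_ENABLED", "false").lower() == "true"
    )
```

Because the defaults live in `default_factory` lambdas, they are evaluated at instantiation time, so tests (or deployments) can set environment variables before constructing the settings object, or pass values explicitly to override both.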