
Conversation

kyteinsky (Contributor)

No description provided.

@@ -9,6 +9,8 @@
> [!NOTE]
> Be mindful to install the backend before the Context Chat php app (the Context Chat php app sends all user-accessible files to the backend for indexing in the background. It is not an issue if a request to an uninitialised backend fails, since those files will be retried in the next background job run.)
>
> The CPU (or the virtual CPU) should support AVX2 instructions for the embedder/LLM to work.
>
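As a side note on the AVX2 requirement above, here is a minimal sketch (not part of this PR, and assuming a Linux host where `/proc/cpuinfo` is available) of how one could check whether the CPU advertises the AVX2 flag the embedder/LLM needs:

```python
# Minimal sketch: check whether the CPU advertises AVX2 on a Linux host.
# Assumption: /proc/cpuinfo lists CPU flags, as on typical Linux systems.
from pathlib import Path


def has_avx2(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if 'avx2' appears among the CPU flags in /proc/cpuinfo."""
    text = Path(cpuinfo_path).read_text()
    return "avx2" in text.split()


if __name__ == "__main__":
    print("AVX2 supported" if has_avx2() else "AVX2 NOT supported")
```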
Member


Needs to be added to the docs as well

kyteinsky (Contributor, Author) commented Feb 10, 2025

Looks like the batch size for CPU is hurting the performance on this particular CPU.
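For context, a minimal sketch of how such a batch-size comparison could be measured (the `embed` callable and the candidate batch sizes below are hypothetical placeholders, not anything from this repository):

```python
# Minimal sketch: compare embedding throughput across batch sizes on CPU.
# Assumption: `embed` is any function that embeds a list of texts in one call.
import time
from typing import Callable, Sequence


def benchmark_batch_sizes(embed: Callable[[Sequence[str]], object],
                          texts: Sequence[str],
                          batch_sizes: Sequence[int]) -> dict[int, float]:
    """Return texts-per-second achieved with each candidate batch size."""
    results: dict[int, float] = {}
    for bs in batch_sizes:
        start = time.perf_counter()
        for i in range(0, len(texts), bs):
            embed(texts[i:i + bs])
        elapsed = time.perf_counter() - start
        results[bs] = len(texts) / elapsed
    return results
```

This only measures raw throughput; the comment above suggests the default batch size was simply not a good fit for this CPU.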

kyteinsky marked this pull request as draft on February 10, 2025 11:09