@@ -46,15 +46,13 @@ based on a similarity metric, the better its match.
 vector function

 In most cases, you'll want to use approximate kNN. Approximate kNN offers lower
-latency and better support for large datasets at the cost of slower indexing and
-reduced accuracy. However, you can configure this method for higher accuracy in
-exchange for slower searches.
+latency at the cost of slower indexing and imperfect accuracy.

 Exact, brute-force kNN guarantees accurate results but doesn't scale well with
-large, unfiltered datasets. With this approach, a `script_score` query must scan
-each matched document to compute the vector function, which can result in slow
-search speeds. However, you can improve latency by using the <<query-dsl,Query
-DSL>> to limit the number of matched documents passed to the function. If you
+large datasets. With this approach, a `script_score` query must scan each
+matching document to compute the vector function, which can result in slow
+search speeds. However, you can improve latency by using a <<query-dsl,query>>
+to limit the number of matching documents passed to the function. If you
 filter your data to a small subset of documents, you can get good search
 performance using this approach.

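For example, a sketch of this pattern might pair a `term` filter with a `script_score` query and the `cosineSimilarity` vector function; the index, field, and filter names here are illustrative assumptions, not taken from this page's examples:

[source,console]
----
GET my-exact-knn-index/_search
{
  "query": {
    "script_score": {
      "query": {
        "bool": {
          "filter": {
            "term": {
              "category": "clothing"
            }
          }
        }
      },
      "script": {
        "source": "cosineSimilarity(params.query_vector, 'my-vector') + 1.0",
        "params": {
          "query_vector": [0.12, -1.4, 2.3]
        }
      }
    }
  }
}
----

The `filter` clause narrows the set of documents the script has to score, and adding `1.0` keeps the scores non-negative.
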
@@ -78,8 +76,6 @@ score documents based on similarity between the query and document vector. For a
 list of available metrics, see the <<dense-vector-similarity,`similarity`>>
 parameter documentation.

-include::{es-repo-dir}/mapping/types/dense-vector.asciidoc[tag=dense-vector-indexing-speed]
-
 [source,console]
 ----
 PUT my-approx-knn-index
@@ -156,13 +152,30 @@ most similar results from each shard. The search then merges the results from
 each shard to return the global top `k` nearest neighbors.

 You can increase `num_candidates` for more accurate results at the cost of
-slower search speeds. A search with a high number of `num_candidates` considers
-more candidates from each shard. This takes more time, but the search has a
-higher probability of finding the true `k` top nearest neighbors.
+slower search speeds. A search with a high value for `num_candidates`
+considers more candidates from each shard. This takes more time, but the
+search has a higher probability of finding the true top `k` nearest neighbors.

 Similarly, you can decrease `num_candidates` for faster searches with
 potentially less accurate results.

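As an illustrative sketch, the tradeoff is set per request. This assumes the top-level `knn` search option and a `dense_vector` field named `my-vector`; the field name is an assumption, not taken from the example mapping on this page:

[source,console]
----
GET my-approx-knn-index/_search
{
  "knn": {
    "field": "my-vector",
    "query_vector": [0.12, -1.4, 2.3],
    "k": 10,
    "num_candidates": 100
  }
}
----

Raising `num_candidates` above `k` trades search speed for a better chance of finding the true nearest neighbors; lowering it does the opposite.
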
+[discrete]
+[[knn-indexing-considerations]]
+==== Indexing considerations
+
+{es} shards are composed of segments, which are internal storage elements in the
+index. For approximate kNN search, {es} stores the dense vector values of each
+segment as an https://arxiv.org/abs/1603.09320[HNSW graph]. Indexing vectors for
+approximate kNN search can take substantial time because it is expensive to
+build these graphs. You may need to increase the client request timeout for
+index and bulk requests.
+
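Vectors are ingested like any other field values. A minimal bulk sketch, again assuming an illustrative index with a three-dimensional `my-vector` field:

[source,console]
----
POST my-approx-knn-index/_bulk?refresh
{ "index": { "_id": "1" } }
{ "my-vector": [1.0, 5.0, -20.0] }
{ "index": { "_id": "2" } }
{ "my-vector": [42.0, 8.0, -15.0] }
----
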
+<<indices-forcemerge,Force merging>> the index to a single segment can improve
+kNN search latency. With only one segment, the search needs to check a single,
+all-inclusive HNSW graph. When there are multiple segments, kNN search must
+check several smaller HNSW graphs, searching each segment one after another.
+You should only force merge an index if it is no longer being written to.
+
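For example, a force merge down to one segment might look like this; the index name is illustrative:

[source,console]
----
POST my-approx-knn-index/_forcemerge?max_num_segments=1
----
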
 [discrete]
 [[approximate-knn-limitations]]
 ==== Limitations for approximate kNN search