Commit 34e95e5

[DOCS] Add supported token filters
Update normalizers.asciidoc with the list of supported token filters. Closes #28605

1 parent 6c7d12c

1 file changed: +6 -1 lines changed

docs/reference/analysis/normalizers.asciidoc

@@ -8,7 +8,12 @@ token. As a consequence, they do not have a tokenizer and only accept a subset
 of the available char filters and token filters. Only the filters that work on
 a per-character basis are allowed. For instance a lowercasing filter would be
 allowed, but not a stemming filter, which needs to look at the keyword as a
-whole.
+whole. The current list of filters that can be used in a normalizer is
+following: `arabic_normalization`, `asciifolding`, `bengali_normalization`,
+`cjk_width`, `decimal_digit`, `elision`, `german_normalization`,
+`hindi_normalization`, `indic_normalization`, `lowercase`,
+`persian_normalization`, `scandinavian_folding`, `serbian_normalization`,
+`sorani_normalization`, `uppercase`.
 
 [float]
 === Custom normalizers
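For context, a custom normalizer combining some of the per-character filters listed in the added text could be defined in the index settings like this (a minimal sketch in the style of the Elasticsearch index-settings API; the index name `my_index` and normalizer name `my_normalizer` are illustrative):

    PUT my_index
    {
      "settings": {
        "analysis": {
          "normalizer": {
            "my_normalizer": {
              "type": "custom",
              "char_filter": [],
              "filter": ["lowercase", "asciifolding"]
            }
          }
        }
      }
    }

A filter outside the supported list (for example a stemmer) would be rejected here, since normalizers only accept filters that operate character by character.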

0 commit comments