
Commit da9f55c

Execute ingest node pipeline before creating the index (#39607) (#39809)
Prior to this commit (and after 6.5.0), if an ingest node pipeline changed the _index of a document, the original target index would still be created. For daily indexes this could create an extra, empty index per day.

This commit changes the TransportBulkAction to execute the ingest node pipeline before attempting to create the index. This ensures that the only index created is either the original one or the one set by the ingest node pipeline. This was the execution order prior to 6.5.0 (#32786). The execution order was changed in 6.5 to better support default pipelines; specifically, it was changed so that the default pipeline could be read from the settings in the index meta data.

This commit also changes the logic so that, if the target index does not exist when the ingest node pipeline runs, the default pipeline (if one exists) is now pulled from the settings of the best-matching index template.

Relates #32786
Relates #32758
Closes #36545
1 parent 098d7dc commit da9f55c
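
The effect of the reordering can be sketched outside Elasticsearch. The snippet below is a minimal, self-contained illustration, not Elasticsearch code: Doc, runPipeline, and autoCreateIndex are hypothetical stand-ins. It shows why running the pipeline first means only the rewritten target index is ever auto-created.

// Minimal sketch, not Elasticsearch code: Doc, runPipeline and autoCreateIndex are
// hypothetical stand-ins used only to illustrate the reordering described above.
import java.util.function.UnaryOperator;

final class IngestBeforeCreateSketch {

    record Doc(String index, String source) {}

    // Stand-in for an ingest pipeline: a transformation applied to the document.
    static Doc runPipeline(Doc doc, UnaryOperator<Doc> pipeline) {
        return pipeline.apply(doc);
    }

    // Stand-in for index auto-creation in TransportBulkAction.
    static void autoCreateIndex(String index) {
        System.out.println("auto-creating index: " + index);
    }

    public static void main(String[] args) {
        Doc original = new Doc("test-2019-03-07", "{\"a\":true}");

        // A pipeline that rewrites _index, like the `set` processor in the new YAML test.
        UnaryOperator<Doc> retarget = d -> new Doc("foo", d.source());

        // New order (and the order prior to 6.5.0): run the pipeline first, then
        // auto-create only the index the document actually targets.
        Doc rewritten = runPipeline(original, retarget);
        autoCreateIndex(rewritten.index()); // creates "foo" only; "test-2019-03-07" is never created
    }
}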

File tree

5 files changed: +255 additions, −121 deletions


docs/reference/ingest/ingest-node.asciidoc

Lines changed: 0 additions & 66 deletions
@@ -271,28 +271,6 @@ POST test/_doc/1?pipeline=drop_guests_network
 // CONSOLE
 // TEST[continued]
 
-////
-Hidden example assertion:
-[source,js]
---------------------------------------------------
-GET test/_doc/1
---------------------------------------------------
-// CONSOLE
-// TEST[continued]
-// TEST[catch:missing]
-
-[source,js]
---------------------------------------------------
-{
-  "_index": "test",
-  "_type": "_doc",
-  "_id": "1",
-  "found": false
-}
---------------------------------------------------
-// TESTRESPONSE
-////
-
 Thanks to the `?.` operator the following document will not throw an error.
 If the pipeline used a `.` the following document would throw a NullPointerException
 since the `network` object is not part of the source document.
@@ -392,28 +370,6 @@ POST test/_doc/3?pipeline=drop_guests_network
 // CONSOLE
 // TEST[continued]
 
-////
-Hidden example assertion:
-[source,js]
---------------------------------------------------
-GET test/_doc/3
---------------------------------------------------
-// CONSOLE
-// TEST[continued]
-// TEST[catch:missing]
-
-[source,js]
---------------------------------------------------
-{
-  "_index": "test",
-  "_type": "_doc",
-  "_id": "3",
-  "found": false
-}
---------------------------------------------------
-// TESTRESPONSE
-////
-
 The `?.` operators works well for use in the `if` conditional
 because the {painless}/painless-operators-reference.html#null-safe-operator[null safe operator]
 returns null if the object is null and `==` is null safe (as well as many other
@@ -511,28 +467,6 @@ POST test/_doc/1?pipeline=not_prod_dropper
 The document is <<drop-processor,dropped>> since `prod` (case insensitive)
 is not found in the tags.
 
-////
-Hidden example assertion:
-[source,js]
---------------------------------------------------
-GET test/_doc/1
---------------------------------------------------
-// CONSOLE
-// TEST[continued]
-// TEST[catch:missing]
-
-[source,js]
---------------------------------------------------
-{
-  "_index": "test",
-  "_type": "_doc",
-  "_id": "1",
-  "found": false
-}
---------------------------------------------------
-// TESTRESPONSE
-////
-
 The following document is indexed (i.e. not dropped) since
 `prod` (case insensitive) is found in the tags.

modules/ingest-common/src/test/resources/rest-api-spec/test/ingest/220_drop_processor.yml

Lines changed: 1 addition & 1 deletion
@@ -97,4 +97,4 @@ teardown:
         index: test
         type: test
        id: 3
-  - match: { found: false }
+
Lines changed: 123 additions & 0 deletions
@@ -0,0 +1,123 @@
---
teardown:
- do:
    ingest.delete_pipeline:
      id: "retarget"
      ignore: 404

- do:
    indices.delete:
      index: foo

---
"Test Change Target Index with Explicit Pipeline":

- do:
    ingest.put_pipeline:
      id: "retarget"
      body: >
        {
          "processors": [
            {
              "set" : {
                "field" : "_index",
                "value" : "foo"
              }
            }
          ]
        }
- match: { acknowledged: true }

# no indices
- do:
    cat.indices: {}

- match:
    $body: |
      /^$/

- do:
    index:
      index: test
      type: _doc
      id: 1
      pipeline: "retarget"
      body: {
        a: true
      }

- do:
    get:
      index: foo
      type: _doc
      id: 1
- match: { _source.a: true }

# only the foo index
- do:
    cat.indices:
      h: i

- match:
    $body: |
      /^foo\n$/

---
"Test Change Target Index with Default Pipeline":

- do:
    indices.put_template:
      name: index_template
      body:
        index_patterns: test
        settings:
          default_pipeline: "retarget"

- do:
    ingest.put_pipeline:
      id: "retarget"
      body: >
        {
          "processors": [
            {
              "set" : {
                "field" : "_index",
                "value" : "foo"
              }
            }
          ]
        }
- match: { acknowledged: true }

# no indices
- do:
    cat.indices: {}

- match:
    $body: |
      /^$/

- do:
    index:
      index: test
      type: _doc
      id: 1
      body: {
        a: true
      }

- do:
    get:
      index: foo
      type: _doc
      id: 1
- match: { _source.a: true }

# only the foo index
- do:
    cat.indices:
      h: i

- match:
    $body: |
      /^foo\n$/

server/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java

Lines changed: 72 additions & 53 deletions
@@ -47,8 +47,10 @@
 import org.elasticsearch.cluster.metadata.AliasOrIndex;
 import org.elasticsearch.cluster.metadata.IndexMetaData;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
+import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;
 import org.elasticsearch.cluster.metadata.MappingMetaData;
 import org.elasticsearch.cluster.metadata.MetaData;
+import org.elasticsearch.cluster.metadata.MetaDataIndexTemplateService;
 import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.collect.ImmutableOpenMap;
 import org.elasticsearch.common.inject.Inject;
@@ -155,6 +157,72 @@ protected void doExecute(Task task, BulkRequest bulkRequest, ActionListener<Bulk
         final long startTime = relativeTime();
         final AtomicArray<BulkItemResponse> responses = new AtomicArray<>(bulkRequest.requests.size());
 
+        boolean hasIndexRequestsWithPipelines = false;
+        final MetaData metaData = clusterService.state().getMetaData();
+        ImmutableOpenMap<String, IndexMetaData> indicesMetaData = metaData.indices();
+        for (DocWriteRequest<?> actionRequest : bulkRequest.requests) {
+            IndexRequest indexRequest = getIndexWriteRequest(actionRequest);
+            if (indexRequest != null) {
+                // get the pipeline from the request
+                String pipeline = indexRequest.getPipeline();
+                if (pipeline == null) {
+                    // start to look for a default pipeline via the settings found in the index meta data
+                    IndexMetaData indexMetaData = indicesMetaData.get(actionRequest.index());
+                    if (indexMetaData == null && indexRequest.index() != null) {
+                        // if the write request is through an alias, use the write index's meta data
+                        AliasOrIndex indexOrAlias = metaData.getAliasAndIndexLookup().get(indexRequest.index());
+                        if (indexOrAlias != null && indexOrAlias.isAlias()) {
+                            AliasOrIndex.Alias alias = (AliasOrIndex.Alias) indexOrAlias;
+                            indexMetaData = alias.getWriteIndex();
+                        }
+                    }
+                    if (indexMetaData != null) {
+                        // find the default pipeline if one is defined on an existing index
+                        String defaultPipeline = IndexSettings.DEFAULT_PIPELINE.get(indexMetaData.getSettings());
+                        indexRequest.setPipeline(defaultPipeline);
+                        if (IngestService.NOOP_PIPELINE_NAME.equals(defaultPipeline) == false) {
+                            hasIndexRequestsWithPipelines = true;
+                        }
+                    } else if (indexRequest.index() != null) {
+                        // no index exists yet (and this is a valid request), so match index templates to look for a default pipeline
+                        List<IndexTemplateMetaData> templates = MetaDataIndexTemplateService.findTemplates(metaData, indexRequest.index());
+                        assert (templates != null);
+                        String defaultPipeline = IngestService.NOOP_PIPELINE_NAME;
+                        // templates are ordered highest order first, break if we find a default_pipeline
+                        for (IndexTemplateMetaData template : templates) {
+                            final Settings settings = template.settings();
+                            if (IndexSettings.DEFAULT_PIPELINE.exists(settings)) {
+                                defaultPipeline = IndexSettings.DEFAULT_PIPELINE.get(settings);
+                                break;
+                            }
+                        }
+                        indexRequest.setPipeline(defaultPipeline);
+                        if (IngestService.NOOP_PIPELINE_NAME.equals(defaultPipeline) == false) {
+                            hasIndexRequestsWithPipelines = true;
+                        }
+                    }
+                } else if (IngestService.NOOP_PIPELINE_NAME.equals(pipeline) == false) {
+                    hasIndexRequestsWithPipelines = true;
+                }
+            }
+        }
+
+        if (hasIndexRequestsWithPipelines) {
+            // this method (doExecute) will be called again, but with the bulk requests updated by the ingest node processing and
+            // with IngestService.NOOP_PIPELINE_NAME set on each request. This ensures that, the second time through this method,
+            // this path is never taken.
+            try {
+                if (clusterService.localNode().isIngestNode()) {
+                    processBulkIndexIngestRequest(task, bulkRequest, listener);
+                } else {
+                    ingestForwarder.forwardIngestRequest(BulkAction.INSTANCE, bulkRequest, listener);
+                }
+            } catch (Exception e) {
+                listener.onFailure(e);
+            }
+            return;
+        }
+
         if (needToCheck()) {
             // Attempt to create all the indices that we're going to need during the bulk before we start.
             // Step 1: collect all the indices in the request
@@ -185,15 +253,15 @@ protected void doExecute(Task task, BulkRequest bulkRequest, ActionListener<Bulk
             }
             // Step 3: create all the indices that are missing, if there are any missing. start the bulk after all the creates come back.
             if (autoCreateIndices.isEmpty()) {
-                executeIngestAndBulk(task, bulkRequest, startTime, listener, responses, indicesThatCannotBeCreated);
+                executeBulk(task, bulkRequest, startTime, listener, responses, indicesThatCannotBeCreated);
             } else {
                 final AtomicInteger counter = new AtomicInteger(autoCreateIndices.size());
                 for (String index : autoCreateIndices) {
                     createIndex(index, bulkRequest.timeout(), new ActionListener<CreateIndexResponse>() {
                         @Override
                         public void onResponse(CreateIndexResponse result) {
                             if (counter.decrementAndGet() == 0) {
-                                executeIngestAndBulk(task, bulkRequest, startTime, listener, responses, indicesThatCannotBeCreated);
+                                executeBulk(task, bulkRequest, startTime, listener, responses, indicesThatCannotBeCreated);
                             }
                         }
 
@@ -209,7 +277,7 @@ public void onFailure(Exception e) {
                             }
                         }
                         if (counter.decrementAndGet() == 0) {
-                            executeIngestAndBulk(task, bulkRequest, startTime, ActionListener.wrap(listener::onResponse, inner -> {
+                            executeBulk(task, bulkRequest, startTime, ActionListener.wrap(listener::onResponse, inner -> {
                                 inner.addSuppressed(e);
                                 listener.onFailure(inner);
                             }), responses, indicesThatCannotBeCreated);
@@ -219,56 +287,7 @@ public void onFailure(Exception e) {
                 }
             }
         } else {
-            executeIngestAndBulk(task, bulkRequest, startTime, listener, responses, emptyMap());
-        }
-    }
-
-    private void executeIngestAndBulk(Task task, final BulkRequest bulkRequest, final long startTimeNanos,
-                                      final ActionListener<BulkResponse> listener, final AtomicArray<BulkItemResponse> responses,
-                                      Map<String, IndexNotFoundException> indicesThatCannotBeCreated) {
-        boolean hasIndexRequestsWithPipelines = false;
-        final MetaData metaData = clusterService.state().getMetaData();
-        ImmutableOpenMap<String, IndexMetaData> indicesMetaData = metaData.indices();
-        for (DocWriteRequest<?> actionRequest : bulkRequest.requests) {
-            IndexRequest indexRequest = getIndexWriteRequest(actionRequest);
-            if(indexRequest != null){
-                String pipeline = indexRequest.getPipeline();
-                if (pipeline == null) {
-                    IndexMetaData indexMetaData = indicesMetaData.get(actionRequest.index());
-                    if (indexMetaData == null && indexRequest.index() != null) {
-                        //check the alias
-                        AliasOrIndex indexOrAlias = metaData.getAliasAndIndexLookup().get(indexRequest.index());
-                        if (indexOrAlias != null && indexOrAlias.isAlias()) {
-                            AliasOrIndex.Alias alias = (AliasOrIndex.Alias) indexOrAlias;
-                            indexMetaData = alias.getWriteIndex();
-                        }
-                    }
-                    if (indexMetaData == null) {
-                        indexRequest.setPipeline(IngestService.NOOP_PIPELINE_NAME);
-                    } else {
-                        String defaultPipeline = IndexSettings.DEFAULT_PIPELINE.get(indexMetaData.getSettings());
-                        indexRequest.setPipeline(defaultPipeline);
-                        if (IngestService.NOOP_PIPELINE_NAME.equals(defaultPipeline) == false) {
-                            hasIndexRequestsWithPipelines = true;
-                        }
-                    }
-                } else if (IngestService.NOOP_PIPELINE_NAME.equals(pipeline) == false) {
-                    hasIndexRequestsWithPipelines = true;
-                }
-            }
-        }
-        if (hasIndexRequestsWithPipelines) {
-            try {
-                if (clusterService.localNode().isIngestNode()) {
-                    processBulkIndexIngestRequest(task, bulkRequest, listener);
-                } else {
-                    ingestForwarder.forwardIngestRequest(BulkAction.INSTANCE, bulkRequest, listener);
-                }
-            } catch (Exception e) {
-                listener.onFailure(e);
-            }
-        } else {
-            executeBulk(task, bulkRequest, startTimeNanos, listener, responses, indicesThatCannotBeCreated);
+            executeBulk(task, bulkRequest, startTime, listener, responses, emptyMap());
         }
     }
 
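
The resolution order introduced in the doExecute hunk above can be read as: an explicit pipeline on the request wins; otherwise the index.default_pipeline setting of the existing target index (resolving write aliases to the write index); otherwise, if the index does not exist yet, the highest-order matching index template that defines one; otherwise the no-op pipeline. Below is a standalone sketch of that precedence, not Elasticsearch code: Index and Template are simplified hypothetical stand-ins for the cluster metadata classes, and "_none" stands in for IngestService.NOOP_PIPELINE_NAME.

// Minimal sketch, not Elasticsearch code: Index and Template are simplified stand-ins
// for IndexMetaData and IndexTemplateMetaData used only to illustrate the precedence.
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

final class DefaultPipelineResolutionSketch {

    static final String NOOP_PIPELINE = "_none"; // stand-in for IngestService.NOOP_PIPELINE_NAME

    record Index(Map<String, String> settings) {}
    record Template(int order, Map<String, String> settings) {}

    static String resolvePipeline(String requestPipeline, Optional<Index> existingIndex, List<Template> matchingTemplates) {
        if (requestPipeline != null) {
            return requestPipeline; // an explicit pipeline on the index request wins
        }
        if (existingIndex.isPresent()) {
            // existing index (or the write index behind an alias): read its settings
            return existingIndex.get().settings().getOrDefault("index.default_pipeline", NOOP_PIPELINE);
        }
        // no index yet: walk matching templates, highest order first, take the first default_pipeline found
        return matchingTemplates.stream()
            .sorted(Comparator.comparingInt(Template::order).reversed())
            .map(t -> t.settings().get("index.default_pipeline"))
            .filter(p -> p != null)
            .findFirst()
            .orElse(NOOP_PIPELINE);
    }

    public static void main(String[] args) {
        List<Template> templates = List.of(new Template(1, Map.of("index.default_pipeline", "retarget")));
        // Mirrors "Test Change Target Index with Default Pipeline": no index exists yet, the template supplies the pipeline.
        System.out.println(resolvePipeline(null, Optional.empty(), templates)); // prints "retarget"
    }
}

The template branch is only reached when the request names an index that does not yet exist, which is what allows a retargeting default pipeline to run before the originally requested index would otherwise be auto-created.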