Commit a78a16f

TEST: Avoid triggering merges in FlushIT
In testSyncedFlushSkipOutOfSyncReplicas, we reindex the extra documents to all shards, including the out-of-sync replica. However, reindexing to that replica can trigger merges (due to the new deletes), which cause the synced-flush to fail. This test started failing after we began aggressively merging segments with a large number of deletes in LUCENE-8263.
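The failure mode can be illustrated with a toy model (plain Java, not the Elasticsearch or Lucene API; all names here are hypothetical). Reindexing a document whose id already exists leaves a delete of the old copy behind, and a deletes-driven merge policy, as introduced in LUCENE-8263, may then rewrite segments on that copy only, so the shard copies no longer share a commit point for synced flush to stamp:

```java
import java.util.HashMap;
import java.util.Map;

public class MergeTriggerSketch {
    // Toy stand-in for one shard copy: tracks live docs, pending deletes,
    // and a "segment generation" that a merge would bump.
    public static final class ShardCopy {
        public final String name;
        public final Map<String, Integer> liveDocs = new HashMap<>();
        public int deletes = 0;
        public int segmentGeneration = 0;

        public ShardCopy(String name) { this.name = name; }

        public void index(String id) {
            if (liveDocs.containsKey(id)) {
                deletes++; // overwriting an existing id leaves a delete behind
            }
            liveDocs.merge(id, 1, Integer::sum);
            if (deletes >= 2) {       // toy deletes-driven merge policy
                segmentGeneration++;  // merge rewrites segments on this copy
                deletes = 0;
            }
        }
    }

    public static void main(String[] args) {
        ShardCopy primary = new ShardCopy("primary");
        ShardCopy outOfSyncReplica = new ShardCopy("replica");
        // the out-of-sync replica already holds the extra docs; the primary does not
        for (int i = 0; i < 3; i++) {
            outOfSyncReplica.index("extra_" + i);
        }
        int replicaGenBefore = outOfSyncReplica.segmentGeneration;
        // reindexing the same extras to ALL copies re-deletes them on the replica...
        for (int i = 0; i < 3; i++) {
            primary.index("extra_" + i);
            outOfSyncReplica.index("extra_" + i);
        }
        // ...which can trip a merge on the replica but not on the primary
        System.out.println(outOfSyncReplica.segmentGeneration > replicaGenBefore); // true
        System.out.println(primary.segmentGeneration == 0);                        // true
    }
}
```

Under this model, skipping the out-of-sync replica in the reindex loop (as the diff below does) avoids the overwrites, and therefore the merge, on that copy.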
1 parent 877c544

File tree

1 file changed (+5 −2 lines)
  • server/src/test/java/org/elasticsearch/indices/flush


server/src/test/java/org/elasticsearch/indices/flush/FlushIT.java

Lines changed: 5 additions & 2 deletions
@@ -274,8 +274,11 @@ public void testSyncedFlushSkipOutOfSyncReplicas() throws Exception {
                 "out of sync replica; num docs on replica [" + (numDocs + extraDocs) + "]; num docs on primary [" + numDocs + "]"));
         // Index extra documents to all shards - synced-flush should be ok.
         for (IndexShard indexShard : indexShards) {
-            for (int i = 0; i < extraDocs; i++) {
-                indexDoc(IndexShardTestCase.getEngine(indexShard), "extra_" + i);
+            // Do not reindex documents to the out of sync replica to avoid triggering merges
+            if (indexShard != outOfSyncReplica) {
+                for (int i = 0; i < extraDocs; i++) {
+                    indexDoc(IndexShardTestCase.getEngine(indexShard), "extra_" + i);
+                }
             }
         }
         final ShardsSyncedFlushResult fullResult = SyncedFlushUtil.attemptSyncedFlush(logger, internalCluster(), shardId);
