Conversation

Contributor

@ywelsch ywelsch commented Dec 28, 2017

A shard is fully baked when it moves to POST_RECOVERY. There is no need for an extra refresh on shard activation, as the shard has already been refreshed when it moved to POST_RECOVERY (relates to #26055).
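
To make the change concrete, here is a minimal sketch of the activation path with the refresh dropped; the names below are illustrative only and do not match the actual IndexShard code:

// Illustrative sketch, not the real IndexShard implementation.
void onShardActivated(String reason) {
    synchronized (mutex) {
        if (state == IndexShardState.POST_RECOVERY) {
            // The engine was already refreshed when the shard moved to POST_RECOVERY,
            // so the previous getEngine().refresh("cluster_state_started") call here
            // is redundant and can simply be removed.
            changeState(IndexShardState.STARTED, reason);
        }
    }
}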

@ywelsch ywelsch added the :Internal, :Distributed Indexing/Recovery, v6.2.0, v6.3.0, v7.0.0 labels Dec 28, 2017
@colings86 colings86 added v6.3.0 and removed v6.2.0 labels Jan 22, 2018
@ywelsch ywelsch force-pushed the no-refresh-on-activation branch from e330a31 to 5e0376c on February 6, 2018 13:47
Contributor Author

ywelsch commented Feb 6, 2018

ping @bleskes @jasontedor

Contributor

@bleskes bleskes left a comment

LGTM. I did some archaeology to see when this was added (1.0) and I think we are now in a different universe (refresh is a replication action) and don't need it indeed.
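
For context on "refresh is a replication action": an explicit refresh request is executed through the replication layer and therefore reaches every active copy of a shard anyway. A minimal, hedged example of triggering one from the 6.x Java client (the index name and the already-constructed client are assumptions for illustration):

import org.elasticsearch.action.admin.indices.refresh.RefreshResponse;
import org.elasticsearch.client.Client;

class RefreshExample {
    // Ask the refresh API (a replication action) to make recent writes
    // searchable on all active shard copies of "my-index".
    static RefreshResponse refreshIndex(Client client) {
        return client.admin().indices().prepareRefresh("my-index").get();
    }
}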

try {
    getEngine().refresh("cluster_state_started");
} catch (Exception e) {
    logger.debug("failed to refresh due to move to cluster wide started", e);
}
Member

Oh, that error handling!

Member

@jasontedor jasontedor left a comment

LGTM.

@ywelsch ywelsch merged commit c8df446 into elastic:master Feb 6, 2018
ywelsch added a commit that referenced this pull request Feb 6, 2018
A shard is fully baked when it moves to POST_RECOVERY. There is no need to do an extra refresh on shard activation again as the shard has already been refreshed when it moved to POST_RECOVERY.
martijnvg added a commit that referenced this pull request Feb 7, 2018
* es/master:
  Added more parameter to PersistentTaskPlugin#getPersistentTasksExecutor(...)
  [Tests] Relax assertion in SuggestStatsIT (#28544)
  Make internal Rounding fields final (#28532)
  Fix the ability to remove old plugin
  [TEST] Expand failure message for wildfly integration tests
  Add 6.2.1 version constant
  Remove feature parsing for GetIndicesAction (#28535)
  No refresh on shard activation needed (#28013)
  Improve failure message when restoring an index that already exists in the cluster (#28498)
  Use right skip versions.
  [Docs] Fix incomplete URLs (#28528)
  Use non deprecated xcontenthelper (#28503)
  Painless: Fixes a null pointer exception in certain cases of for loop usage (#28506)
martijnvg added a commit that referenced this pull request Feb 7, 2018
* es/6.x:
  Added more parameter to PersistentTaskPlugin#getPersistentTasksExecutor(...)
  [Tests] Relax assertion in SuggestStatsIT (#28544)
  Make internal Rounding fields final (#28532)
  Skip verify versions for buggy cgroup2 handling
  Fix the ability to remove old plugin
  [TEST] Expand failure message for wildfly integration tests
  Add 6.2.1 version constant
  [DOCS] Adding 6.2 RNs
  [DOCS] Added entry for 6.2.0 RNs
  Remove feature parsing for GetIndicesAction (#28535)
  No refresh on shard activation needed (#28013)
  Improve failure message when restoring an index that already exists in the cluster (#28498)
  testIndexCausesIndexCreation should not use the `_primary` preference
  Use right skip versions.
  [Docs] Fix incomplete URLs (#28528)
  Use non deprecated xcontenthelper (#28503)
  Painless: Fixes a null pointer exception in certain cases of for loop usage (#28506)
@jimczi jimczi added v7.0.0-beta1 and removed v7.0.0 labels Feb 7, 2019

Labels

:Distributed Indexing/Recovery, >enhancement, v6.3.0, v7.0.0-beta1


6 participants