
Conversation

@original-brownbear
Contributor

We must wait for ongoing restores to complete before shutting down the repositories
service. Otherwise we may leak file descriptors, because the tasks that release the store
are submitted to the `SNAPSHOT` pool or one of the searchable snapshot pools, which quietly
accept but never reject/fail tasks after shutdown.

Same as #46178, where we had the same bug in recoveries.

Closes #75686
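To illustrate the failure mode the description refers to, here is a small self-contained Java sketch (hypothetical names, not Elasticsearch code): a pool configured to silently discard rejected work will accept a submission after shutdown without ever running it, so a release callback for a store never fires and its file descriptors stay open.

import java.io.Closeable;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

// Illustrative only: 'storeRef' stands in for whatever holds the file descriptors.
public class SilentPoolLeak {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(1);
        // Discard instead of reject: the "quietly accept but never reject/fail" behavior.
        pool.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardPolicy());

        Closeable storeRef = () -> System.out.println("store released");

        pool.shutdown(); // node shutdown: the pool stops running new work
        pool.execute(() -> { // the restore's store-release task arrives too late...
            try {
                storeRef.close();
            } catch (Exception e) {
                // ignored in this sketch
            }
        }); // ...DiscardPolicy drops it silently: no close() call, no error, leaked descriptor
    }
}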

@original-brownbear original-brownbear added >test Issues or PRs that are addressing/adding tests :Distributed Coordination/Snapshot/Restore Anything directly related to the `_snapshot/*` APIs v8.0.0 v7.14.1 v7.15.0 labels Aug 4, 2021
@elasticmachine elasticmachine added the Team:Distributed (Obsolete) Meta label for distributed team (obsolete). Replaced by Distributed Indexing/Coordination. label Aug 4, 2021
@elasticmachine
Collaborator

Pinging @elastic/es-distributed (Team:Distributed)

if (store.isClosing()) {
    throw new AlreadyClosedException("store is closing");
}
if (lifecycle.started() == false) {
@original-brownbear Contributor Author

Added this here as well, since we close the repositories service before the indices service and would otherwise have to wait for the restores to actually complete in some cases, I think.

@tlrx Member

I think so.
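A minimal sketch of the guard-and-drain pattern the hunk and comments above are discussing (hypothetical class and method names, not the actual Elasticsearch code): refuse new restores once the service is stopping, track the in-flight ones, and let shutdown block until the set drains.

import java.util.HashSet;
import java.util.Set;

// Illustrative stand-in for the tracking this PR adds around restores.
class RestoreTracker {
    private final Set<String> ongoingRestores = new HashSet<>();
    private boolean started = true;

    synchronized boolean tryStartRestore(String shardId) {
        if (started == false) {
            return false; // stopping: fail fast instead of handing work to a dying pool
        }
        final boolean added = ongoingRestores.add(shardId);
        assert added : "restore already tracked for shard [" + shardId + "]";
        return true;
    }

    synchronized void finishRestore(String shardId) {
        ongoingRestores.remove(shardId);
        notifyAll(); // wake a shutdown thread waiting in awaitRestoresDone()
    }

    synchronized void awaitRestoresDone() throws InterruptedException {
        started = false; // no new restores from here on
        while (ongoingRestores.isEmpty() == false) {
            wait(); // block until every in-flight restore calls finishRestore()
        }
    }
}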

@tlrx tlrx left a comment (Member)

LGTM

    return;
}
final boolean added = ongoingRestores.add(shardId);
assert added;
@tlrx Member

Maybe add some context?

@original-brownbear Contributor Author

++
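Concretely, the requested context is just an assertion message (illustrative, reusing the names from the hunk above), so that a tripped assertion identifies the shard:

final boolean added = ongoingRestores.add(shardId);
assert added : "shard [" + shardId + "] already has an ongoing restore";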


@original-brownbear
Contributor Author

Thanks Tanguy!

@original-brownbear original-brownbear merged commit f62618c into elastic:master Aug 4, 2021
@original-brownbear original-brownbear deleted the 75686 branch August 4, 2021 13:33
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Aug 4, 2021
…76070)
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Aug 4, 2021
…76070)
original-brownbear added a commit that referenced this pull request Aug 4, 2021
…76095)
original-brownbear added a commit that referenced this pull request Aug 4, 2021
…76092)
@original-brownbear original-brownbear restored the 75686 branch April 18, 2023 20:40
Development

Merging this pull request closed: [CI] FrozenSearchableSnapshotsIntegTests classMethod failing (#75686)