From 483abe3f142c39bdaab11b430102af73cf429ba5 Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Tue, 26 Nov 2019 17:40:58 -0800 Subject: [PATCH 01/19] [DOCS] Move snapshot-restore docs out of modules. --- docs/reference/index.asciidoc | 2 + docs/reference/modules.asciidoc | 2 - docs/reference/modules/snapshots.asciidoc | 804 ------------------ docs/reference/redirects.asciidoc | 5 + .../monitor-snapshot-restore.asciidoc | 73 ++ .../restore-snapshot.asciidoc | 187 ++++ .../snapshots-register-repository.asciidoc | 285 +++++++ .../snapshots-take-snapshot.asciidoc | 194 +++++ .../snapshot-restore/snapshots.asciidoc | 87 ++ 9 files changed, 833 insertions(+), 806 deletions(-) delete mode 100644 docs/reference/modules/snapshots.asciidoc create mode 100644 docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc create mode 100644 docs/reference/snapshot-restore/restore-snapshot.asciidoc create mode 100644 docs/reference/snapshot-restore/snapshots-register-repository.asciidoc create mode 100644 docs/reference/snapshot-restore/snapshots-take-snapshot.asciidoc create mode 100644 docs/reference/snapshot-restore/snapshots.asciidoc diff --git a/docs/reference/index.asciidoc b/docs/reference/index.asciidoc index e283ac84ab753..4684b875ba2aa 100644 --- a/docs/reference/index.asciidoc +++ b/docs/reference/index.asciidoc @@ -50,6 +50,8 @@ include::data-rollup-transform.asciidoc[] include::high-availability.asciidoc[] +include::modules/snapshots.asciidoc[] + include::{xes-repo-dir}/security/index.asciidoc[] include::{xes-repo-dir}/watcher/index.asciidoc[] diff --git a/docs/reference/modules.asciidoc b/docs/reference/modules.asciidoc index 51f458c010e6d..23f7f7692142d 100644 --- a/docs/reference/modules.asciidoc +++ b/docs/reference/modules.asciidoc @@ -91,8 +91,6 @@ include::modules/node.asciidoc[] include::modules/plugins.asciidoc[] -include::modules/snapshots.asciidoc[] - include::modules/threadpool.asciidoc[] include::modules/transport.asciidoc[] diff --git 
a/docs/reference/modules/snapshots.asciidoc b/docs/reference/modules/snapshots.asciidoc deleted file mode 100644 index 666b8b4495fe5..0000000000000 --- a/docs/reference/modules/snapshots.asciidoc +++ /dev/null @@ -1,804 +0,0 @@ -[[modules-snapshots]] -== Snapshot And Restore - -// tag::snapshot-intro[] -A snapshot is a backup taken from a running Elasticsearch cluster. You can take -a snapshot of individual indices or of the entire cluster and store it in a -repository on a shared filesystem, and there are plugins that support remote -repositories on S3, HDFS, Azure, Google Cloud Storage and more. - -Snapshots are taken incrementally. This means that when it creates a snapshot of -an index, Elasticsearch avoids copying any data that is already stored in the -repository as part of an earlier snapshot of the same index. Therefore it can be -efficient to take snapshots of your cluster quite frequently. -// end::snapshot-intro[] - -// tag::restore-intro[] -You can restore snapshots into a running cluster via the -<>. When you restore an index, you can alter the -name of the restored index as well as some of its settings. There is a great -deal of flexibility in how the snapshot and restore functionality can be used. -// end::restore-intro[] - -You can automate your snapshot backup and restore process by using -<>. - -// tag::backup-warning[] -WARNING: You cannot back up an Elasticsearch cluster by simply taking a copy of -the data directories of all of its nodes. Elasticsearch may be making changes to -the contents of its data directories while it is running; copying its data -directories cannot be expected to capture a consistent picture of their contents. -If you try to restore a cluster from such a backup, it may fail and report -corruption and/or missing files. Alternatively, it may appear to have succeeded -though it silently lost some of its data. The only reliable way to back up a -cluster is by using the snapshot and restore functionality. 
- -// end::backup-warning[] - -[float] -=== Version compatibility - -IMPORTANT: Version compatibility refers to the underlying Lucene index -compatibility. Follow the <> -when migrating between versions. - -A snapshot contains a copy of the on-disk data structures that make up an -index. This means that snapshots can only be restored to versions of -Elasticsearch that can read the indices: - -* A snapshot of an index created in 6.x can be restored to 7.x. -* A snapshot of an index created in 5.x can be restored to 6.x. -* A snapshot of an index created in 2.x can be restored to 5.x. -* A snapshot of an index created in 1.x can be restored to 2.x. - -Conversely, snapshots of indices created in 1.x **cannot** be restored to 5.x -or 6.x, snapshots of indices created in 2.x **cannot** be restored to 6.x -or 7.x, and snapshots of indices created in 5.x **cannot** be restored to 7.x -or 8.x. - -Each snapshot can contain indices created in various versions of Elasticsearch, -and when restoring a snapshot it must be possible to restore all of the indices -into the target cluster. If any indices in a snapshot were created in an -incompatible version, you will not be able to restore the snapshot. - -IMPORTANT: When backing up your data prior to an upgrade, keep in mind that you -won't be able to restore snapshots after you upgrade if they contain indices -created in a version that's incompatible with the upgrade version. - -If you end up in a situation where you need to restore a snapshot of an index -that is incompatible with the version of the cluster you are currently running, -you can restore it on the latest compatible version and use -<> to rebuild the index on the current -version. Reindexing from remote is only possible if the original index has -source enabled. Retrieving and reindexing the data can take significantly -longer than simply restoring a snapshot.
If you have a large amount of data, we -recommend testing the reindex from remote process with a subset of your data to -understand the time requirements before proceeding. - -[float] -[[snapshots-repositories]] -=== Repositories - -You must register a snapshot repository before you can perform snapshot and -restore operations. We recommend creating a new snapshot repository for each -major version. The valid repository settings depend on the repository type. - -If you register the same snapshot repository with multiple clusters, only -one cluster should have write access to the repository. All other clusters -connected to that repository should set the repository to `readonly` mode. - -IMPORTANT: The snapshot format can change across major versions, so if you have -clusters on different versions trying to write to the same repository, snapshots -written by one version may not be visible to the other and the repository could -be corrupted. While setting the repository to `readonly` on all but one of the -clusters should work with multiple clusters differing by one major version, it -is not a supported configuration. - -[source,console] ------------------------------------ -PUT /_snapshot/my_backup -{ - "type": "fs", - "settings": { - "location": "my_backup_location" - } -} ------------------------------------ -// TESTSETUP - -To retrieve information about a registered repository, use a GET request: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup ------------------------------------ - -which returns: - -[source,console-result] ------------------------------------ -{ - "my_backup": { - "type": "fs", - "settings": { - "location": "my_backup_location" - } - } -} ------------------------------------ - -To retrieve information about multiple repositories, specify a comma-delimited -list of repositories. You can also use the * wildcard when -specifying repository names.
For example, the following request retrieves -information about all of the snapshot repositories that start with `repo` or -contain `backup`: - -[source,console] ------------------------------------ -GET /_snapshot/repo*,*backup* ------------------------------------ - -To retrieve information about all registered snapshot repositories, omit the -repository name or specify `_all`: - -[source,console] ------------------------------------ -GET /_snapshot ------------------------------------ - -or - -[source,console] ------------------------------------ -GET /_snapshot/_all ------------------------------------ - -[float] -===== Shared File System Repository - -The shared file system repository (`"type": "fs"`) uses the shared file system to store snapshots. In order to register -the shared file system repository it is necessary to mount the same shared filesystem to the same location on all -master and data nodes. This location (or one of its parent directories) must be registered in the `path.repo` -setting on all master and data nodes. 
- -Assuming that the shared filesystem is mounted to `/mount/backups/my_fs_backup_location`, the following setting should -be added to the `elasticsearch.yml` file: - -[source,yaml] --------------- -path.repo: ["/mount/backups", "/mount/longterm_backups"] --------------- - -The `path.repo` setting supports Microsoft Windows UNC paths as long as at least the server name and share are specified as -a prefix and backslashes are properly escaped: - -[source,yaml] --------------- -path.repo: ["\\\\MY_SERVER\\Snapshots"] --------------- - -After all nodes are restarted, the following command can be used to register the shared file system repository with -the name `my_fs_backup`: - -[source,console] ------------------------------------ -PUT /_snapshot/my_fs_backup -{ - "type": "fs", - "settings": { - "location": "/mount/backups/my_fs_backup_location", - "compress": true - } -} ------------------------------------ -// TEST[skip:no access to absolute path] - -If the repository location is specified as a relative path, this path will be resolved against the first path specified -in `path.repo`: - -[source,console] ------------------------------------ -PUT /_snapshot/my_fs_backup -{ - "type": "fs", - "settings": { - "location": "my_fs_backup_location", - "compress": true - } -} ------------------------------------ -// TEST[continued] - -The following settings are supported: - -[horizontal] -`location`:: Location of the snapshots. Mandatory. -`compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. Defaults to `true`. -`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. Specify the chunk size as a value and -unit, for example: `1GB`, `10MB`, `5KB`, `500B`. Defaults to `null` (unlimited chunk size). -`max_restore_bytes_per_sec`:: Throttles per node restore rate. Defaults to `40mb` per second.
-`max_snapshot_bytes_per_sec`:: Throttles per node snapshot rate. Defaults to `40mb` per second. -`readonly`:: Makes repository read-only. Defaults to `false`. - -[float] -===== Read-only URL Repository - -The URL repository (`"type": "url"`) can be used as an alternative read-only way to access data created by the shared file -system repository. The URL specified in the `url` parameter should point to the root of the shared filesystem repository. -The following settings are supported: - -[horizontal] -`url`:: Location of the snapshots. Mandatory. - -The URL repository supports the following protocols: "http", "https", "ftp", "file" and "jar". URL repositories with `http:`, -`https:`, and `ftp:` URLs have to be whitelisted by specifying allowed URLs in the `repositories.url.allowed_urls` setting. -This setting supports wildcards in the place of host, path, query, and fragment. For example: - -[source,yaml] ------------------------------------ -repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"] ------------------------------------ - -URL repositories with `file:` URLs can only point to locations registered in the `path.repo` setting, similar to the -shared file system repository. - -[float] -[role="xpack"] -[testenv="basic"] -===== Source Only Repository - -A source repository enables you to create minimal, source-only snapshots that take up to 50% less space on disk. -Source only snapshots contain stored fields and index metadata. They do not include index or doc values structures -and are not searchable when restored. After restoring a source-only snapshot, you must <> -the data into a new index. - -Source repositories delegate to another snapshot repository for storage. - - -[IMPORTANT] -================================================== - -Source only snapshots are only supported if the `_source` field is enabled and no source-filtering is applied.
-When you restore a source only snapshot: - - * The restored index is read-only and can only serve `match_all` search or scroll requests to enable reindexing. - - * Queries other than `match_all` and `_get` requests are not supported. - - * The mapping of the restored index is empty, but the original mapping is available from the types top - level `meta` element. - -================================================== - -When you create a source repository, you must specify the type and name of the delegate repository -where the snapshots will be stored: - -[source,console] ------------------------------------ -PUT _snapshot/my_src_only_repository -{ - "type": "source", - "settings": { - "delegate_type": "fs", - "location": "my_backup_location" - } -} ------------------------------------ -// TEST[continued] - -[float] -===== Repository plugins - -Other repository backends are available in these official plugins: - -* {plugins}/repository-s3.html[repository-s3] for S3 repository support -* {plugins}/repository-hdfs.html[repository-hdfs] for HDFS repository support in Hadoop environments -* {plugins}/repository-azure.html[repository-azure] for Azure storage repositories -* {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories - -[float] -===== Repository Verification -When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is functional -on all nodes currently present in the cluster. 
The `verify` parameter can be used to explicitly disable the repository -verification when registering or updating a repository: - -[source,console] ------------------------------------ -PUT /_snapshot/my_unverified_backup?verify=false -{ - "type": "fs", - "settings": { - "location": "my_unverified_backup_location" - } -} ------------------------------------ -// TEST[continued] - -The verification process can also be executed manually by running the following command: - -[source,console] ------------------------------------ -POST /_snapshot/my_unverified_backup/_verify ------------------------------------ -// TEST[continued] - -It returns a list of nodes where the repository was successfully verified or an error message if the verification process failed. - -[float] -===== Repository Cleanup -Repositories can over time accumulate data that is not referenced by any existing snapshot. This is a result of the data safety guarantees -the snapshot functionality provides in failure scenarios during snapshot creation and the decentralized nature of the snapshot creation -process. This unreferenced data does not in any way negatively impact the performance or safety of a snapshot repository, but leads to higher -than necessary storage use. In order to clean up this unreferenced data, users can call the cleanup endpoint for a repository, which will -trigger a complete accounting of the repository's contents and subsequent deletion of all unreferenced data that was found.
- -[source,console] ------------------------------------ -POST /_snapshot/my_repository/_cleanup ------------------------------------ -// TEST[continued] - -The response to a cleanup request looks as follows: - -[source,console-result] --------------------------------------------------- -{ - "results": { - "deleted_bytes": 20, - "deleted_blobs": 5 - } -} --------------------------------------------------- - -Depending on the concrete repository implementation, the numbers shown for bytes freed, as well as the number of blobs removed, will either -be an approximation or an exact result. Any non-zero value for the number of blobs removed implies that unreferenced blobs were found and -subsequently cleaned up. - -Please note that most of the cleanup operations executed by this endpoint are automatically executed when deleting any snapshot from a -repository. If you regularly delete snapshots, you will in most cases get no or only minor space savings from using this functionality -and should lower your frequency of invoking it accordingly. - -[float] -[[snapshots-take-snapshot]] -=== Snapshot - -A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the -cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following -command: - -[source,console] ------------------------------------ -PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true ------------------------------------ -// TEST[continued] - -The `wait_for_completion` parameter specifies whether or not the request should return immediately after snapshot -initialization (default) or wait for snapshot completion. During snapshot initialization, information about all -previous snapshots is loaded into memory, which means that in large repositories it may take several seconds (or -even minutes) for this command to return even if the `wait_for_completion` parameter is set to `false`.
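When `wait_for_completion` is left at its default of `false`, clients typically poll the snapshot info endpoint until the snapshot reaches a terminal state. A minimal, illustrative sketch of such a polling loop (the `get_state` callable is hypothetical and stands in for a `GET /_snapshot/<repository>/<snapshot>` call; the state names are the ones documented below):

```python
import time

# Terminal snapshot states, as listed later in this document.
TERMINAL_STATES = {"SUCCESS", "FAILED", "PARTIAL", "INCOMPATIBLE"}

def wait_for_snapshot(get_state, poll_interval=0.0):
    """Poll a state-returning callable until the snapshot reaches a terminal state."""
    while True:
        state = get_state()
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_interval)

# Canned sequence of states standing in for repeated snapshot info calls:
states = iter(["IN_PROGRESS", "IN_PROGRESS", "SUCCESS"])
print(wait_for_snapshot(lambda: next(states)))  # SUCCESS
```

In a real client the callable would issue the HTTP request and extract the `state` field; a non-trivial `poll_interval` avoids hammering the snapshot thread pool, which (as noted in the monitoring section) is shared with the snapshot operation itself.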
- -By default, a snapshot of all open and started indices in the cluster is created. This behavior can be changed by -specifying the list of indices in the body of the snapshot request. - -[source,console] ------------------------------------ -PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true -{ - "indices": "index_1,index_2", - "ignore_unavailable": true, - "include_global_state": false, - "metadata": { - "taken_by": "kimchy", - "taken_because": "backup before upgrading" - } -} ------------------------------------ -// TEST[continued] - -The list of indices that should be included into the snapshot can be specified using the `indices` parameter that -supports <>. The snapshot request also supports the -`ignore_unavailable` option. Setting it to `true` will cause indices that do not exist to be ignored during snapshot -creation. By default, when the `ignore_unavailable` option is not set and an index is missing, the snapshot request will fail. -By setting `include_global_state` to false it's possible to prevent the cluster global state from being stored as part of -the snapshot. By default, the entire snapshot will fail if one or more indices participating in the snapshot don't have -all primary shards available. This behavior can be changed by setting `partial` to `true`. - -The `metadata` field can be used to attach arbitrary metadata to the snapshot. This may be a record of who took the snapshot, -why it was taken, or any other data that might be useful. - -Snapshot names can be automatically derived using <>, similar to when creating -new indices. Note that special characters need to be URI encoded.
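The URI encoding can be sketched with Python's standard library; this reproduces the encoded date-math name used in the request below (illustrative only, not part of the Elasticsearch API):

```python
from urllib.parse import quote

# Date-math snapshot names such as <snapshot-{now/d}> must be URI encoded
# before they appear in the request path.
name = "<snapshot-{now/d}>"
encoded = quote(name, safe="")
print(encoded)  # %3Csnapshot-%7Bnow%2Fd%7D%3E
```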
- -For example, creating a snapshot with the current day in the name, like `snapshot-2018.05.11`, can be achieved with -the following command: - -[source,console] ------------------------------------ -# PUT /_snapshot/my_backup/ -PUT /_snapshot/my_backup/%3Csnapshot-%7Bnow%2Fd%7D%3E ------------------------------------ -// TEST[continued] - - -The index snapshot process is incremental. In the process of making the index snapshot Elasticsearch analyzes -the list of the index files that are already stored in the repository and copies only files that were created or -changed since the last snapshot. That allows multiple snapshots to be preserved in the repository in a compact form. -The snapshotting process is executed in a non-blocking fashion. All indexing and searching operations can continue to be -executed against the index that is being snapshotted. However, a snapshot represents the point-in-time view of the index -at the moment when the snapshot was created, so no records that were added to the index after the snapshot process was started -will be present in the snapshot. The snapshot process starts immediately for the primary shards that have been started -and are not relocating at the moment. Before version 1.2.0, the snapshot operation fails if the cluster has any relocating or -initializing primaries of indices participating in the snapshot. Starting with version 1.2.0, Elasticsearch waits for -relocation or initialization of shards to complete before snapshotting them. - -Besides creating a copy of each index, the snapshot process can also store global cluster metadata, which includes persistent -cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of -the snapshot. - -Only one snapshot process can be executed in the cluster at any time. While a snapshot of a particular shard is being -created, this shard cannot be moved to another node, which can interfere with the rebalancing process and allocation -filtering.
Elasticsearch will only be able to move a shard to another node (according to the current allocation -filtering settings and rebalancing algorithm) once the snapshot is finished. - -Once a snapshot is created, information about this snapshot can be obtained using the following command: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/snapshot_1 ------------------------------------ -// TEST[continued] - -This command returns basic information about the snapshot including start and end time, version of -Elasticsearch that created the snapshot, the list of included indices, the current state of the -snapshot and the list of failures that occurred during the snapshot. The snapshot `state` can be one of the following: - -[horizontal] -`IN_PROGRESS`:: - - The snapshot is currently running. - -`SUCCESS`:: - - The snapshot finished and all shards were stored successfully. - -`FAILED`:: - - The snapshot finished with an error and failed to store any data. - -`PARTIAL`:: - - The global cluster state was stored, but data of at least one shard wasn't stored successfully. - The `failure` section in this case should contain more detailed information about shards - that were not processed correctly. - -`INCOMPATIBLE`:: - - The snapshot was created with an old version of Elasticsearch and therefore is incompatible with - the current version of the cluster. - - -As with repositories, information about multiple snapshots can be queried in one go, supporting wildcards as well: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/snapshot_*,some_other_snapshot ------------------------------------ -// TEST[continued] - -All snapshots currently stored in the repository can be listed using the following command: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/_all ------------------------------------ -// TEST[continued]
The boolean parameter `ignore_unavailable` can be used to -return all snapshots that are currently available. - -Getting all snapshots in the repository can be costly on cloud-based repositories, -both from a cost and performance perspective. If the only information required is -the snapshot names/uuids in the repository and the indices in each snapshot, then -the optional boolean parameter `verbose` can be set to `false` to execute a more -performant and cost-effective retrieval of the snapshots in the repository. Note -that setting `verbose` to `false` will omit all other information about the snapshot -such as status information, the number of snapshotted shards, etc. The default -value of the `verbose` parameter is `true`. - -It is also possible to retrieve snapshots from multiple repositories in one go, for example: - -[source,console] ------------------------------------ -GET /_snapshot/_all -GET /_snapshot/my_backup,my_fs_backup -GET /_snapshot/my*/snap* ------------------------------------ -// TEST[skip:no my_fs_backup] - -A currently running snapshot can be retrieved using the following command: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/_current ------------------------------------ -// TEST[continued] - -A snapshot can be deleted from the repository using the following command: - -[source,console] ------------------------------------ -DELETE /_snapshot/my_backup/snapshot_2 ------------------------------------ -// TEST[continued] - -When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted -snapshot and not used by any other snapshots. If the deleted snapshot operation is executed while the snapshot is being -created the snapshotting process will be aborted and all files created as part of the snapshotting process will be -cleaned. Therefore, the delete snapshot operation can be used to cancel long running snapshot operations that were -started by mistake. 
- -A repository can be unregistered using the following command: - -[source,console] ------------------------------------ -DELETE /_snapshot/my_backup ------------------------------------ -// TEST[continued] - -When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing -the snapshots. The snapshots themselves are left untouched and in place. - -[float] -[[restore-snapshot]] -=== Restore - -A snapshot can be restored using the following command: - -[source,console] ------------------------------------ -POST /_snapshot/my_backup/snapshot_1/_restore ------------------------------------ -// TEST[continued] - -By default, all indices in the snapshot are restored, and the cluster state is -*not* restored. It's possible to select indices that should be restored as well -as to allow the global cluster state to be restored by using the `indices` and -`include_global_state` options in the restore request body. The list of indices -supports <>. The `rename_pattern` -and `rename_replacement` options can also be used to rename indices on restore -using a regular expression that supports referencing the original text as -explained -http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here]. -Set `include_aliases` to `false` to prevent aliases from being restored together -with associated indices. - -[source,console] ------------------------------------ -POST /_snapshot/my_backup/snapshot_1/_restore -{ - "indices": "index_1,index_2", - "ignore_unavailable": true, - "include_global_state": true, - "rename_pattern": "index_(.+)", - "rename_replacement": "restored_index_$1" -} ------------------------------------ -// TEST[continued] - -The restore operation can be performed on a functioning cluster. However, an -existing index can only be restored if it's <> and -has the same number of shards as the index in the snapshot.
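The `rename_pattern`/`rename_replacement` pair above uses Java regex replacement syntax (`$1` for group references). A rough Python equivalent of the renaming applied by that request, offered only as an illustration (Python spells the group reference `\1`):

```python
import re

# Mirrors rename_pattern "index_(.+)" / rename_replacement
# "restored_index_$1" from the restore request above.
def renamed(index_name, pattern="index_(.+)", replacement=r"restored_index_\1"):
    return re.sub(pattern, replacement, index_name)

print(renamed("index_1"))  # restored_index_1
print(renamed("index_2"))  # restored_index_2
```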
The restore -operation automatically opens restored indices if they were closed and creates -new indices if they didn't exist in the cluster. If cluster state is restored -with `include_global_state` (defaults to `false`), the restored templates that -don't currently exist in the cluster are added and existing templates with the -same name are replaced by the restored templates. The restored persistent -settings are added to the existing persistent settings. - -[float] -==== Partial restore - -By default, the entire restore operation will fail if one or more indices participating in the operation don't have -snapshots of all shards available. This can occur, for example, if some shards failed to snapshot. It is still possible to -restore such indices by setting `partial` to `true`. Please note that only successfully snapshotted shards will be -restored in this case and all missing shards will be recreated empty. - - -[float] -==== Changing index settings during restore - -Most index settings can be overridden during the restore process. For example, the following command will restore -the index `index_1` without creating any replicas while switching back to the default refresh interval: - -[source,console] ------------------------------------ -POST /_snapshot/my_backup/snapshot_1/_restore -{ - "indices": "index_1", - "index_settings": { - "index.number_of_replicas": 0 - }, - "ignore_index_settings": [ - "index.refresh_interval" - ] -} ------------------------------------ -// TEST[continued] - -Please note that some settings such as `index.number_of_shards` cannot be changed during the restore operation. - -[float] -==== Restoring to a different cluster - -The information stored in a snapshot is not tied to a particular cluster or a cluster name. Therefore, it's possible to -restore a snapshot made from one cluster into another cluster. All that is required is registering the repository -containing the snapshot in the new cluster and starting the restore process.
The new cluster doesn't have to have the -same size or topology. However, the version of the new cluster should be the same or newer (only 1 major version newer) than the cluster that was used to create the snapshot. For example, you can restore a 1.x snapshot to a 2.x cluster, but not a 1.x snapshot to a 5.x cluster. - -If the new cluster has a smaller size, additional considerations should be made. First of all, it's necessary to make sure -that the new cluster has enough capacity to store all indices in the snapshot. It's possible to change indices settings -during restore to reduce the number of replicas, which can help with restoring snapshots into a smaller cluster. It's also -possible to select only a subset of the indices using the `indices` parameter. - -If indices in the original cluster were assigned to particular nodes using -<>, the same rules will be enforced in the new cluster. Therefore, -if the new cluster doesn't contain nodes with appropriate attributes that a restored index can be allocated on, such -an index will not be successfully restored unless these index allocation settings are changed during the restore operation. - -The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid accidentally -restoring incompatible settings. If you need to restore a snapshot with incompatible persistent settings, try restoring it without -the global cluster state. - -[float] -=== Snapshot status - -A list of currently running snapshots with their detailed status information can be obtained using the following command: - -[source,console] ------------------------------------ -GET /_snapshot/_status ------------------------------------ -// TEST[continued] - -In this format, the command will return information about all currently running snapshots.
By specifying a repository name, it's possible -to limit the results to a particular repository: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/_status ------------------------------------ -// TEST[continued] - -If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even -if it's not currently running: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/snapshot_1/_status ------------------------------------ -// TEST[continued] - -The output looks similar to the following: - -[source,console-result] --------------------------------------------------- -{ - "snapshots": [ - { - "snapshot": "snapshot_1", - "repository": "my_backup", - "uuid": "XuBo4l4ISYiVg0nYUen9zg", - "state": "SUCCESS", - "include_global_state": true, - "shards_stats": { - "initializing": 0, - "started": 0, - "finalizing": 0, - "done": 5, - "failed": 0, - "total": 5 - }, - "stats": { - "incremental": { - "file_count": 8, - "size_in_bytes": 4704 - }, - "processed": { - "file_count": 7, - "size_in_bytes": 4254 - }, - "total": { - "file_count": 8, - "size_in_bytes": 4704 - }, - "start_time_in_millis": 1526280280355, - "time_in_millis": 358 - } - } - ] -} --------------------------------------------------- - -The output is composed of different sections. The `stats` sub-object provides details on the number and size of files that were -snapshotted. As snapshots are incremental, copying only the Lucene segments that are not already in the repository, -the `stats` object contains a `total` section for all the files that are referenced by the snapshot, as well as an `incremental` section -for those files that actually needed to be copied over as part of the incremental snapshotting. In case of a snapshot that's still -in progress, there's also a `processed` section that contains information about the files that are in the process of being copied. 
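The incremental accounting can be read directly off the `stats` object from the example response above. A small, illustrative sketch (the numbers are copied from that response; the percentage calculation is this sketch's own):

```python
# stats sub-object copied from the example status response above.
stats = {
    "incremental": {"file_count": 8, "size_in_bytes": 4704},
    "processed": {"file_count": 7, "size_in_bytes": 4254},
    "total": {"file_count": 8, "size_in_bytes": 4704},
}

# Fraction of the snapshot's referenced bytes that actually had to be copied:
copied_fraction = stats["incremental"]["size_in_bytes"] / stats["total"]["size_in_bytes"]
print(f"{copied_fraction:.0%}")  # 100% -- a first snapshot copies everything
```

For a later snapshot of the same index, `incremental` would typically be much smaller than `total`, since unchanged Lucene segments already in the repository are not copied again.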
- -Multiple ids are also supported: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status ------------------------------------ -// TEST[continued] - -[float] -[[monitor-snapshot-restore-progress]] -=== Monitoring snapshot/restore progress - -There are several ways to monitor the progress of the snapshot and restores processes while they are running. Both -operations support `wait_for_completion` parameter that would block client until the operation is completed. This is -the simplest method that can be used to get notified about operation completion. - -The snapshot operation can be also monitored by periodic calls to the snapshot info: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/snapshot_1 ------------------------------------ -// TEST[continued] - -Please note that snapshot info operation uses the same resources and thread pool as the snapshot operation. So, -executing a snapshot info operation while large shards are being snapshotted can cause the snapshot info operation to wait -for available resources before returning the result. On very large shards the wait time can be significant. - -To get more immediate and complete information about snapshots the snapshot status command can be used instead: - -[source,console] ------------------------------------ -GET /_snapshot/my_backup/snapshot_1/_status ------------------------------------ -// TEST[continued] - -While snapshot info method returns only basic information about the snapshot in progress, the snapshot status returns -complete breakdown of the current state for each shard participating in the snapshot. - -The restore process piggybacks on the standard recovery mechanism of the Elasticsearch. As a result, standard recovery -monitoring services can be used to monitor the state of restore. When restore operation is executed the cluster -typically goes into `red` state. 
It happens because the restore operation starts with "recovering" primary shards of the -restored indices. During this operation the primary shards become unavailable which manifests itself in the `red` cluster -state. Once recovery of primary shards is completed Elasticsearch is switching to standard replication process that -creates the required number of replicas at this moment cluster switches to the `yellow` state. Once all required replicas -are created, the cluster switches to the `green` states. - -The cluster health operation provides only a high level status of the restore process. It's possible to get more -detailed insight into the current state of the recovery process by using <> and -<> APIs. - -[float] -=== Stopping currently running snapshot and restore operations - -The snapshot and restore framework allows running only one snapshot or one restore operation at a time. If a currently -running snapshot was executed by mistake, or takes unusually long, it can be terminated using the snapshot delete operation. -The snapshot delete operation checks if the deleted snapshot is currently running and if it does, the delete operation stops -that snapshot before deleting the snapshot data from the repository. - -[source,console] ------------------------------------ -DELETE /_snapshot/my_backup/snapshot_1 ------------------------------------ -// TEST[continued] - -The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can -be canceled by deleting indices that are being restored. Please note that data for all deleted indices will be removed -from the cluster as a result of this operation. - -[float] -=== Effect of cluster blocks on snapshot and restore operations -Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering -repositories require write global metadata access. 
The snapshot operation requires that all indices and their metadata as -well as the global metadata were readable. The restore operation requires the global metadata to be writable, however -the index level blocks are ignored during restore because indices are essentially recreated during restore. -Please note that a repository content is not part of the cluster and therefore cluster blocks don't affect internal -repository operations such as listing or deleting snapshots from an already registered repository. diff --git a/docs/reference/redirects.asciidoc b/docs/reference/redirects.asciidoc index 8580705623146..050b273e85ad3 100644 --- a/docs/reference/redirects.asciidoc +++ b/docs/reference/redirects.asciidoc @@ -1112,3 +1112,8 @@ See <>, <>, and [[ml-results-overall-buckets]] <>. + +[role="exclude",id="modules-snapshots"] +=== Snapshot module + +See <>. diff --git a/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc b/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc new file mode 100644 index 0000000000000..b97193bf6fe12 --- /dev/null +++ b/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc @@ -0,0 +1,73 @@ +[[snapshots-monitor-snapshot-restore]] +== Monitor snapshot and restore progress + +++++ +Monitor snapshot and restore +++++ + +There are several ways to monitor the progress of the snapshot and restores processes while they are running. Both +operations support `wait_for_completion` parameter that would block client until the operation is completed. This is +the simplest method that can be used to get notified about operation completion. + +The snapshot operation can be also monitored by periodic calls to the snapshot info: + +[source,console] +----------------------------------- +GET /_snapshot/my_backup/snapshot_1 +----------------------------------- +// TEST[continued] + +Please note that snapshot info operation uses the same resources and thread pool as the snapshot operation. 
+Executing a snapshot info operation while large shards are being snapshotted can therefore cause the info operation to
+wait for available resources before returning the result. On very large shards the wait time can be significant.
+
+To get more immediate and complete information about snapshots, the snapshot status command can be used instead:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1/_status
+-----------------------------------
+// TEST[continued]
+
+While the snapshot info method returns only basic information about the snapshot in progress, the snapshot status
+command returns a complete breakdown of the current state of each shard participating in the snapshot.
+
+The restore process piggybacks on the standard recovery mechanism of Elasticsearch. As a result, standard recovery
+monitoring services can be used to monitor the state of a restore. When a restore operation is executed, the cluster
+typically goes into the `red` state. This happens because the restore operation starts by recovering the primary
+shards of the restored indices. During this operation the primary shards become unavailable, which manifests itself in
+the `red` cluster state. Once recovery of the primary shards is completed, Elasticsearch switches to the standard
+replication process and creates the required number of replicas; at this point the cluster switches to the `yellow`
+state. Once all required replicas are created, the cluster switches to the `green` state.
+
+The cluster health operation provides only a high-level status of the restore process. It's possible to get more
+detailed insight into the current state of the recovery process by using the <> and
+<> APIs.
+
+[float]
+=== Stop snapshot and restore operations
+
+The snapshot and restore framework allows running only one snapshot or one restore operation at a time.
+If a currently running snapshot was started by mistake, or is taking unusually long, it can be terminated using the
+snapshot delete operation. The snapshot delete operation checks whether the deleted snapshot is currently running and,
+if it is, stops that snapshot before deleting the snapshot data from the repository.
+
+[source,console]
+-----------------------------------
+DELETE /_snapshot/my_backup/snapshot_1
+-----------------------------------
+// TEST[continued]
+
+The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can
+be canceled by deleting the indices that are being restored. Please note that data for all deleted indices will be
+removed from the cluster as a result of this operation.
+
+[float]
+=== Effect of cluster blocks on snapshot and restore
+
+Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and
+unregistering repositories requires write access to the global metadata. The snapshot operation requires that all
+indices, their metadata, and the global metadata are readable. The restore operation requires the global metadata to be
+writable; however, index-level blocks are ignored during restore because indices are essentially recreated.
+Please note that repository contents are not part of the cluster state, so cluster blocks don't affect internal
+repository operations such as listing or deleting snapshots from an already registered repository.
\ No newline at end of file
diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
new file mode 100644
index 0000000000000..343511083f3f0
--- /dev/null
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -0,0 +1,187 @@
+[[snapshots-restore-snapshot]]
+== Restore indices from a snapshot
+
+++++
+Restore a snapshot
+++++
+
+A snapshot can be restored using the following command:
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_backup/snapshot_1/_restore
+-----------------------------------
+// TEST[continued]
+
+By default, all indices in the snapshot are restored, and the cluster state is
+*not* restored. It's possible to select which indices should be restored, and to
+restore the global cluster state, by using the `indices` and
+`include_global_state` options in the restore request body. The list of indices
+supports <>. The `rename_pattern`
+and `rename_replacement` options can also be used to rename indices on restore
+using a regular expression that supports referencing the original text, as
+explained
+http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
+Set `include_aliases` to `false` to prevent aliases from being restored together
+with their associated indices.
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_backup/snapshot_1/_restore
+{
+  "indices": "index_1,index_2",
+  "ignore_unavailable": true,
+  "include_global_state": true,
+  "rename_pattern": "index_(.+)",
+  "rename_replacement": "restored_index_$1"
+}
+-----------------------------------
+// TEST[continued]
+
+The restore operation can be performed on a functioning cluster. However, an
+existing index can only be restored if it's <> and
+has the same number of shards as the index in the snapshot.
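The `rename_pattern`/`rename_replacement` behavior described above can be sketched with Python's `re` module. This is purely illustrative: Elasticsearch uses Java-style `$1` backreferences (per `Matcher.appendReplacement`), which the helper below translates into Python's `\1` form; the helper name is hypothetical.

```python
import re

def rename_on_restore(index_name, rename_pattern, rename_replacement):
    # Translate Java-style "$1", "$2", ... backreferences into Python's "\1", "\2", ...
    python_replacement = re.sub(r"\$(\d+)", r"\\\1", rename_replacement)
    return re.sub(rename_pattern, python_replacement, index_name)

# Mirrors the example request body above:
print(rename_on_restore("index_1", "index_(.+)", "restored_index_$1"))
print(rename_on_restore("index_2", "index_(.+)", "restored_index_$1"))
```

Running it shows how each matched index name is mapped to its renamed counterpart before restore.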
+The restore operation automatically opens restored indices if they were closed and creates
+new indices if they didn't exist in the cluster. If the cluster state is restored
+with `include_global_state` (defaults to `false`), the restored templates that
+don't currently exist in the cluster are added and existing templates with the
+same name are replaced by the restored templates. The restored persistent
+settings are added to the existing persistent settings.
+
+[float]
+=== Partial restore
+
+By default, the entire restore operation will fail if one or more indices participating in the operation don't have
+snapshots of all shards available. This can occur, for example, if some shards failed to snapshot. It is still possible
+to restore such indices by setting `partial` to `true`. Please note that only successfully snapshotted shards will be
+restored in this case and all missing shards will be recreated empty.
+
+
+[float]
+=== Changing index settings during restore
+
+Most index settings can be overridden during the restore process. For example, the following command will restore
+the index `index_1` without creating any replicas while switching back to the default refresh interval:
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_backup/snapshot_1/_restore
+{
+  "indices": "index_1",
+  "index_settings": {
+    "index.number_of_replicas": 0
+  },
+  "ignore_index_settings": [
+    "index.refresh_interval"
+  ]
+}
+-----------------------------------
+// TEST[continued]
+
+Please note that some settings, such as `index.number_of_shards`, cannot be changed during the restore operation.
+
+[float]
+=== Restoring to a different cluster
+
+The information stored in a snapshot is not tied to a particular cluster or a cluster name. Therefore it's possible to
+restore a snapshot made from one cluster into another cluster. All that is required is registering the repository
+containing the snapshot in the new cluster and starting the restore process.
+The new cluster doesn't have to have the same size or topology. However, the version of the new cluster should be the
+same as or newer (by only one major version) than the cluster that was used to create the snapshot. For example, you
+can restore a 1.x snapshot to a 2.x cluster, but not a 1.x snapshot to a 5.x cluster.
+
+If the new cluster is smaller, additional considerations apply. First of all, it's necessary to make sure that the new
+cluster has enough capacity to store all the indices in the snapshot. It's possible to change index settings during
+restore to reduce the number of replicas, which can help with restoring snapshots into a smaller cluster. It's also
+possible to select only a subset of the indices using the `indices` parameter.
+
+If indices in the original cluster were assigned to particular nodes using
+<>, the same rules will be enforced in the new cluster. Therefore,
+if the new cluster doesn't contain nodes with the appropriate attributes that a restored index can be allocated on, the
+index will not be successfully restored unless these index allocation settings are changed during the restore
+operation.
+
+The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid
+accidentally restoring incompatible settings. If you need to restore a snapshot with incompatible persistent settings,
+try restoring it without the global cluster state.
+
+[float]
+=== Snapshot status
+
+A list of currently running snapshots with their detailed status information can be obtained using the following
+command:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/_status
+-----------------------------------
+// TEST[continued]
+
+In this format, the command will return information about all currently running snapshots.
By specifying a repository name, it's possible +to limit the results to a particular repository: + +[source,console] +----------------------------------- +GET /_snapshot/my_backup/_status +----------------------------------- +// TEST[continued] + +If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even +if it's not currently running: + +[source,console] +----------------------------------- +GET /_snapshot/my_backup/snapshot_1/_status +----------------------------------- +// TEST[continued] + +The output looks similar to the following: + +[source,console-result] +-------------------------------------------------- +{ + "snapshots": [ + { + "snapshot": "snapshot_1", + "repository": "my_backup", + "uuid": "XuBo4l4ISYiVg0nYUen9zg", + "state": "SUCCESS", + "include_global_state": true, + "shards_stats": { + "initializing": 0, + "started": 0, + "finalizing": 0, + "done": 5, + "failed": 0, + "total": 5 + }, + "stats": { + "incremental": { + "file_count": 8, + "size_in_bytes": 4704 + }, + "processed": { + "file_count": 7, + "size_in_bytes": 4254 + }, + "total": { + "file_count": 8, + "size_in_bytes": 4704 + }, + "start_time_in_millis": 1526280280355, + "time_in_millis": 358 + } + } + ] +} +-------------------------------------------------- + +The output is composed of different sections. The `stats` sub-object provides details on the number and size of files that were +snapshotted. As snapshots are incremental, copying only the Lucene segments that are not already in the repository, +the `stats` object contains a `total` section for all the files that are referenced by the snapshot, as well as an `incremental` section +for those files that actually needed to be copied over as part of the incremental snapshotting. In case of a snapshot that's still +in progress, there's also a `processed` section that contains information about the files that are in the process of being copied. 
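The `shards_stats` and `stats` sections in the response above can be summarized client-side. The following is a minimal Python sketch (not an Elasticsearch API; the `summarize` helper is hypothetical) that assumes the JSON response has already been parsed into a dict:

```python
def summarize(status):
    # Condense one snapshot's _status entry into a compact progress summary.
    snap = status["snapshots"][0]
    shards = snap["shards_stats"]
    stats = snap["stats"]
    return {
        "state": snap["state"],
        "shards_done": f'{shards["done"]}/{shards["total"]}',
        # "incremental" vs. "total" shows how much actually had to be copied.
        "copied_bytes": stats["incremental"]["size_in_bytes"],
        "referenced_bytes": stats["total"]["size_in_bytes"],
    }

# Trimmed-down version of the example response above:
status = {"snapshots": [{"snapshot": "snapshot_1", "state": "SUCCESS",
                         "shards_stats": {"done": 5, "failed": 0, "total": 5},
                         "stats": {"incremental": {"file_count": 8, "size_in_bytes": 4704},
                                   "total": {"file_count": 8, "size_in_bytes": 4704}}}]}
print(summarize(status))
```

Here `copied_bytes` equals `referenced_bytes` because this snapshot had to copy every referenced file; later incremental snapshots would typically show a smaller `incremental` section.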
+
+Multiple ids are also supported:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
+-----------------------------------
+// TEST[continued]
diff --git a/docs/reference/snapshot-restore/snapshots-register-repository.asciidoc b/docs/reference/snapshot-restore/snapshots-register-repository.asciidoc
new file mode 100644
index 0000000000000..0ceb87ffd0d35
--- /dev/null
+++ b/docs/reference/snapshot-restore/snapshots-register-repository.asciidoc
@@ -0,0 +1,285 @@
+[[snapshots-register-repository]]
+== Register a snapshot repository
+
+++++
+Register repository
+++++
+
+You must register a snapshot repository before you can perform snapshot and
+restore operations. We recommend creating a new snapshot repository for each
+major version. The valid repository settings depend on the repository type.
+
+If you register the same snapshot repository with multiple clusters, only
+one cluster should have write access to the repository. All other clusters
+connected to that repository should set the repository to `readonly` mode.
+
+IMPORTANT: The snapshot format can change across major versions, so if you have
+clusters on different versions trying to write to the same repository, snapshots
+written by one version may not be visible to the other and the repository could
+be corrupted. While setting the repository to `readonly` on all but one of the
+clusters should work with multiple clusters differing by one major version, it
+is not a supported configuration.
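The one-writer rule above can be made concrete with a small sketch. This is illustrative tooling, not an official API: the `repository_bodies` helper and the cluster names are hypothetical; it builds the registration request body for each cluster so that exactly one cluster gets write access and every other cluster registers the same repository as `readonly`.

```python
def repository_bodies(clusters, writer):
    # Build a PUT /_snapshot/<repo> body per cluster; only `writer` may write.
    if writer not in clusters:
        raise ValueError(f"unknown writer cluster: {writer}")
    bodies = {}
    for name in clusters:
        settings = {"location": "my_backup_location"}
        if name != writer:
            settings["readonly"] = True  # every non-writer cluster is read-only
        bodies[name] = {"type": "fs", "settings": settings}
    return bodies

bodies = repository_bodies(["prod", "dr", "analytics"], writer="prod")
print(bodies["dr"])
```

Each body would then be sent to the corresponding cluster's `PUT /_snapshot/my_backup` endpoint.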
+ +[source,console] +----------------------------------- +PUT /_snapshot/my_backup +{ + "type": "fs", + "settings": { + "location": "my_backup_location" + } +} +----------------------------------- +// TESTSETUP + +To retrieve information about a registered repository, use a GET request: + +[source,console] +----------------------------------- +GET /_snapshot/my_backup +----------------------------------- + +which returns: + +[source,console-result] +----------------------------------- +{ + "my_backup": { + "type": "fs", + "settings": { + "location": "my_backup_location" + } + } +} +----------------------------------- + +To retrieve information about multiple repositories, specify a comma-delimited +list of repositories. You can also use the * wildcard when +specifying repository names. For example, the following request retrieves +information about all of the snapshot repositories that start with `repo` or +contain `backup`: + +[source,console] +----------------------------------- +GET /_snapshot/repo*,*backup* +----------------------------------- + +To retrieve information about all registered snapshot repositories, omit the +repository name or specify `_all`: + +[source,console] +----------------------------------- +GET /_snapshot +----------------------------------- + +or + +[source,console] +----------------------------------- +GET /_snapshot/_all +----------------------------------- + +[float] +=== Shared file system repository + +The shared file system repository (`"type": "fs"`) uses the shared file system to store snapshots. In order to register +the shared file system repository it is necessary to mount the same shared filesystem to the same location on all +master and data nodes. This location (or one of its parent directories) must be registered in the `path.repo` +setting on all master and data nodes. 
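The `path.repo` containment rule just described can be sketched conceptually in Python. This mirrors the check's intent only; it is not Elasticsearch's actual implementation, and the `location_allowed` helper is hypothetical. It also reflects the resolution of relative locations against the first `path.repo` entry, described below.

```python
from pathlib import PurePosixPath

PATH_REPO = ["/mount/backups", "/mount/longterm_backups"]

def location_allowed(location, path_repo=PATH_REPO):
    # A repository location must be one of the path.repo entries or live under one.
    loc = PurePosixPath(location)
    if not loc.is_absolute():
        # Relative locations are resolved against the first path.repo entry.
        loc = PurePosixPath(path_repo[0]) / loc
    return any(root == loc or root in loc.parents
               for root in map(PurePosixPath, path_repo))

print(location_allowed("/mount/backups/my_fs_backup_location"))
print(location_allowed("/tmp/elsewhere"))
```

A location outside every registered root would be rejected by Elasticsearch at repository registration time.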
+
+Assuming that the shared filesystem is mounted to `/mount/backups/my_fs_backup_location`, the following setting should
+be added to the `elasticsearch.yml` file:
+
+[source,yaml]
+--------------
+path.repo: ["/mount/backups", "/mount/longterm_backups"]
+--------------
+
+The `path.repo` setting supports Microsoft Windows UNC paths as long as at least the server name and share are
+specified as a prefix and backslashes are properly escaped:
+
+[source,yaml]
+--------------
+path.repo: ["\\\\MY_SERVER\\Snapshots"]
+--------------
+
+After all nodes are restarted, the following command can be used to register the shared file system repository with
+the name `my_fs_backup`:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_fs_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "/mount/backups/my_fs_backup_location",
+    "compress": true
+  }
+}
+-----------------------------------
+// TEST[skip:no access to absolute path]
+
+If the repository location is specified as a relative path, this path will be resolved against the first path specified
+in `path.repo`:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_fs_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_fs_backup_location",
+    "compress": true
+  }
+}
+-----------------------------------
+// TEST[continued]
+
+The following settings are supported:
+
+[horizontal]
+`location`:: Location of the snapshots. Mandatory.
+`compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. Defaults to `true`.
+`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. Specify the chunk size as a value and
+unit, for example: `1GB`, `10MB`, `5KB`, `500B`. Defaults to `null` (unlimited chunk size).
+`max_restore_bytes_per_sec`:: Throttles the per-node restore rate. Defaults to `40mb` per second.
+`max_snapshot_bytes_per_sec`:: Throttles the per-node snapshot rate. Defaults to `40mb` per second.
+`readonly`:: Makes the repository read-only. Defaults to `false`.
+
+[float]
+=== Read-only URL repository
+
+The URL repository (`"type": "url"`) can be used as an alternative, read-only way to access data created by the shared
+file system repository. The URL specified in the `url` parameter should point to the root of the shared filesystem
+repository. The following settings are supported:
+
+[horizontal]
+`url`:: Location of the snapshots. Mandatory.
+
+The URL repository supports the following protocols: "http", "https", "ftp", "file" and "jar". URL repositories with
+`http:`, `https:`, and `ftp:` URLs have to be whitelisted by specifying allowed URLs in the
+`repositories.url.allowed_urls` setting. This setting supports wildcards in the place of host, path, query, and
+fragment. For example:
+
+[source,yaml]
+-----------------------------------
+repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"]
+-----------------------------------
+
+URL repositories with `file:` URLs can only point to locations registered in the `path.repo` setting, similar to the
+shared file system repository.
+
+[float]
+[role="xpack"]
+[testenv="basic"]
+=== Source only repository
+
+A source repository enables you to create minimal, source-only snapshots that take up to 50% less space on disk.
+Source only snapshots contain stored fields and index metadata. They do not include index or doc values structures
+and are not searchable when restored. After restoring a source-only snapshot, you must <>
+the data into a new index.
+
+Source repositories delegate to another snapshot repository for storage.
+
+
+[IMPORTANT]
+==================================================
+
+Source only snapshots are only supported if the `_source` field is enabled and no source-filtering is applied.
+
+When you restore a source only snapshot:
+
+ * The restored index is read-only and can only serve `match_all` search or scroll requests to enable reindexing.
+
+ * Queries other than `match_all` and `_get` requests are not supported.
+
+ * The mapping of the restored index is empty, but the original mapping is available from the type's top-level
+   `meta` element.
+
+==================================================
+
+When you create a source repository, you must specify the type and name of the delegate repository
+where the snapshots will be stored:
+
+[source,console]
+-----------------------------------
+PUT _snapshot/my_src_only_repository
+{
+  "type": "source",
+  "settings": {
+    "delegate_type": "fs",
+    "location": "my_backup_location"
+  }
+}
+-----------------------------------
+// TEST[continued]
+
+[float]
+=== Repository plugins
+
+Other repository backends are available in these official plugins:
+
+* {plugins}/repository-s3.html[repository-s3] for S3 repository support
+* {plugins}/repository-hdfs.html[repository-hdfs] for HDFS repository support in Hadoop environments
+* {plugins}/repository-azure.html[repository-azure] for Azure storage repositories
+* {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories
+
+[float]
+=== Repository verification
+When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is
+functional on all nodes currently present in the cluster.
+The `verify` parameter can be used to explicitly disable the repository
+verification when registering or updating a repository:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_unverified_backup?verify=false
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_unverified_backup_location"
+  }
+}
+-----------------------------------
+// TEST[continued]
+
+The verification process can also be executed manually by running the following command:
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_unverified_backup/_verify
+-----------------------------------
+// TEST[continued]
+
+It returns a list of nodes where the repository was successfully verified, or an error message if the verification
+process failed.
+
+[float]
+=== Repository cleanup
+Repositories can over time accumulate data that is not referenced by any existing snapshot. This is a result of the
+data safety guarantees the snapshot functionality provides in failure scenarios during snapshot creation and the
+decentralized nature of the snapshot creation process. This unreferenced data in no way negatively impacts the
+performance or safety of a snapshot repository, but it leads to higher-than-necessary storage use. To clean up this
+unreferenced data, users can call the cleanup endpoint for a repository, which triggers a complete accounting of the
+repository's contents and the subsequent deletion of all unreferenced data that is found.
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_repository/_cleanup
+-----------------------------------
+// TEST[continued]
+
+The response to a cleanup request looks as follows:
+
+[source,console-result]
+--------------------------------------------------
+{
+  "results": {
+    "deleted_bytes": 20,
+    "deleted_blobs": 5
+  }
+}
+--------------------------------------------------
+
+Depending on the concrete repository implementation, the number of bytes freed and the number of blobs removed are
+either an approximation or an exact result. Any non-zero value for the number of blobs removed implies that
+unreferenced blobs were found and subsequently cleaned up.
+
+Please note that most of the cleanup operations executed by this endpoint are automatically executed when deleting any
+snapshot from a repository. If you regularly delete snapshots, in most cases you will get no or only minor space
+savings from this functionality and should lower the frequency with which you invoke it accordingly.
diff --git a/docs/reference/snapshot-restore/snapshots-take-snapshot.asciidoc b/docs/reference/snapshot-restore/snapshots-take-snapshot.asciidoc
new file mode 100644
index 0000000000000..9e042f4598962
--- /dev/null
+++ b/docs/reference/snapshot-restore/snapshots-take-snapshot.asciidoc
@@ -0,0 +1,194 @@
+[[snapshots-take-snapshot]]
+== Take a snapshot of one or more indices
+
+++++
+Take a snapshot
+++++
+
+A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
+cluster.
+A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following
+command:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
+-----------------------------------
+// TEST[continued]
+
+The `wait_for_completion` parameter specifies whether the request should return immediately after snapshot
+initialization (the default) or wait for snapshot completion. During snapshot initialization, information about all
+previous snapshots is loaded into memory, which means that in large repositories it may take several seconds (or
+even minutes) for this command to return even if the `wait_for_completion` parameter is set to `false`.
+
+By default, a snapshot of all open and started indices in the cluster is created. This behavior can be changed by
+specifying the list of indices in the body of the snapshot request.
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
+{
+  "indices": "index_1,index_2",
+  "ignore_unavailable": true,
+  "include_global_state": false,
+  "metadata": {
+    "taken_by": "kimchy",
+    "taken_because": "backup before upgrading"
+  }
+}
+-----------------------------------
+// TEST[continued]
+
+The list of indices that should be included in the snapshot can be specified using the `indices` parameter, which
+supports <>. The snapshot request also supports the
+`ignore_unavailable` option. Setting it to `true` will cause indices that do not exist to be ignored during snapshot
+creation. By default, when the `ignore_unavailable` option is not set and an index is missing, the snapshot request
+fails. By setting `include_global_state` to `false` it's possible to prevent the cluster global state from being
+stored as part of the snapshot. By default, the entire snapshot will fail if one or more indices participating in the
+snapshot don't have all primary shards available.
+This behaviour can be changed by setting `partial` to `true`.
+
+The `metadata` field can be used to attach arbitrary metadata to the snapshot. This may be a record of who took the
+snapshot, why it was taken, or any other data that might be useful.
+
+Snapshot names can be automatically derived using <>, similar to when creating
+new indices. Note that special characters need to be URI encoded.
+
+For example, creating a snapshot with the current day in the name, like `snapshot-2018.05.11`, can be achieved with
+the following command:
+
+[source,console]
+-----------------------------------
+# PUT /_snapshot/my_backup/
+PUT /_snapshot/my_backup/%3Csnapshot-%7Bnow%2Fd%7D%3E
+-----------------------------------
+// TEST[continued]
+
+
+The index snapshot process is incremental. In the process of making the index snapshot, Elasticsearch analyses
+the list of the index files that are already stored in the repository and copies only files that were created or
+changed since the last snapshot. This allows multiple snapshots to be preserved in the repository in a compact form.
+The snapshotting process is executed in a non-blocking fashion. All indexing and searching operations can continue to
+be executed against the index that is being snapshotted. However, a snapshot represents a point-in-time view of the
+index at the moment the snapshot was created, so no records that were added to the index after the snapshot process
+started will be present in the snapshot. The snapshot process starts immediately for the primary shards that have been
+started and are not relocating at the moment. Before version 1.2.0, the snapshot operation failed if the cluster had
+any relocating or initializing primaries of indices participating in the snapshot. Starting with version 1.2.0,
+Elasticsearch waits for relocation or initialization of shards to complete before snapshotting them.
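The incremental behavior just described can be sketched in a few lines of Python. This is an illustration of the idea only, not Elasticsearch's implementation: when a new snapshot is taken, only files that are not already present in the repository from earlier snapshots need to be copied.

```python
def files_to_copy(index_files, repository_files):
    # Only files absent from the repository are copied by the new snapshot.
    return sorted(set(index_files) - set(repository_files))

repo = {"seg_1", "seg_2"}              # files stored by an earlier snapshot
current = ["seg_1", "seg_2", "seg_3"]  # files now present in the index
print(files_to_copy(current, repo))    # only the new file needs copying
```

Files shared with earlier snapshots are simply referenced again, which is why frequent snapshots stay cheap.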
+ +Besides creating a copy of each index, the snapshot process can also store global cluster metadata, which includes persistent +cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of +the snapshot. + +Only one snapshot process can be executed in the cluster at any time. While the snapshot of a particular shard is being +created, this shard cannot be moved to another node, which can interfere with the rebalancing process and allocation +filtering. Elasticsearch will only be able to move a shard to another node (according to the current allocation +filtering settings and rebalancing algorithm) once the snapshot is finished. + +Once a snapshot is created, information about this snapshot can be obtained using the following command: + +[source,console] +----------------------------------- +GET /_snapshot/my_backup/snapshot_1 +----------------------------------- +// TEST[continued] + +This command returns basic information about the snapshot, including start and end time, the version of +Elasticsearch that created the snapshot, the list of included indices, the current state of the +snapshot, and the list of failures that occurred during the snapshot. The snapshot `state` can be: + +[horizontal] +`IN_PROGRESS`:: + + The snapshot is currently running. + +`SUCCESS`:: + + The snapshot finished and all shards were stored successfully. + +`FAILED`:: + + The snapshot finished with an error and failed to store any data. + +`PARTIAL`:: + + The global cluster state was stored, but data of at least one shard wasn't stored successfully. + The `failure` section in this case should contain more detailed information about shards + that were not processed correctly. + +`INCOMPATIBLE`:: + + The snapshot was created with an old version of Elasticsearch and therefore is incompatible with + the current version of the cluster. 
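A client polling the snapshot info endpoint typically branches on the `state` field listed above. A minimal sketch of that classification in Python (the function names and the strictness of `is_usable` are illustrative choices, not part of any client library):

```python
# The states documented above, grouped by whether polling should continue.
TERMINAL_STATES = {"SUCCESS", "FAILED", "PARTIAL", "INCOMPATIBLE"}

def is_terminal(state: str) -> bool:
    """Return True once the snapshot has reached a final state,
    i.e. a poller can stop re-fetching the snapshot info."""
    return state in TERMINAL_STATES

def is_usable(state: str) -> bool:
    """Treat only SUCCESS as fully usable.

    PARTIAL snapshots may still be restorable for the shards that were
    stored, but this sketch flags anything short of SUCCESS for
    operator attention.
    """
    return state == "SUCCESS"
```

For example, a backup script could loop while `not is_terminal(state)` and then alert when `not is_usable(state)`.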
+ + +As with repositories, information about multiple snapshots can be queried in one go, supporting wildcards as well: + +[source,console] +----------------------------------- +GET /_snapshot/my_backup/snapshot_*,some_other_snapshot +----------------------------------- +// TEST[continued] + +All snapshots currently stored in the repository can be listed using the following command: + +[source,console] +----------------------------------- +GET /_snapshot/my_backup/_all +----------------------------------- +// TEST[continued] + +The command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unavailable` can be used to +return all snapshots that are currently available. + +Getting all snapshots in the repository can be costly on cloud-based repositories, +both from a cost and a performance perspective. If the only information required is +the snapshot names/UUIDs in the repository and the indices in each snapshot, then +the optional boolean parameter `verbose` can be set to `false` to execute a more +performant and cost-effective retrieval of the snapshots in the repository. Note +that setting `verbose` to `false` omits all other information about the snapshot, +such as status information, the number of snapshotted shards, etc. The default +value of the `verbose` parameter is `true`. 
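The request paths and query parameters described above are easy to assemble programmatically. A small Python sketch (the helper name is hypothetical; only parameters documented above are emitted):

```python
from urllib.parse import urlencode

def snapshot_info_path(repository: str, snapshot: str = "_all", *,
                       verbose: bool = True,
                       ignore_unavailable: bool = False) -> str:
    """Build the request path for the get-snapshot calls shown above.

    Only non-default values are added to the query string, mirroring
    the defaults documented for `verbose` and `ignore_unavailable`.
    """
    params = {}
    if not verbose:
        params["verbose"] = "false"
    if ignore_unavailable:
        params["ignore_unavailable"] = "true"
    query = urlencode(params)
    return f"/_snapshot/{repository}/{snapshot}" + (f"?{query}" if query else "")

print(snapshot_info_path("my_backup", verbose=False))
# -> /_snapshot/my_backup/_all?verbose=false
```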
+ +It is also possible to retrieve snapshots from multiple repositories in one go, for example: + +[source,console] +----------------------------------- +GET /_snapshot/_all +GET /_snapshot/my_backup,my_fs_backup +GET /_snapshot/my*/snap* +----------------------------------- +// TEST[skip:no my_fs_backup] + +A currently running snapshot can be retrieved using the following command: + +[source,console] +----------------------------------- +GET /_snapshot/my_backup/_current +----------------------------------- +// TEST[continued] + +A snapshot can be deleted from the repository using the following command: + +[source,console] +----------------------------------- +DELETE /_snapshot/my_backup/snapshot_2 +----------------------------------- +// TEST[continued] + +When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted +snapshot and not used by any other snapshots. If the delete snapshot operation is executed while the snapshot is being +created, the snapshotting process is aborted and all files created as part of the snapshotting process are +cleaned up. Therefore, the delete snapshot operation can be used to cancel long-running snapshot operations that were +started by mistake. + +A repository can be unregistered using the following command: + +[source,console] +----------------------------------- +DELETE /_snapshot/my_backup +----------------------------------- +// TEST[continued] + +When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing +the snapshots. The snapshots themselves are left untouched and in place. 
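The delete behavior described above, removing only files that no other snapshot still references, amounts to a set difference over per-snapshot file lists. A toy model of that bookkeeping in Python (this is an illustration of the idea, not Elasticsearch's actual repository implementation):

```python
def files_to_delete(snapshots: dict[str, set[str]], doomed: str) -> set[str]:
    """Files safe to remove when deleting `doomed`: the files it
    references minus those any surviving snapshot still references."""
    still_referenced = set().union(
        *(files for name, files in snapshots.items() if name != doomed)
    )
    return snapshots[doomed] - still_referenced

repo = {
    "snapshot_1": {"seg_a", "seg_b"},
    "snapshot_2": {"seg_b", "seg_c"},  # seg_b is shared (incremental)
}
print(sorted(files_to_delete(repo, "snapshot_2")))
# -> ['seg_c']
```

Because snapshots are incremental, the shared segment `seg_b` survives the deletion of `snapshot_2`; only `seg_c` becomes unreferenced.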
+ + diff --git a/docs/reference/snapshot-restore/snapshots.asciidoc b/docs/reference/snapshot-restore/snapshots.asciidoc new file mode 100644 index 0000000000000..fd65807ea1dbf --- /dev/null +++ b/docs/reference/snapshot-restore/snapshots.asciidoc @@ -0,0 +1,87 @@ +[[snapshot-restore]] += Snapshot and restore + +[partintro] +-- + +// tag::snapshot-intro[] +A snapshot is a backup taken from a running Elasticsearch cluster. You can take +a snapshot of individual indices or of the entire cluster and store it in a +repository on a shared filesystem, and there are plugins that support remote +repositories on S3, HDFS, Azure, Google Cloud Storage and more. + +Snapshots are taken incrementally. This means that when it creates a snapshot of +an index, Elasticsearch avoids copying any data that is already stored in the +repository as part of an earlier snapshot of the same index. Therefore it can be +efficient to take snapshots of your cluster quite frequently. +// end::snapshot-intro[] + +// tag::restore-intro[] +You can restore snapshots into a running cluster via the +<>. When you restore an index, you can alter the +name of the restored index as well as some of its settings. There is a great +deal of flexibility in how the snapshot and restore functionality can be used. +// end::restore-intro[] + +You can automate your snapshot backup and restore process by using +<>. + +// tag::backup-warning[] +WARNING: You cannot back up an Elasticsearch cluster by simply taking a copy of +the data directories of all of its nodes. Elasticsearch may be making changes to +the contents of its data directories while it is running; copying its data +directories cannot be expected to capture a consistent picture of their contents. +If you try to restore a cluster from such a backup, it may fail and report +corruption and/or missing files. Alternatively, it may appear to have succeeded +though it silently lost some of its data. 
The only reliable way to back up a +cluster is by using the snapshot and restore functionality. + +// end::backup-warning[] + +[float] +=== Version compatibility + +IMPORTANT: Version compatibility refers to the underlying Lucene index +compatibility. Follow the <> +when migrating between versions. + +A snapshot contains a copy of the on-disk data structures that make up an +index. This means that snapshots can only be restored to versions of +Elasticsearch that can read the indices: + +* A snapshot of an index created in 6.x can be restored to 7.x. +* A snapshot of an index created in 5.x can be restored to 6.x. +* A snapshot of an index created in 2.x can be restored to 5.x. +* A snapshot of an index created in 1.x can be restored to 2.x. + +Conversely, snapshots of indices created in 1.x **cannot** be restored to 5.x +or 6.x, snapshots of indices created in 2.x **cannot** be restored to 6.x +or 7.x, and snapshots of indices created in 5.x **cannot** be restored to 7.x +or 8.x. + +Each snapshot can contain indices created in various versions of Elasticsearch, +and when restoring a snapshot it must be possible to restore all of the indices +into the target cluster. If any indices in a snapshot were created in an +incompatible version, you will not be able to restore the snapshot. + +IMPORTANT: When backing up your data prior to an upgrade, keep in mind that you +won't be able to restore snapshots after you upgrade if they contain indices +created in a version that's incompatible with the upgrade version. + +If you end up in a situation where you need to restore a snapshot of an index +that is incompatible with the version of the cluster you are currently running, +you can restore it on the latest compatible version and use +<> to rebuild the index on the current +version. Reindexing from remote is only possible if the original index has +source enabled. Retrieving and reindexing the data can take significantly +longer than simply restoring a snapshot. 
If you have a large amount of data, we +recommend testing the reindex from remote process with a subset of your data to +understand the time requirements before proceeding. + +-- + +include::snapshots-register-repository.asciidoc[] +include::snapshots-take-snapshot.asciidoc[] +include::snapshots-restore-snapshot.asciidoc[] +include::snapshots-monitor-snapshot-restore.asciidoc[] + From 0403bdac3d4a760ba82c091b5db7188a9038c99a Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Tue, 26 Nov 2019 19:12:17 -0800 Subject: [PATCH 02/19] [DOCS] Move snapshot-restore out of Modules --- docs/reference/glossary.asciidoc | 2 +- .../high-availability/backup-cluster-data.asciidoc | 2 +- .../high-availability/backup-cluster.asciidoc | 2 +- .../high-availability/restore-cluster-data.asciidoc | 2 +- docs/reference/ilm/getting-started-slm.asciidoc | 4 ++-- docs/reference/index.asciidoc | 2 +- docs/reference/indices/recovery.asciidoc | 2 +- docs/reference/redirects.asciidoc | 10 ++++++++++ .../{snapshots.asciidoc => index.asciidoc} | 10 +++++----- ...epository.asciidoc => register-repository.asciidoc} | 0 ...s-take-snapshot.asciidoc => take-snapshot.asciidoc} | 0 11 files changed, 23 insertions(+), 13 deletions(-) rename docs/reference/snapshot-restore/{snapshots.asciidoc => index.asciidoc} (93%) rename docs/reference/snapshot-restore/{snapshots-register-repository.asciidoc => register-repository.asciidoc} (100%) rename docs/reference/snapshot-restore/{snapshots-take-snapshot.asciidoc => take-snapshot.asciidoc} (100%) diff --git a/docs/reference/glossary.asciidoc b/docs/reference/glossary.asciidoc index 09b743b07e2ef..07270228bace9 100644 --- a/docs/reference/glossary.asciidoc +++ b/docs/reference/glossary.asciidoc @@ -223,7 +223,7 @@ during the following processes: This type of recovery is called a *local store recovery*. * <>. * Relocation of a shard to a different node in the same cluster. -* {ref}/modules-snapshots.html#restore-snapshot[Snapshot restoration]. 
+* {ref}/snapshots-restore-snapshot.html[Snapshot restoration]. // end::recovery-triggers[] // end::recovery-def[] -- diff --git a/docs/reference/high-availability/backup-cluster-data.asciidoc b/docs/reference/high-availability/backup-cluster-data.asciidoc index 485e047acd255..4053d6e47b691 100644 --- a/docs/reference/high-availability/backup-cluster-data.asciidoc +++ b/docs/reference/high-availability/backup-cluster-data.asciidoc @@ -6,7 +6,7 @@ To back up your cluster's data, you can use the <>. -include::{es-repo-dir}/modules/snapshots.asciidoc[tag=snapshot-intro] +include::../snapshot-restore/index.asciidoc[tag=snapshot-intro] [TIP] ==== diff --git a/docs/reference/high-availability/backup-cluster.asciidoc b/docs/reference/high-availability/backup-cluster.asciidoc index 5544af65bc172..3749029355453 100644 --- a/docs/reference/high-availability/backup-cluster.asciidoc +++ b/docs/reference/high-availability/backup-cluster.asciidoc @@ -1,7 +1,7 @@ [[backup-cluster]] == Back up a cluster -include::{es-repo-dir}/modules/snapshots.asciidoc[tag=backup-warning] +include::../snapshot-restore/index.asciidoc[tag=backup-warning] To have a complete backup for your cluster: diff --git a/docs/reference/high-availability/restore-cluster-data.asciidoc b/docs/reference/high-availability/restore-cluster-data.asciidoc index c9ae6da339fd4..da58e8c4f3e7f 100644 --- a/docs/reference/high-availability/restore-cluster-data.asciidoc +++ b/docs/reference/high-availability/restore-cluster-data.asciidoc @@ -4,7 +4,7 @@ Restore the data ++++ -include::{es-repo-dir}/modules/snapshots.asciidoc[tag=restore-intro] +include::../snapshot-restore/index.asciidoc[tag=restore-intro] [TIP] ==== diff --git a/docs/reference/ilm/getting-started-slm.asciidoc b/docs/reference/ilm/getting-started-slm.asciidoc index 54ebef9a8dd3b..a4d056046d684 100644 --- a/docs/reference/ilm/getting-started-slm.asciidoc +++ b/docs/reference/ilm/getting-started-slm.asciidoc @@ -5,7 +5,7 @@ Let's get started with snapshot 
lifecycle management (SLM) by working through a hands-on scenario. The goal of this example is to automatically back up {es} -indices using the <> every day at a particular +indices using the <> every day at a particular time. Once these snapshots have been created, they are kept for a configured amount of time and then deleted per a configured retention policy. @@ -59,7 +59,7 @@ POST /_security/role/slm-read-only === Setting up a repository Before we can set up an SLM policy, we'll need to set up a -<> where the snapshots will be +snapshot repository where the snapshots will be stored. Repositories can use {plugins}/repository.html[many different backends], including cloud storage providers. You'll probably want to use one of these in production, but for this example we'll use a shared file system repository: diff --git a/docs/reference/index.asciidoc b/docs/reference/index.asciidoc index 4684b875ba2aa..e203b1d4e8730 100644 --- a/docs/reference/index.asciidoc +++ b/docs/reference/index.asciidoc @@ -50,7 +50,7 @@ include::data-rollup-transform.asciidoc[] include::high-availability.asciidoc[] -include::modules/snapshots.asciidoc[] +include::snapshot-restore/index.asciidoc[] include::{xes-repo-dir}/security/index.asciidoc[] diff --git a/docs/reference/indices/recovery.asciidoc b/docs/reference/indices/recovery.asciidoc index 13a3e3788320c..884f7d2fd3122 100644 --- a/docs/reference/indices/recovery.asciidoc +++ b/docs/reference/indices/recovery.asciidoc @@ -77,7 +77,7 @@ This type of recovery is called a local store recovery. `SNAPSHOT`:: The recovery is related to -a <>. +a <>. `REPLICA`:: The recovery is related to diff --git a/docs/reference/redirects.asciidoc b/docs/reference/redirects.asciidoc index 050b273e85ad3..779c4ddf5ca49 100644 --- a/docs/reference/redirects.asciidoc +++ b/docs/reference/redirects.asciidoc @@ -1117,3 +1117,13 @@ See <>, === Snapshot module See <>. + +[role="exclude",id="restore-snapshot"] +=== Restore snapshot + +See <>. 
+ +[role="exclude",id="snapshots-repositories"] +=== Snapshot repositories + +See <>. diff --git a/docs/reference/snapshot-restore/snapshots.asciidoc b/docs/reference/snapshot-restore/index.asciidoc similarity index 93% rename from docs/reference/snapshot-restore/snapshots.asciidoc rename to docs/reference/snapshot-restore/index.asciidoc index fd65807ea1dbf..63ddf806afdb4 100644 --- a/docs/reference/snapshot-restore/snapshots.asciidoc +++ b/docs/reference/snapshot-restore/index.asciidoc @@ -18,7 +18,7 @@ efficient to take snapshots of your cluster quite frequently. // tag::restore-intro[] You can restore snapshots into a running cluster via the -<>. When you restore an index, you can alter the +<>. When you restore an index, you can alter the name of the restored index as well as some of its settings. There is a great deal of flexibility in how the snapshot and restore functionality can be used. // end::restore-intro[] @@ -80,8 +80,8 @@ understand the time requirements before proceeding. 
-- -include::snapshots-register-repository.asciidoc[] -include::snapshots-take-snapshot.asciidoc[] -include::snapshots-restore-snapshot.asciidoc[] -include::snapshots-monitor-snapshot-restore.asciidoc[] +include::register-repository.asciidoc[] +include::take-snapshot.asciidoc[] +include::restore-snapshot.asciidoc[] +include::monitor-snapshot-restore.asciidoc[] diff --git a/docs/reference/snapshot-restore/snapshots-register-repository.asciidoc b/docs/reference/snapshot-restore/register-repository.asciidoc similarity index 100% rename from docs/reference/snapshot-restore/snapshots-register-repository.asciidoc rename to docs/reference/snapshot-restore/register-repository.asciidoc diff --git a/docs/reference/snapshot-restore/snapshots-take-snapshot.asciidoc b/docs/reference/snapshot-restore/take-snapshot.asciidoc similarity index 100% rename from docs/reference/snapshot-restore/snapshots-take-snapshot.asciidoc rename to docs/reference/snapshot-restore/take-snapshot.asciidoc From 7b438cf4cef7bce287725ecddd04eec34956cac8 Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Tue, 3 Dec 2019 13:11:02 -0800 Subject: [PATCH 03/19] Updated redirects. --- docs/reference/redirects.asciidoc | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/reference/redirects.asciidoc b/docs/reference/redirects.asciidoc index 779c4ddf5ca49..aaaa882b7cc0b 100644 --- a/docs/reference/redirects.asciidoc +++ b/docs/reference/redirects.asciidoc @@ -1127,3 +1127,4 @@ See <>. === Snapshot repositories See <>. + From d225ba52d944572592569060d8dc90996dacd643 Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Tue, 3 Dec 2019 16:16:15 -0800 Subject: [PATCH 04/19] [DOCS] Incorporates comments from @jrodewig. 
--- .../reference/snapshot-restore/index.asciidoc | 41 ++++++++++--------- .../monitor-snapshot-restore.asciidoc | 2 +- .../register-repository.asciidoc | 7 +++- 3 files changed, 29 insertions(+), 21 deletions(-) diff --git a/docs/reference/snapshot-restore/index.asciidoc b/docs/reference/snapshot-restore/index.asciidoc index 63ddf806afdb4..3752fd36a6d3b 100644 --- a/docs/reference/snapshot-restore/index.asciidoc +++ b/docs/reference/snapshot-restore/index.asciidoc @@ -5,30 +5,33 @@ -- // tag::snapshot-intro[] -A snapshot is a backup taken from a running Elasticsearch cluster. You can take -a snapshot of individual indices or of the entire cluster and store it in a -repository on a shared filesystem, and there are plugins that support remote -repositories on S3, HDFS, Azure, Google Cloud Storage and more. +A _snapshot_ is a backup taken from a running {es} cluster. +You can take snapshots of individual indices or of the entire cluster. +Snapshots can be stored in either local or remote repositories. +Remote repositories can reside on S3, HDFS, Azure, Google Cloud Storage, +and other platforms supported by a repository plugin. -Snapshots are taken incrementally. This means that when it creates a snapshot of -an index, Elasticsearch avoids copying any data that is already stored in the -repository as part of an earlier snapshot of the same index. Therefore it can be -efficient to take snapshots of your cluster quite frequently. -// end::snapshot-intro[] +Snapshots are incremental: each snapshot of an index only stores data that +is not part of an earlier snapshot. +This enables you to take frequent snapshots with minimal overhead. +// end::snapshot-intro[] // tag::restore-intro[] -You can restore snapshots into a running cluster via the -<>. When you restore an index, you can alter the -name of the restored index as well as some of its settings. There is a great -deal of flexibility in how the snapshot and restore functionality can be used. 
+You can restore snapshots to a running cluster with the <>. +By default, all indices in the snapshot are restored. +Alternatively, you can restore specific indices or restore the cluster state from a snapshot. +When restoring indices, you can modify the index name and selected index settings. // end::restore-intro[] -You can automate your snapshot backup and restore process by using -<>. +You must <> +before you can <>. + +You can use <> +to automatically take and manage snapshots. // tag::backup-warning[] -WARNING: You cannot back up an Elasticsearch cluster by simply taking a copy of -the data directories of all of its nodes. Elasticsearch may be making changes to +WARNING: You cannot back up an {es} cluster by simply copying +the data directories of all of its nodes. {es} may be making changes to the contents of its data directories while it is running; copying its data directories cannot be expected to capture a consistent picture of their contents. If you try to restore a cluster from such a backup, it may fail and report @@ -47,7 +50,7 @@ when migrating between versions. A snapshot contains a copy of the on-disk data structures that make up an index. This means that snapshots can only be restored to versions of -Elasticsearch that can read the indices: +{es} that can read the indices: * A snapshot of an index created in 6.x can be restored to 7.x. * A snapshot of an index created in 5.x can be restored to 6.x. @@ -59,7 +62,7 @@ or 6.x, snapshots of indices created in 2.x **cannot** be restored to 6.x or 7.x, and snapshots of indices created in 5.x **cannot** be restored to 7.x or 8.x. -Each snapshot can contain indices created in various versions of Elasticsearch, +Each snapshot can contain indices created in various versions of {es}, and when restoring a snapshot it must be possible to restore all of the indices into the target cluster. If any indices in a snapshot were created in an incompatible version, you will not be able to restore the snapshot. 
diff --git a/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc b/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc index b97193bf6fe12..789454f905019 100644 --- a/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc +++ b/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc @@ -5,7 +5,7 @@ Monitor snapshot and restore ++++ -There are several ways to monitor the progress of the snapshot and restores processes while they are running. Both +There are several ways to monitor the progress of the snapshot and restore processes while they are running. Both operations support `wait_for_completion` parameter that would block client until the operation is completed. This is the simplest method that can be used to get notified about operation completion. diff --git a/docs/reference/snapshot-restore/register-repository.asciidoc b/docs/reference/snapshot-restore/register-repository.asciidoc index 0ceb87ffd0d35..85f1c40ebeb58 100644 --- a/docs/reference/snapshot-restore/register-repository.asciidoc +++ b/docs/reference/snapshot-restore/register-repository.asciidoc @@ -80,6 +80,7 @@ GET /_snapshot/_all ----------------------------------- [float] +[[snapshots-filesystem-repository]] === Shared file system repository The shared file system repository (`"type": "fs"`) uses the shared file system to store snapshots. In order to register @@ -147,6 +148,7 @@ unit, for example: `1GB`, `10MB`, `5KB`, `500B`. Defaults to `null` (unlimited c `readonly`:: Makes repository read-only. Defaults to `false`. [float] +[[snapshots-read-only-repository]] === Read-only URL repository The URL repository (`"type": "url"`) can be used as an alternative read-only way to access data created by the shared file @@ -171,6 +173,7 @@ shared file system repository. 
[float] [role="xpack"] [testenv="basic"] +[[snapshots-source-only-repository]] === Source only repository A source repository enables you to create minimal, source-only snapshots that take up to 50% less space on disk. @@ -180,7 +183,6 @@ the data into a new index. Source repositories delegate to another snapshot repository for storage. - [IMPORTANT] ================================================== @@ -213,6 +215,7 @@ PUT _snapshot/my_src_only_repository // TEST[continued] [float] +[[snapshots-repository-plugins]] === Repository plugins Other repository backends are available in these official plugins: @@ -223,6 +226,7 @@ Other repository backends are available in these official plugins: * {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories [float] +[[snapshots-repository-verification]] === Repository verification When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is functional on all nodes currently present in the cluster. The `verify` parameter can be used to explicitly disable the repository @@ -251,6 +255,7 @@ POST /_snapshot/my_unverified_backup/_verify It returns a list of nodes where repository was successfully verified or an error message if verification process failed. [float] +[[snapshots-repository-cleanup]] === Repository cleanup Repositories can over time accumulate data that is not referenced by any existing snapshot. 
This is a result of the data safety guarantees the snapshot functionality provides in failure scenarios during snapshot creation and the decentralized nature of the snapshot creation From 08f5a09b6e68b3d21699f0c8b1f534375d7bd1a1 Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Tue, 3 Dec 2019 17:18:01 -0800 Subject: [PATCH 05/19] [DOCS] Fix snippet tests --- .../monitor-snapshot-restore.asciidoc | 18 +++++++++++++++++- .../snapshot-restore/restore-snapshot.asciidoc | 18 +++++++++++++++++- .../snapshot-restore/take-snapshot.asciidoc | 16 ++++++++++++++-- 3 files changed, 48 insertions(+), 4 deletions(-) diff --git a/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc b/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc index 789454f905019..326ba9cba93fb 100644 --- a/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc +++ b/docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc @@ -9,13 +9,29 @@ There are several ways to monitor the progress of the snapshot and restore proce operations support `wait_for_completion` parameter that would block client until the operation is completed. This is the simplest method that can be used to get notified about operation completion. +//// +[source,console] +----------------------------------- +PUT /_snapshot/my_backup +{ + "type": "fs", + "settings": { + "location": "my_backup_location" + } +} + +PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true +----------------------------------- +// TESTSETUP + +//// + The snapshot operation can be also monitored by periodic calls to the snapshot info: [source,console] ----------------------------------- GET /_snapshot/my_backup/snapshot_1 ----------------------------------- -// TEST[continued] Please note that snapshot info operation uses the same resources and thread pool as the snapshot operation. 
So, executing a snapshot info operation while large shards are being snapshotted can cause the snapshot info operation to wait diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc index 343511083f3f0..197e15674a909 100644 --- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc +++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc @@ -5,13 +5,29 @@ Restore a snapshot ++++ +//// +[source,console] +----------------------------------- +PUT /_snapshot/my_backup +{ + "type": "fs", + "settings": { + "location": "my_backup_location" + } +} + +PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true +----------------------------------- +// TESTSETUP + +//// + A snapshot can be restored using the following command: [source,console] ----------------------------------- POST /_snapshot/my_backup/snapshot_1/_restore ----------------------------------- -// TEST[continued] By default, all indices in the snapshot are restored, and the cluster state is *not* restored. It's possible to select indices that should be restored as well diff --git a/docs/reference/snapshot-restore/take-snapshot.asciidoc b/docs/reference/snapshot-restore/take-snapshot.asciidoc index 9e042f4598962..3a6e4488ec896 100644 --- a/docs/reference/snapshot-restore/take-snapshot.asciidoc +++ b/docs/reference/snapshot-restore/take-snapshot.asciidoc @@ -9,11 +9,24 @@ A repository can contain multiple snapshots of the same cluster. Snapshots are i cluster. 
A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following command: +//// +[source,console] +----------------------------------- +PUT /_snapshot/my_backup +{ + "type": "fs", + "settings": { + "location": "my_backup_location" + } +} +----------------------------------- +// TESTSETUP +//// + [source,console] ----------------------------------- PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true ----------------------------------- -// TEST[continued] The `wait_for_completion` parameter specifies whether or not the request should return immediately after snapshot initialization (default) or wait for snapshot completion. During snapshot initialization, information about all @@ -36,7 +49,6 @@ PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true } } ----------------------------------- -// TEST[continued] The list of indices that should be included into the snapshot can be specified using the `indices` parameter that supports <>. The snapshot request also supports the From 98ab43580fc1e4b99ca4d38a2c6caf7a2e72cd8a Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Wed, 4 Dec 2019 15:20:34 -0800 Subject: [PATCH 06/19] Resolve test error. 
--- docs/reference/snapshot-restore/restore-snapshot.asciidoc | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc index 197e15674a909..131d13d1183e3 100644 --- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc +++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc @@ -84,6 +84,7 @@ the index `index_1` without creating any replicas while switching back to defaul POST /_snapshot/my_backup/snapshot_1/_restore { "indices": "index_1", + "ignore_unavailable": true, "index_settings": { "index.number_of_replicas": 0 }, From 6238f5982accf8121fbd4a811d5082ce0a4fa596 Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Thu, 19 Dec 2019 16:53:49 -0800 Subject: [PATCH 07/19] Test fix --- docs/reference/snapshot-restore/take-snapshot.asciidoc | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/reference/snapshot-restore/take-snapshot.asciidoc b/docs/reference/snapshot-restore/take-snapshot.asciidoc index 3a6e4488ec896..35632afc4f5c6 100644 --- a/docs/reference/snapshot-restore/take-snapshot.asciidoc +++ b/docs/reference/snapshot-restore/take-snapshot.asciidoc @@ -49,6 +49,7 @@ PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true } } ----------------------------------- +// TEST[continued] The list of indices that should be included into the snapshot can be specified using the `indices` parameter that supports <>. 
The snapshot request also supports the From 9e19e16b654deee1682e2b43b760f968bb1b00f6 Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Thu, 19 Dec 2019 17:39:00 -0800 Subject: [PATCH 08/19] Skip subsequent snapshot --- docs/reference/snapshot-restore/take-snapshot.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/reference/snapshot-restore/take-snapshot.asciidoc b/docs/reference/snapshot-restore/take-snapshot.asciidoc index 35632afc4f5c6..20c2b4837306b 100644 --- a/docs/reference/snapshot-restore/take-snapshot.asciidoc +++ b/docs/reference/snapshot-restore/take-snapshot.asciidoc @@ -49,7 +49,7 @@ PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true } } ----------------------------------- -// TEST[continued] +// TEST[skip:cannot complete subsequent snapshot] The list of indices that should be included into the snapshot can be specified using the `indices` parameter that supports <>. The snapshot request also supports the From e3edec133540f6a3eefe28c5031fed846f2d051c Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Thu, 19 Dec 2019 18:03:09 -0800 Subject: [PATCH 09/19] Test fix. --- docs/reference/snapshot-restore/restore-snapshot.asciidoc | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc index 131d13d1183e3..0e5a5ebd29b4c 100644 --- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc +++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc @@ -188,6 +188,7 @@ The output looks similar to the following: ] } -------------------------------------------------- +// TESTRESPONSE[s/\d+/$body.$_path/] The output is composed of different sections. The `stats` sub-object provides details on the number and size of files that were snapshotted. 
 As snapshots are incremental, copying only the Lucene segments that are not already in the repository,

From b431ca9d371d2d1e8a73eff5b396493caf7fa190 Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 15:42:54 -0800
Subject: [PATCH 10/19] Fix test

---
 docs/reference/snapshot-restore/restore-snapshot.asciidoc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index 0e5a5ebd29b4c..aae59ad17a88b 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -188,7 +188,8 @@ The output looks similar to the following:
 ]
 }
 --------------------------------------------------
-// TESTRESPONSE[s/\d+/$body.$_path/]
+// TESTRESPONSE[s/"uuid": XuBo4l4ISYiVg0nYUen9zg/"uuid": $body.uuid/]
+// TESTRESPONSE[s/"done": 5/"uuid": $body.done/]
 The output is composed of different sections.
 The `stats` sub-object provides details on the number and size of files that were
 snapshotted.
 As snapshots are incremental, copying only the Lucene segments that are not already in the repository,

From cc17e6aeda7033c8d5696187568f09d4ff436b06 Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 16:30:10 -0800
Subject: [PATCH 11/19] Fix test

---
 docs/reference/snapshot-restore/restore-snapshot.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index aae59ad17a88b..b0f40ea447405 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -189,7 +189,7 @@ The output looks similar to the following:
 }
 --------------------------------------------------
 // TESTRESPONSE[s/"uuid": XuBo4l4ISYiVg0nYUen9zg/"uuid": $body.uuid/]
-// TESTRESPONSE[s/"done": 5/"uuid": $body.done/]
+// TESTRESPONSE[s/"shards_stats.done": 5/"shards_stats.done": $body.shards_stats.done/]
 The output is composed of different sections.
 The `stats` sub-object provides details on the number and size of files that were
 snapshotted.
 As snapshots are incremental, copying only the Lucene segments that are not already in the repository,

From e9ff3a1e648103f40e5f17ef8a7442abf842e516 Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 17:07:20 -0800
Subject: [PATCH 12/19] Fix test

---
 .../snapshot-restore/restore-snapshot.asciidoc | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index b0f40ea447405..8884fe55421d4 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -189,7 +189,16 @@ The output looks similar to the following:
 }
 --------------------------------------------------
 // TESTRESPONSE[s/"uuid": XuBo4l4ISYiVg0nYUen9zg/"uuid": $body.uuid/]
-// TESTRESPONSE[s/"shards_stats.done": 5/"shards_stats.done": $body.shards_stats.done/]
+// TESTRESPONSE[s/"done": 5/"done": $body.shards_stats.done/]
+// TESTRESPONSE[s/"total": 5/"total": $body.shards_stats.total/]
+// TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.incremental.file_count/]
+// TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.incremental.size_in_bytes/]
+// TESTRESPONSE[s/"file_count": 7//]
+// TESTRESPONSE[s/"size_in_bytes": 4704//]
+// TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.total.file_count/]
+// TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.total.size_in_bytes/]
+// TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": $body.stats.start_time_in_millis/]
+// TESTRESPONSE[s/"time_in_millis": 358/"time_in_millis": $body.stats.time_in_millis/]
 The output is composed of different sections.
 The `stats` sub-object provides details on the number and size of files that were
 snapshotted.
 As snapshots are incremental, copying only the Lucene segments that are not already in the repository,

From 7468f03a9699908f88dfe2518a2a3b2160f5f2e0 Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 17:13:49 -0800
Subject: [PATCH 13/19] Fix test

---
 docs/reference/snapshot-restore/restore-snapshot.asciidoc | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index 8884fe55421d4..8241e55e00eec 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -193,8 +193,6 @@ The output looks similar to the following:
 // TESTRESPONSE[s/"total": 5/"total": $body.shards_stats.total/]
 // TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.incremental.file_count/]
 // TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.incremental.size_in_bytes/]
-// TESTRESPONSE[s/"file_count": 7//]
-// TESTRESPONSE[s/"size_in_bytes": 4704//]
 // TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.total.file_count/]
 // TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.total.size_in_bytes/]
 // TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": $body.stats.start_time_in_millis/]

From 5f57c9c808a0aa98b37a962cf2bb1851964e84ad Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 17:32:35 -0800
Subject: [PATCH 14/19] Fix test

---
 docs/reference/snapshot-restore/restore-snapshot.asciidoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index 8241e55e00eec..a0038ed8afc25 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -191,8 +191,8 @@ The output looks similar to the following:
 // TESTRESPONSE[s/"uuid": XuBo4l4ISYiVg0nYUen9zg/"uuid": $body.uuid/]
 // TESTRESPONSE[s/"done": 5/"done": $body.shards_stats.done/]
 // TESTRESPONSE[s/"total": 5/"total": $body.shards_stats.total/]
-// TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.incremental.file_count/]
-// TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.incremental.size_in_bytes/]
+// TESTRESPONSE[s/"file_count": 8/"file_count": 0/]
+// TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": 0/]
 // TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.total.file_count/]
 // TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.total.size_in_bytes/]
 // TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": $body.stats.start_time_in_millis/]
 // TESTRESPONSE[s/"time_in_millis": 358/"time_in_millis": $body.stats.time_in_millis/]

From 5d41aa697a768d5da742ed14d1482afc20522062 Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 17:51:09 -0800
Subject: [PATCH 15/19] Fix test

---
 docs/reference/snapshot-restore/restore-snapshot.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index a0038ed8afc25..16650bd316ae1 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -196,7 +196,7 @@ The output looks similar to the following:
 // TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.total.file_count/]
 // TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.total.size_in_bytes/]
 // TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": $body.stats.start_time_in_millis/]
-// TESTRESPONSE[s/"time_in_millis": 358/"time_in_millis": $body.stats.time_in_millis/]
+// TESTRESPONSE[s/"time_in_millis": 358/"time_in_millis": 0/]
 The output is composed of different sections.
 The `stats` sub-object provides details on the number and size of files that were
 snapshotted.
 As snapshots are incremental, copying only the Lucene segments that are not already in the repository,

From c5f2066f7a37759dfb6edfd631a27466302ff01f Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 18:11:42 -0800
Subject: [PATCH 16/19] Fix test

---
 docs/reference/snapshot-restore/restore-snapshot.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index 16650bd316ae1..1fc7a54f0a9dc 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -195,7 +195,7 @@ The output looks similar to the following:
 // TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": 0/]
 // TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.total.file_count/]
 // TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.total.size_in_bytes/]
-// TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": $body.stats.start_time_in_millis/]
+// TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": ./]
 // TESTRESPONSE[s/"time_in_millis": 358/"time_in_millis": 0/]
 The output is composed of different sections.
 The `stats` sub-object provides details on the number and size of files that were

From a4ad51a4db9eeb368381acd73339e3e49d2b8ce0 Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 18:21:43 -0800
Subject: [PATCH 17/19] Fix test

---
 docs/reference/snapshot-restore/restore-snapshot.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index 1fc7a54f0a9dc..b3b9bc1af3b3c 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -195,7 +195,7 @@ The output looks similar to the following:
 // TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": 0/]
 // TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.total.file_count/]
 // TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.total.size_in_bytes/]
-// TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": ./]
+// TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": \./]
 // TESTRESPONSE[s/"time_in_millis": 358/"time_in_millis": 0/]
 The output is composed of different sections.
 The `stats` sub-object provides details on the number and size of files that were

From c9cb83da4444d13979075f8f09805b76aa8395bf Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 18:47:27 -0800
Subject: [PATCH 18/19] Fix test

---
 .../snapshot-restore/restore-snapshot.asciidoc | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index b3b9bc1af3b3c..2967429ce9305 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -188,15 +188,7 @@ The output looks similar to the following:
 ]
 }
 --------------------------------------------------
-// TESTRESPONSE[s/"uuid": XuBo4l4ISYiVg0nYUen9zg/"uuid": $body.uuid/]
-// TESTRESPONSE[s/"done": 5/"done": $body.shards_stats.done/]
-// TESTRESPONSE[s/"total": 5/"total": $body.shards_stats.total/]
-// TESTRESPONSE[s/"file_count": 8/"file_count": 0/]
-// TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": 0/]
-// TESTRESPONSE[s/"file_count": 8/"file_count": $body.stats.total.file_count/]
-// TESTRESPONSE[s/"size_in_bytes": 4704/"size_in_bytes": $body.stats.total.size_in_bytes/]
-// TESTRESPONSE[s/"start_time_in_millis": 1526280280355/"start_time_in_millis": \./]
-// TESTRESPONSE[s/"time_in_millis": 358/"time_in_millis": 0/]
+// TESTRESPONSE[skip: No snapshot status to validate.]
 The output is composed of different sections.
 The `stats` sub-object provides details on the number and size of files that were
 snapshotted.
 As snapshots are incremental, copying only the Lucene segments that are not already in the repository,

From a8c758c85d6098af21a93d4c2eadf6eb59cb1d99 Mon Sep 17 00:00:00 2001
From: Deb Adair
Date: Fri, 20 Dec 2019 19:06:13 -0800
Subject: [PATCH 19/19] Fix test

---
 docs/reference/snapshot-restore/restore-snapshot.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/reference/snapshot-restore/restore-snapshot.asciidoc b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
index 2967429ce9305..1e1d8b914bb63 100644
--- a/docs/reference/snapshot-restore/restore-snapshot.asciidoc
+++ b/docs/reference/snapshot-restore/restore-snapshot.asciidoc
@@ -202,4 +202,4 @@ Multiple ids are also supported:
 -----------------------------------
 GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
 -----------------------------------
-// TEST[continued]
+// TEST[skip: no snapshot_2 to get]
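The `TESTRESPONSE[s/pattern/replacement/]` annotations edited throughout this series rewrite the documented response snippet before the docs test harness compares it against the live response. If those rules are applied in order as plain global regex substitutions (a simplifying assumption, not a claim about the actual framework's internals), then two rules sharing the identical pattern `"file_count": 8` cannot target two different fields: the first rule rewrites every occurrence, leaving nothing for the second to match — consistent with the later patches in this series switching one of the duplicated pairs to literal values. A rough sketch of that pitfall (`apply_substitutions` and the sample snippet are illustrative only):

```python
import re

def apply_substitutions(expected, rules):
    # Apply TESTRESPONSE-style s/pattern/replacement/ rules in order.
    # re.sub rewrites every occurrence of the pattern, so a later rule
    # with an identical pattern finds nothing left to match.
    for pattern, replacement in rules:
        expected = re.sub(pattern, replacement, expected)
    return expected

snippet = '"incremental": { "file_count": 8 }, "total": { "file_count": 8 }'
rules = [
    (r'"file_count": 8', '"file_count": $body.stats.incremental.file_count'),
    (r'"file_count": 8', '"file_count": $body.stats.total.file_count'),
]
# Both occurrences are consumed by the first rule; the second rule is a no-op.
print(apply_substitutions(snippet, rules))
```

Under this model, distinguishing the two `file_count` fields requires either anchoring each pattern with surrounding context or replacing one pair with fixed literal values, which is the direction the series ultimately takes.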