
SNAPSHOTS: Allow Parallel Restore Operations #36397

Merged

Conversation

@original-brownbear (Member) commented Dec 8, 2018

  • Enable parallel restore operations
  • Add uuid to restore in progress entries to uniquely identify them
  • Adjust restore in progress entries to be a map in cluster state
  • Added tests for:
    • Parallel restore from two different snapshots
    • Parallel restore from a single snapshot to different indices to test uuid identifiers are correctly used by RestoreService and routing allocator
    • Parallel restore with waiting for completion to test transport actions correctly use uuid identifiers

@original-brownbear added the >enhancement, :Distributed/Snapshot/Restore, and v7.0.0 labels on Dec 8, 2018
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed

@@ -330,7 +332,14 @@ public ClusterState execute(ClusterState currentState) {

shards = shardsBuilder.build();
RestoreInProgress.Entry restoreEntry = new RestoreInProgress.Entry(snapshot, overallState(RestoreInProgress.State.INIT, shards), Collections.unmodifiableList(new ArrayList<>(indices.keySet())), shards);
builder.putCustom(RestoreInProgress.TYPE, new RestoreInProgress(restoreEntry));
List<RestoreInProgress.Entry> newEntries;
@original-brownbear (Member Author)

Previously we assumed only a single restore at a time; now we have to add restoreEntry to the existing list if one is already there.
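To make the change concrete, here is a minimal, hypothetical sketch of the append-instead-of-replace logic, using plain java.util collections as stand-ins for the cluster-state types (names are illustrative, not the PR's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class MergeRestoreEntries {
    // Simplified stand-in for RestoreInProgress.Entry.
    record Entry(String snapshot) {}

    // Previously the cluster state was rebuilt from a single entry; with
    // parallel restores the new entry is appended to the ones already in flight.
    static List<Entry> withNewEntry(List<Entry> existing, Entry restoreEntry) {
        List<Entry> newEntries = new ArrayList<>(existing); // keep restores already running
        newEntries.add(restoreEntry);                        // add the restore being started
        return List.copyOf(newEntries);
    }

    public static void main(String[] args) {
        List<Entry> current = List.of(new Entry("snap-1"));
        List<Entry> updated = withNewEntry(current, new Entry("snap-2"));
        System.out.println(updated.size()); // 2
    }
}
```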

@@ -617,7 +626,7 @@ public RestoreInProgress applyChanges(final RestoreInProgress oldRestore) {
for (RestoreInProgress.Entry entry : oldRestore.entries()) {
Snapshot snapshot = entry.snapshot();
Updates updates = shardChanges.get(snapshot);
if (updates.shards.isEmpty() == false) {
if (updates != null && updates.shards.isEmpty() == false) {
@original-brownbear (Member Author) Dec 8, 2018

Having updates here no longer necessarily means there is an update for every snapshot, so we need the null check.

@original-brownbear (Member Author)

@ywelsch how about this for the first step? I think it has everything we wanted functionally:

  • Enforces BWC
  • Parallel snapshots work

Then we could do potential (see below) next steps in a follow up:

  • Add UUIDs to restore operations and move to map-type data structures where they offer a performance improvement.
    • I'm actually not so convinced we need to do either of these two at this point. The only place I could find so far (could be I missed something) where we'd get an actual speed-up is org.elasticsearch.snapshots.RestoreService#restoreInProgress, which is only used in the restore transport action org.elasticsearch.action.admin.cluster.snapshots.restore.TransportRestoreSnapshotAction#masterOperation. I'm also not so convinced we should add UUIDs to restores when the shards allow us to identify exactly which restore goes where (and having done the coding now, that isn't really an issue in the first place). Isn't that just redundant bytes in the cluster state (that we would need to add BwC code for)?

@ywelsch (Contributor) left a comment

Add UUIDs to restore operations and move to map-type data structures where it's a performance improvement.
I'm actually not so convinced we need to do either of these two at this point. The only place that I could find (so far ... could be I missed something) where we'd get an actual speed-up here is org.elasticsearch.snapshots.RestoreService#restoreInProgress which is only used in the restore transport action

RestoreInProgressAllocationDecider is another such place, which might actually be called far more often.

The relation between RestoreInProgress and the routing table can be quite implicit. Assume for example you restore index1 and index2. Index1 is finished but index2 is not; you then close index1 and start another restore of index1. There will then be two RestoreInProgress entries in the cluster state for the same snapshot and index. One of them will only have completed entries, however. We then need to be extra careful never to update completed entries (see my other comment).

I also wonder how TransportRestoreSnapshotAction can handle concurrent restores of the same snapshot correctly. It determines "finished" status of the action based on the snapshot name, and can easily get confused if there are 2 concurrent restores from the same snapshot. The test you've added does not exhibit this because it's concurrently restoring from two different snapshots. For CCR, we will need this functionality though (concurrent restores from same snapshot, but different indices).

I'm not sure how we can do this without adding a uuid, at least on the RestoreInProgress level.
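The ambiguity described above can be sketched as follows. This is a hypothetical illustration with a plain HashMap standing in for the uuid-keyed cluster-state map, not the PR's actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class UuidKeyedRestores {
    // Simplified stand-in for RestoreInProgress.Entry.
    record Entry(String snapshot, String index) {}

    // Register a restore under a fresh uuid and return the uuid used as key.
    static String track(Map<String, Entry> entries, Entry entry) {
        String restoreUuid = UUID.randomUUID().toString();
        entries.put(restoreUuid, entry);
        return restoreUuid;
    }

    public static void main(String[] args) {
        Map<String, Entry> entries = new HashMap<>();
        // Same snapshot, two concurrent restore operations targeting different indices:
        String restoreA = track(entries, new Entry("snap-1", "index-1"));
        String restoreB = track(entries, new Entry("snap-1", "index-2"));
        // A snapshot-name lookup would be ambiguous; uuid lookups are not.
        System.out.println(entries.get(restoreA).index()); // index-1
        System.out.println(entries.get(restoreB).index()); // index-2
    }
}
```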

@@ -617,7 +626,7 @@ public RestoreInProgress applyChanges(final RestoreInProgress oldRestore) {
for (RestoreInProgress.Entry entry : oldRestore.entries()) {
Snapshot snapshot = entry.snapshot();
Updates updates = shardChanges.get(snapshot);
if (updates.shards.isEmpty() == false) {
if (updates != null && updates.shards.isEmpty() == false) {
ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = ImmutableOpenMap.builder(entry.shards());
for (Map.Entry<ShardId, ShardRestoreStatus> shard : updates.shards.entrySet()) {
shardsBuilder.put(shard.getKey(), shard.getValue());
@ywelsch (Contributor)

I think this should now only be updated if the previous entry is not completed

@original-brownbear (Member Author)

Added that check now :)

@ywelsch (Contributor)

I meant that if the respective ShardRestoreStatus is already completed, there is no need to update it here.

@original-brownbear (Member Author)

@ywelsch thanks for taking a look!

I also wonder how TransportRestoreSnapshotAction can handle concurrent restores of the same snapshot correctly. It determines "finished" status of the action based on the snapshot name, and can easily get confused if there are 2 concurrent restores from the same snapshot. The test you've added does not exhibit this because it's concurrently restoring from two different snapshots. For CCR, we will need this functionality though (concurrent restores from same snapshot, but different indices).

Right, I missed this case, sorry about that. Now I get it better :) I'll add a test for this, plus UUIDs and map-style lookups :)

@original-brownbear (Member Author)

@elasticmachine retest this please

@@ -92,12 +92,13 @@ protected void masterOperation(final RestoreSnapshotRequest request, final Clust
public void onResponse(RestoreCompletionResponse restoreCompletionResponse) {
if (restoreCompletionResponse.getRestoreInfo() == null && request.waitForCompletion()) {
final Snapshot snapshot = restoreCompletionResponse.getSnapshot();
String uuid = restoreCompletionResponse.getUuid();
@original-brownbear (Member Author)

Lookup of in-progress restores now works by String uuid :)

@ywelsch (Contributor) left a comment

A few more minor things


public static final class Builder {

private final List<Entry> entries = new ArrayList<>();
@ywelsch (Contributor)

It's easier just to use an ImmutableOpenMap.Builder here. Then you can easily add to it and just call build() to pass the fully-built ImmutableOpenMap to the RestoreInProgress constructor.

Entry[] entries = new Entry[in.readVInt()];
for (int i = 0; i < entries.length; i++) {
int count = in.readVInt();
final List<Entry> entries = new ArrayList<>(count);
@ywelsch (Contributor)

ImmutableOpenMap.Builder

// mark restore entry for this shard as failed when it's due to a file corruption. There is no need to wait on retries
// to restore this shard on another node if the snapshot files are corrupt. In cases where a node just left or crashed,
// however, we only want to acknowledge the restore operation once it has been successfully restored on another node.
if (unassignedInfo.getFailure() != null && Lucene.isCorruptionException(unassignedInfo.getFailure().getCause())) {
changes(snapshot).shards.put(failedShard.shardId(), new ShardRestoreStatus(failedShard.currentNodeId(),
changes(((SnapshotRecoverySource) recoverySource).restoreUUID()).shards.put(
@ywelsch (Contributor)

It's easier to change the changes() method to directly take a RecoverySource, assert that it's a SnapshotRecoverySource, and then extract the restore uuid inside the changes method.
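A rough sketch of that suggestion. The types here are simplified stand-ins (the real RecoverySource and SnapshotRecoverySource live in Elasticsearch; everything below is illustrative, not the PR's code):

```java
import java.util.HashMap;
import java.util.Map;

public class ChangesBySource {
    interface RecoverySource {}
    record SnapshotRecoverySource(String restoreUUID) implements RecoverySource {}
    static final class Updates { final Map<String, String> shards = new HashMap<>(); }

    private final Map<String, Updates> shardChanges = new HashMap<>();

    // Callers pass the RecoverySource; the uuid extraction lives in one place.
    Updates changes(RecoverySource recoverySource) {
        assert recoverySource instanceof SnapshotRecoverySource : "not a snapshot recovery source";
        String uuid = ((SnapshotRecoverySource) recoverySource).restoreUUID();
        return shardChanges.computeIfAbsent(uuid, k -> new Updates());
    }

    public static void main(String[] args) {
        ChangesBySource tracker = new ChangesBySource();
        RecoverySource source = new SnapshotRecoverySource("uuid-1");
        tracker.changes(source).shards.put("shard-0", "FAILURE");
        // Repeated calls with the same source hit the same Updates bucket.
        System.out.println(tracker.changes(source).shards.size()); // 1
    }
}
```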

@@ -617,7 +626,7 @@ public RestoreInProgress applyChanges(final RestoreInProgress oldRestore) {
for (RestoreInProgress.Entry entry : oldRestore.entries()) {
Snapshot snapshot = entry.snapshot();
Updates updates = shardChanges.get(snapshot);
if (updates.shards.isEmpty() == false) {
if (updates != null && updates.shards.isEmpty() == false) {
ImmutableOpenMap.Builder<ShardId, ShardRestoreStatus> shardsBuilder = ImmutableOpenMap.builder(entry.shards());
for (Map.Entry<ShardId, ShardRestoreStatus> shard : updates.shards.entrySet()) {
shardsBuilder.put(shard.getKey(), shard.getValue());
@ywelsch (Contributor)

I meant that if the respective ShardRestoreStatus is already completed, there is no need to update it here.
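The reviewer's point, sketched with plain maps as stand-ins for ImmutableOpenMap (hypothetical names, not the PR's code): a ShardRestoreStatus that is already completed must not be overwritten by a later update.

```java
import java.util.HashMap;
import java.util.Map;

public class ApplyShardChanges {
    enum State { INIT, STARTED, SUCCESS, FAILURE;
        boolean completed() { return this == SUCCESS || this == FAILURE; }
    }

    static Map<String, State> applyUpdates(Map<String, State> existing, Map<String, State> updates) {
        Map<String, State> merged = new HashMap<>(existing);
        for (Map.Entry<String, State> update : updates.entrySet()) {
            State previous = merged.get(update.getKey());
            // Only apply the update if the previous status is absent or not yet completed.
            if (previous == null || previous.completed() == false) {
                merged.put(update.getKey(), update.getValue());
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, State> existing = Map.of("shard-0", State.SUCCESS, "shard-1", State.STARTED);
        Map<String, State> updates = Map.of("shard-0", State.FAILURE, "shard-1", State.SUCCESS);
        Map<String, State> merged = applyUpdates(existing, updates);
        System.out.println(merged.get("shard-0")); // SUCCESS: completed status untouched
        System.out.println(merged.get("shard-1")); // SUCCESS: in-flight status updated
    }
}
```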

for (RestoreInProgress.Entry entry : restoreInProgress.entries()) {
if (entry.state().completed()) {
assert completed(entry.shards()) : "state says completed but restore entries are not";
for (ObjectObjectCursor<String, RestoreInProgress.Entry> entry : restoreInProgress.entries()) {
@ywelsch (Contributor)

Can we iterate purely over the values here? We don't need the key. It might also be convenient if RestoreInProgress implemented Iterable&lt;RestoreInProgress.Entry&gt;.
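What the Iterable suggestion might look like, sketched with a LinkedHashMap standing in for ImmutableOpenMap (illustrative only): call sites can then loop over entries directly without touching the keys or using a cursor.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class RestoreInProgressSketch implements Iterable<RestoreInProgressSketch.Entry> {
    record Entry(String uuid, String snapshot) {}

    private final Map<String, Entry> entries = new LinkedHashMap<>();

    RestoreInProgressSketch add(Entry entry) {
        entries.put(entry.uuid(), entry);
        return this;
    }

    Entry get(String uuid) { return entries.get(uuid); }

    boolean isEmpty() { return entries.isEmpty(); }

    @Override
    public Iterator<Entry> iterator() {
        // Value-only iteration; callers never see the uuid keys.
        return entries.values().iterator();
    }

    public static void main(String[] args) {
        RestoreInProgressSketch inProgress = new RestoreInProgressSketch()
            .add(new Entry("uuid-1", "snap-1"))
            .add(new Entry("uuid-2", "snap-2"));
        for (Entry entry : inProgress) { // no key or cursor needed
            System.out.println(entry.snapshot());
        }
    }
}
```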

return allocation.decision(Decision.YES, NAME, "shard is currently being restored");
}
break;
RestoreInProgress.Entry restoreInProgress = restoresInProgress.entries().get(source.restoreUUID());
@ywelsch (Contributor)

Your call. With my suggestion to make RestoreInProgress an Iterable&lt;RestoreInProgress.Entry&gt; for easier iteration, we only need a getter and an isEmpty method.

@ywelsch (Contributor) left a comment

LGTM

@original-brownbear (Member Author)

@ywelsch thanks!

@ywelsch (Contributor) commented Dec 14, 2018

This needs to go into 6.6 as well.

@original-brownbear (Member Author)

Ah ok :) on it.

original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Dec 14, 2018
* Enable parallel restore operations
* Add uuid to restore in progress entries to uniquely identify them
* Adjust restore in progress entries to be a map in cluster state
* Added tests for:
   * Parallel restore from two different snapshots
   * Parallel restore from a single snapshot to different indices to test uuid identifiers are correctly used by `RestoreService` and routing allocator
   * Parallel restore with waiting for completion to test transport actions correctly use uuid identifiers
original-brownbear added a commit that referenced this pull request Dec 17, 2018
* Enable parallel restore operations
* Add uuid to restore in progress entries to uniquely identify them
* Adjust restore in progress entries to be a map in cluster state
* Added tests for:
   * Parallel restore from two different snapshots
   * Parallel restore from a single snapshot to different indices to test uuid identifiers are correctly used by `RestoreService` and routing allocator
   * Parallel restore with waiting for completion to test transport actions correctly use uuid identifiers
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Dec 17, 2018
* Re-enables bwc tests with adjusted version conditions now that elastic#36397 enables concurrent snapshots in 6.6+
original-brownbear added a commit that referenced this pull request Dec 17, 2018
* Re-enables bwc tests with adjusted version conditions now that #36397 enables concurrent snapshots in 6.6+
davidkyle pushed a commit that referenced this pull request Dec 18, 2018
* Re-enables bwc tests with adjusted version conditions now that #36397 enables concurrent snapshots in 6.6+
davidkyle added a commit that referenced this pull request Dec 18, 2018
* [ML] Job and datafeed mappings with index template (#32719)

Index mappings for the configuration documents

* [ML] Job config document CRUD operations (#32738)

* [ML] Datafeed config CRUD operations (#32854)

* [ML] Change JobManager to work with Job config in index  (#33064)

* [ML] Change Datafeed actions to read config from the config index (#33273)

* [ML] Allocate jobs based on JobParams rather than cluster state config (#33994)

* [ML] Return missing job error when .ml-config does not exist (#34177)

* [ML] Close job in index (#34217)

* [ML] Adjust finalize job action to work with documents (#34226)

* [ML] Job in index: Datafeed node selector (#34218)

* [ML] Job in Index: Stop and preview datafeed (#34605)

* [ML] Delete job document (#34595)

* [ML] Convert job data remover to work with index configs (#34532)

* [ML] Job in index: Get datafeed and job stats from index (#34645)

* [ML] Job in Index: Convert get calendar events to index docs (#34710)

* [ML] Job in index: delete filter action (#34642)

This changes the delete filter action to search
for jobs using the filter to be deleted in the index
rather than the cluster state.

* [ML] Job in Index: Enable integ tests (#34851)

Enables the ml integration tests excluding the rolling upgrade tests and a lot of fixes to
make the tests pass again.

* [ML] Reimplement established model memory (#35500)

This is the 7.0 implementation of a master node service to
keep track of the native process memory requirement of each ML
job with an associated native process.

The new ML memory tracker service works when the whole cluster
is upgraded to at least version 6.6. For mixed version clusters
the old mechanism of established model memory stored on the job
in cluster state was used. This means that the old (and complex)
code to keep established model memory up to date on the job object
has been removed in 7.0.

Forward port of #35263

* [ML] Need to wait for shards to replicate in distributed test (#35541)

Because the cluster was expanded from 1 node to 3, indices would
initially start off with 0 replicas. If the original node was
killed before auto-expansion to 1 replica was complete, then
the test would fail because the indices would be unavailable.

* [ML] DelayedDataCheckConfig index mappings (#35646)

* [ML] JIndex: Restore finalize job action (#35939)

* [ML] Replace Version.CURRENT in streaming functions (#36118)

* [ML] Use 'anomaly-detector' in job config doc name (#36254)

* [ML] Job In Index: Migrate config from the clusterstate (#35834)

Migrate ML configuration from clusterstate to index for closed jobs
only once all nodes are v6.6.0 or higher

* [ML] Check groups against job Ids on update (#36317)

* [ML] Adapt to periodic persistent task refresh (#36633)

* [ML] Adapt to periodic persistent task refresh

If https://github.com/elastic/elasticsearch/pull/36069/files is
merged then the approach for reallocating ML persistent tasks
after refreshing job memory requirements can be simplified.
This change begins the simplification process.

* Remove AwaitsFix and implement TODO

* [ML] Default search size for configs

* Fix TooManyJobsIT.testMultipleNodes

Two problems:

1. Stack overflow during async iteration when lots of
   jobs on same machine
2. Not effectively setting search size in all cases

* Use execute() instead of submit() in MlMemoryTracker

We don't need a Future to wait for completion

* [ML][TEST] Fix NPE in JobManagerTests

* [ML] JIndex: Limit the size of bulk migrations (#36481)

* [ML] Prevent updates and upgrade tests (#36649)

* [FEATURE][ML] Add cluster setting that enables/disables config  migration (#36700)

This commit adds a cluster setting called `xpack.ml.enable_config_migration`.
The setting is `true` by default. When set to `false`, no config migration will
be attempted and non-migrated resources (e.g. jobs, datafeeds) will be able
to be updated normally.

Relates #32905

* [ML] Snapshot ml configs before migrating (#36645)

* [FEATURE][ML] Split in batches and migrate all jobs and datafeeds (#36716)

Relates #32905

* SQL: Fix translation of LIKE/RLIKE keywords (#36672)

* SQL: Fix translation of LIKE/RLIKE keywords

Refactor Like/RLike functions to simplify internals and improve query
 translation when chained or within a script context.

Fix #36039
Fix #36584

* Fixing line length for EnvironmentTests and RecoveryTests (#36657)

Relates #34884

* Add back one line removed by mistake regarding java version check and
COMPAT jvm parameter existence

* Do not resolve addresses in remote connection info (#36671)

The remote connection info API leads to resolving addresses of seed
nodes when invoked. This is problematic because if a hostname fails to
resolve, we would not display any remote connection info. Yet, a
hostname not resolving can happen across remote clusters, especially in
the modern world of cloud services with dynamically changing
IPs. Instead, the remote connection info API should be providing the
configured seed nodes. This commit changes the remote connection info to
display the configured seed nodes, avoiding a hostname resolution. Note
that care was taken to preserve backwards compatibility with previous
versions that expect the remote connection info to serialize a transport
address instead of a string representing the hostname.

* [Painless] Add boxed type to boxed type casts for method/return (#36571)

This adds implicit boxed type to boxed types casts for non-def types to create asymmetric casting relative to the def type when calling methods or returning values. This means that a user calling a method taking an Integer can call it with a Byte, Short, etc. legally which matches the way def works. This creates consistency in the casting model that did not previously exist.

* SNAPSHOTS: Adjust BwC Versions in Restore Logic (#36718)

* Re-enables bwc tests with adjusted version conditions now that #36397 enables concurrent snapshots in 6.6+

* ingest: fix on_failure with Drop processor (#36686)

This commit allows a document to be dropped when a Drop processor
is used in the on_failure fork of the processor chain.

Fixes #36151

* Initialize startup `CcrRepositories` (#36730)

Currently, the CcrRepositoryManger only listens for settings updates
and installs new repositories. It does not install the repositories that
are in the initial settings. This commit, modifies the manager to
install the initial repositories. Additionally, it modifies the ccr
integration test to configure the remote leader node at startup, instead
of using a settings update.

* [TEST] fix float comparison in RandomObjects#getExpectedParsedValue

This commit fixes a test bug introduced with #36597. This caused some
test failure as stored field values comparisons would not work when CBOR
xcontent type was used.

Closes #29080

* [Geo] Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach (#35320)

This commit exposes lucene's LatLonShape field as the
default type in GeoShapeFieldMapper. To use the new 
indexing approach, simply set "type" : "geo_shape" in 
the mappings without setting any of the strategy, precision, 
tree_levels, or distance_error_pct parameters. Note the 
following when using the new indexing approach:

* geo_shape query does not support querying by 
MULTIPOINT.
* LINESTRING and MULTILINESTRING queries do not 
yet support WITHIN relation.
* CONTAINS relation is not yet supported.
The tree, precision, tree_levels, distance_error_pct, 
and points_only parameters are deprecated.

* TESTS:Debug Log. IndexStatsIT#testFilterCacheStats

* ingest: support default pipelines + bulk upserts (#36618)

This commit adds support to enable bulk upserts to use an index's
default pipeline. Bulk upsert, doc_as_upsert, and script_as_upsert
are all supported.

However, bulk script_as_upsert has slightly surprising behavior since
the pipeline is executed _before_ the script is evaluated. This means
that the pipeline only has access to the data found in the upsert field
of the script_as_upsert. The non-bulk script_as_upsert (existing behavior)
runs the pipeline _after_ the script is executed. This commit
does _not_ attempt to consolidate the bulk and non-bulk behavior for
script_as_upsert.

This commit also adds additional testing for the non-bulk behavior,
which remains unchanged with this commit.

fixes #36219

* Fix duplicate phrase in shrink/split error message (#36734)

This commit removes a duplicate "must be a" from the shrink/split error
messages.

* Deprecate types in get_source and exist_source (#36426)

This change adds a new untyped endpoint `{index}/_source/{id}` for both the
GET and the HEAD methods to get the source of a document or check for its
existence. It also adds deprecation warnings to RestGetSourceAction that emit
a warning when the old deprecated "type" parameter is still used. Also updating
documentation and tests where appropriate.

Relates to #35190

* Revert "[Geo] Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach (#35320)"

This reverts commit 5bc7822.

* Enhance Invalidate Token API (#35388)

This change:

- Adds functionality to invalidate all (refresh+access) tokens for all users of a realm
- Adds functionality to invalidate all (refresh+access)tokens for a user in all realms
- Adds functionality to invalidate all (refresh+access) tokens for a user in a specific realm
- Changes the response format for the invalidate token API to contain information about the 
   number of the invalidated tokens and possible errors that were encountered.
- Updates the API Documentation

After back-porting to 6.x, the `created` field will be removed from master as a field in the 
response

Resolves: #35115
Relates: #34556

* Add raw sort values to SearchSortValues transport serialization (#36617)

In order for CCS alternate execution mode (see #32125) to be able to do the final reduction step on the CCS coordinating node, we need to serialize additional info in the transport layer as part of each `SearchHit`. Sort values are already present but they are formatted according to the provided `DocValueFormat` provided. The CCS node needs to be able to reconstruct the lucene `FieldDoc` to include in the `TopFieldDocs` and `CollapseTopFieldDocs` which will feed the `mergeTopDocs` method used to reduce multiple search responses (one per cluster) into one.

This commit adds such information to the `SearchSortValues` and exposes it through a new getter method added to `SearchHit` for retrieval. This info is only serialized at transport and never printed out at REST.

* Watcher: Ensure all internal search requests count hits (#36697)

In previous commits only the stored toXContent version of a search
request was using the old format. However an executed search request was
already disabling hit counts. In 7.0 hit counts will stay enabled by
default to allow for proper migration.

Closes #36177

* [TEST] Ensure shard follow tasks have really stopped.

Relates to #36696

* Ensure MapperService#getAllMetaFields elements order is deterministic (#36739)

MapperService#getAllMetaFields returns an array, which is created out of
an `ObjectHashSet`. Such set does not guarantee deterministic hash
ordering. The array returned by its toArray may be sorted differently
at each run. This caused some repeatability issues in our tests (see #29080)
as we pick random fields from the array of possible metadata fields,
but that won't be repeatable if the input array is sorted differently at
every run. Once setting the tests seed, hppc picks that up and the sorting is
deterministic, but failures don't repeat with the seed that gets printed out
originally (as a seed was not originally set).
See also https://issues.carrot2.org/projects/HPPC/issues/HPPC-173.

With this commit, we simply create a static sorted array that is used for
`getAllMetaFields`. The change is in production code but really affects
only testing as the only production usage of this method was to iterate
through all values when parsing fields in the high-level REST client code.
Anyways, this seems like a good change as returning an array would imply
that it's deterministically sorted.

* Expose Sequence Number based Optimistic Concurrency Control in the rest layer (#36721)

Relates #36148 
Relates #10708

* [ML] Mute MlDistributedFailureIT
Labels: :Distributed/Snapshot/Restore, >enhancement, v6.6.0, v7.0.0-beta1

4 participants